AI News, Deep Learning for Disaster Recovery

Deep Learning for Disaster Recovery

With global climate change, devastating hurricanes are occurring with higher frequency.

Using state-of-the-art deep learning methods for computer vision, the system automatically annotates flooded, washed-out, or otherwise severely damaged roads in satellite imagery.

Then I used my model to compare road segmentation on pre-flood satellite imagery against post-flood satellite imagery to detect road changes or anomalies.

Algorithm overview: the pre-flood road mask (PreM) is differenced against the post-flood road mask (PostM). Finally, to generate the annotated tile overlay, the non-zero-valued pixels from the last step are shaded opaque red while the zero-valued pixels are made fully transparent, so the layer can be rendered on top of any base map for presentation.
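As a concrete illustration of that overlay step, here is a minimal Python sketch, assuming the change mask from the previous step is available as a 2-D NumPy array per map tile; the function name and tile path are illustrative, not from the original project:

```python
# Minimal sketch of the overlay step: non-zero pixels become opaque red,
# zero pixels stay fully transparent so the base map shows through.
import numpy as np
from PIL import Image

def mask_to_overlay(change_mask: np.ndarray) -> Image.Image:
    h, w = change_mask.shape
    rgba = np.zeros((h, w, 4), dtype=np.uint8)   # fully transparent by default
    damaged = change_mask != 0
    rgba[damaged] = (255, 0, 0, 255)             # opaque red where roads changed
    return Image.fromarray(rgba, mode="RGBA")

# Example: save one annotated tile as a PNG ready to be stacked on a base map.
# tile = mask_to_overlay(post_mask_minus_pre_mask)
# tile.save("tiles/14/4823/6160.png")
```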

I aimed to train a single robust model that can take full advantage of high-resolution satellite imagery while still performing reasonably well on low-resolution imagery.

Since the network essentially performs binary classification (road vs. non-road) for each pixel of the input image, a sigmoid activation is used on the final output layer of the U-Net.
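As a sketch of what that output head might look like, assuming a Keras implementation of the U-Net whose decoder ends in a feature map x; the names are illustrative, not the author's code:

```python
# A 1x1 convolution collapses the feature maps to a single channel, and the
# sigmoid squashes each pixel to a road probability in [0, 1].
from tensorflow.keras import layers, Model

def binary_segmentation_head(inputs, x):
    road_probs = layers.Conv2D(1, kernel_size=1, activation="sigmoid",
                               name="road_mask")(x)
    return Model(inputs=inputs, outputs=road_probs)
```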

Adam was used as the optimizer, initialized with a learning rate of 0.0001; the learning rate is automatically reduced by a factor of ten when the cost function on the validation set fails to decrease for two consecutive epochs.
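In Keras this schedule is typically implemented with a ReduceLROnPlateau callback alongside Adam rather than by the optimizer itself; a hedged sketch using the factor and patience described above:

```python
# Adam plus a learning-rate schedule that divides the rate by ten after two
# epochs without improvement of the validation loss.
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.callbacks import ReduceLROnPlateau

optimizer = Adam(learning_rate=1e-4)
reduce_lr = ReduceLROnPlateau(
    monitor="val_loss",  # validation cost function
    factor=0.1,          # reduce the learning rate by a factor of ten...
    patience=2,          # ...after two epochs without a decrease
)

# model.compile(optimizer=optimizer, loss="binary_crossentropy")
# model.fit(train_ds, validation_data=val_ds, epochs=50, callbacks=[reduce_lr])
```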

The training images were taken under essentially ideal atmospheric conditions (cloud-free and blur-free), whereas the post-hurricane images are slightly blurry and noisy.

In addition, to increase the size of our training data set, each satellite/street map pair is randomly flipped and rotated at right angles to generate 7 additional variants of the image.
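A minimal sketch of how those 8 variants (the D4 dihedral group: identity, the three right-angle rotations, and their mirrors) could be generated with NumPy, applied identically to the satellite image and its street-map mask:

```python
# Yield the original image/mask pair plus 7 flipped/rotated variants.
import numpy as np

def d4_variants(image: np.ndarray, mask: np.ndarray):
    for flip in (False, True):
        img = np.fliplr(image) if flip else image
        msk = np.fliplr(mask) if flip else mask
        for k in range(4):                     # 0, 90, 180, 270 degrees
            yield np.rot90(img, k), np.rot90(msk, k)
```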

Please note that the best model on hurricane images (second to last row) does not obtain the best Dice score on the validation set, since we only have clean satellite images available for validation.

Lastly, in order to serve the results to my website, static annotated map tiles in PNG format are published to Amazon S3 and served from CloudFront CDN to reduce network latency and increase scalability.
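A hedged sketch of that publishing step with boto3, assuming AWS credentials are already configured; the bucket name and tile key layout are illustrative, not from the source:

```python
# Upload one annotated PNG tile so CloudFront can serve it as a static asset.
import boto3

s3 = boto3.client("s3")

def publish_tile(local_path: str, zoom: int, x: int, y: int,
                 bucket: str = "my-disaster-tiles") -> None:
    key = f"tiles/{zoom}/{x}/{y}.png"
    s3.upload_file(
        local_path, bucket, key,
        ExtraArgs={"ContentType": "image/png", "CacheControl": "max-age=86400"},
    )
```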

Deep learning for satellite imagery via image segmentation

In the recent Kaggle competition Dstl Satellite Imagery Feature Detection our deepsense.ai team won 4th place among 419 teams.

The distribution of classes was uneven: from very common, such as crops (28% of the total area) and trees (10%), to much smaller such as roads (0.8%) or vehicles (0.02%).

Our fully convolutional model was inspired by the family of U-Net architectures, where low-level feature maps are combined with higher-level ones, which enables precise localization.

Firstly, we can allow the network to lose some information after the downsampling layer because the model has access to low level features in the upsampling path.

Secondly, in satellite images there is no concept of depth or high-level 3D objects to understand, so a large number of feature maps in higher layers may not be critical for good performance.

We developed separate models for each class, because it was easier to fine tune them individually for better performance and to overcome imbalanced data problems.

Depending on the class, we left the preprocessed images unchanged or resized them, together with the corresponding label masks, to 1024 x 1024 or 2048 x 2048 squares.

During training we collected batches of cropped 256 x 256 patches from different images, where half of the patches in every batch always contained some positive pixels (objects of the target classes).
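A sketch of such a sampler, assuming the images and masks fit in memory; the guarantee that half of every batch contains positive pixels is enforced here by resampling crops until one is found:

```python
# Sample a batch of 256x256 crops; the first half must contain positive pixels.
import numpy as np

def sample_batch(images, masks, batch_size=32, patch=256, rng=np.random):
    xs, ys = [], []
    while len(xs) < batch_size:
        need_positive = len(xs) < batch_size // 2
        i = rng.randint(len(images))
        img, msk = images[i], masks[i]
        top = rng.randint(img.shape[0] - patch + 1)
        left = rng.randint(img.shape[1] - patch + 1)
        m = msk[top:top + patch, left:left + patch]
        if need_positive and m.sum() == 0:
            continue                       # resample until the crop has objects
        xs.append(img[top:top + patch, left:left + patch])
        ys.append(m)
    return np.stack(xs), np.stack(ys)
```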

To further improve prediction quality we averaged results for flipped and rotated versions of the input image, as well as for models trained on different scales.
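A minimal sketch of that test-time averaging over the flipped and rotated variants, assuming a predict_fn that maps one image to one probability mask (averaging over models trained at different scales is omitted):

```python
# Predict on the 8 D4 variants of a tile, undo each transform, average the maps.
import numpy as np

def predict_tta(predict_fn, image: np.ndarray) -> np.ndarray:
    preds = []
    for flip in (False, True):
        img = np.fliplr(image) if flip else image
        for k in range(4):
            p = predict_fn(np.rot90(img, k))
            p = np.rot90(p, -k)                         # undo the rotation
            preds.append(np.fliplr(p) if flip else p)   # undo the flip
    return np.mean(preds, axis=0)
```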

For these seven classes we were able to train convolutional networks (separately for each class) with binary cross-entropy loss, as described above, on 20-channel inputs at two different scales (1024 and 2048), with satisfactory results.

The solution for the waterway class was a combination of linear regression and random forest, trained on per pixel data from 20 input channels.
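A hedged sketch in that spirit with scikit-learn; the blend weight, threshold, and estimator settings are illustrative assumptions, not taken from the winning solution:

```python
# Per-pixel waterway model: linear regression and random forest on the 20
# input channels, blended and thresholded into a binary mask.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

def fit_waterway_models(pixels: np.ndarray, labels: np.ndarray):
    """pixels: (n_pixels, 20) channel values; labels: (n_pixels,) waterway mask."""
    lin = LinearRegression().fit(pixels, labels)
    forest = RandomForestRegressor(n_estimators=100, n_jobs=-1).fit(pixels, labels)
    return lin, forest

def predict_waterway(lin, forest, pixels: np.ndarray, weight: float = 0.5):
    blended = weight * lin.predict(pixels) + (1 - weight) * forest.predict(pixels)
    return (blended > 0.5).astype(np.uint8)
```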

We observed high variation of the results on the local validation and public leaderboard due to the small number of vehicles in the training set.

To combat this, we trained models separately for large and small vehicles, as well as a single model for both of them (with the label masks added together), on 20-channel inputs.

Satellite Image Segmentation: a Workflow with U-Net

Image segmentation is a topic of machine learning where one needs not only to categorize what is seen in an image, but also to do so at the per-pixel level.

The thing here is that although the competition winners shared some code to reproduce their winning submission exactly (released after I started working on my pipeline), it does not include much of what is required to arrive at that code in the first place if one wants to apply the pipeline to another problem or dataset.

Moreover, building neural networks is an iterative process where one needs to start somewhere and slowly increase a metric / evaluation score by modifying the neural network architecture and hyperparameters (configuration).

First off, here is a glance at the training data of the competition. The DSTL Satellite Imagery Feature Detection Challenge asks participants to code a model capable of making those predictions; the images just above, taken from the dataset, represent an (X, Y) pair example from the training data.

Whereas red, green and blue make up only 3 bands (RGB), the 20 bands contain much more information for the neural network to work with, which eases learning and improves prediction quality: the model is aware of visual features that humans do not naturally see.

U-Net is like a convolutional autoencoder, but it also has skip-like connections to the feature maps located before the bottleneck (compressed embedding) layer, so that in the decoder part some information comes from earlier layers, bypassing the compressive bottleneck.

See the figure below, taken from the official U-Net paper. In the decoder, data is thus not only recovered from a compression, but is also concatenated with the state of the information before it was passed into the compression bottleneck, so as to provide extra context for the decoding layers that follow.

That way, the neural network still learns to generalize in the compressed latent representation (located at the bottom of the "U" shape in the figure), but also recovers its latent generalizations into a spatial representation with the proper per-pixel semantic alignment in the right-hand part of the U-Net's "U".
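A minimal Keras sketch of one such decoder step, concatenating the upsampled features with the corresponding encoder feature map; tensor names and filter counts are illustrative:

```python
# One U-Net decoder block: upsample, concatenate the skip connection, convolve.
from tensorflow.keras import layers

def decoder_block(x, skip, filters):
    x = layers.Conv2DTranspose(filters, kernel_size=2, strides=2, padding="same")(x)
    x = layers.Concatenate()([x, skip])   # encoder features bypass the bottleneck
    x = layers.Conv2D(filters, kernel_size=3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, kernel_size=3, padding="same", activation="relu")(x)
    return x
```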

The 3rd place winners used the 4 possible 90-degree rotations, as well as mirrored versions of those rotations, which can increase the training data 8-fold: this data transformation belongs to the D4 dihedral group.

The four extra channels (from indexes) are shown below, along with the original image's human-visible RGB channels for comparison, as is the U-Net architecture from the 3rd place winners. The only open-source code we found online from winners was from the 3rd place winners.

I started working on the project before the winners' official code was publicly available, so I built my own development and production pipeline going in that direction, which proved useful not only for solving the problem, but also for coding a neural network that can eventually be transferred to other datasets.

The architecture I managed to develop was first derived from public open-source code released before the end of the competition: https://www.kaggle.com/ceperaang/lb-0-42-ultimate-full-solution-run-on-your-hw It seems that a lot of participants developed their architectures on top of precisely this shared code, which was both very helpful and acted as a bar raiser for participants of the competition trying to keep up on the leaderboard.

In the following image, the "U"-Net is flipped like a "⊂"-Net; it was generated automatically with Keras' visualization tool, which is why it looks skewed. Hyperopt is used here to automatically figure out the neural network architecture that best fits the data of the given problem, so this humongous neural network was grown automatically.

Running hyperopt takes time, but it proceeds to do something comparable to genetic algorithms performing breeding and natural selection, except that there is no breeding here: just a readjustment based on past trials to pick new trials in a way that balances exploration of new architectures against optimization near local maxima of performance.
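A hedged sketch of how such a search could be set up with hyperopt's TPE algorithm; the hyperparameter space and the build_and_train_unet objective are illustrative assumptions, not the author's actual search space:

```python
# Meta-optimize a few U-Net hyperparameters with hyperopt's TPE sampler.
from hyperopt import hp, fmin, tpe, Trials, STATUS_OK

space = {
    "depth": hp.choice("depth", [3, 4, 5]),              # number of U-Net levels
    "base_filters": hp.choice("base_filters", [16, 32, 64]),
    "dropout": hp.uniform("dropout", 0.0, 0.5),
    "learning_rate": hp.loguniform("learning_rate", -9, -5),
}

def objective(params):
    # build_and_train_unet is an assumed helper that trains briefly and returns
    # a validation score (e.g. Jaccard index) for the sampled architecture.
    val_score = build_and_train_unet(**params)
    return {"loss": -val_score, "status": STATUS_OK}     # hyperopt minimizes

trials = Trials()
best = fmin(objective, space, algo=tpe.suggest, max_evals=50, trials=trials)
```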

Once searched, a hyperparameter space can be refined to narrow down the range in which hyperparameters are tested, in case the meta-optimization has to be restarted; and if some ranges were already too narrow (e.g. the best points lie near a parameter's limit), they can still be widened.

A team workflow can be interesting here: beginners can learn to manipulate the data and launch the optimisation, then gradually start modifying the hyperparameter space and eventually add new parameters based on new research.

More details here: https://github.com/Vooban/Smoothly-Blend-Image-Patches. Running the improved 3rd place winners' code, it is possible to get a score worthy of 2nd position, because they coded their neural networks from scratch after the competition to make the code public and more usable.

I used the 3rd place winners’ post processing code, which rounds the prediction masks in a smoother way and which corrects a few bugs such as removing building predictions while water is also predicted for a given pixel, and on, but the neural network behind that is still quite custom.

On my side, I used one neural architecture optimized with hyperopt on all classes at once, then took this already-evolved architecture and trained it on one class at a time, so there is an imbalance in the bias/variance of the resulting networks, each of which is used on a single task rather than on all tasks at once as during meta-optimization.

The One Hundred Layers Tiramisu might not help for fitting the DSTL dataset, according to the 3rd place winner, but it at least has a lot of potential capacity for larger and richer datasets thanks to its use of the recently introduced densely connected convolutional blocks.

How to extract features from satellite images

Extraction of features from satellite images.

How to use machine learning to extract maps from satellite images?

Deep learning for satellite image segmentation with the goal of extracting building outlines.

Creating Realistic Splatmaps From Satellite Images for the Squad SDK

Maptitude 2017 Mapping Imagery, Aerials, Satellite, Photos, Topographic Maps

How to add images to a map and access online image resources.

How to create a 3D Terrain with Google Maps and height maps in Photoshop - 3D Map Generator Terrain

More info: This video shows you how to make a 3D map of almost any location in the world in less ..

Create a Google Earth flythrough from a Strava activity track

A brief description of how to create a bird's-eye "fly-through" from a Strava activity track downloaded as a .gpx file and opened in Google Earth. I am using Google ...

Build a TensorFlow Image Classifier in 5 Min

In this episode we're going to train our own image classifier to detect Darth Vader images. The code for this repository is here: ...

Scalable Feature Extraction with Aerial and Satellite Imagery | SciPy 2018 | Virginia Ng

In this talk we introduce use cases for feature extraction with aerial and satellite imagery such as turn lane marking detection, road and building footprint ...

HOW TO MAP IN OSM: Advanced training image alignment in JOSM

This short tutorial shows you how to fix the alignment of a satellite image in the JOSM editor of OpenStreetMap.

Basic Digitization in ArcGIS

The basics of digitization using the editor toolbar. In this tutorial I show you how to take an aerial photograph and create polygons and lines based on ...