
Machine-learning makes poverty mapping as easy as night and day

“The system essentially learned how to solve the problem by comparing those two sets of images.” Burke, Ermon and fellow team members David Lobell, an associate professor of Earth system science, undergraduate computer science researcher Michael Xie and electrical engineering PhD student Neal Jean detailed their approach in a paper for the proceedings of the 30th AAAI Conference on Artificial Intelligence.  Their basic technique – directing a model to compare images to predict a specific value – is a variant of machine learning known as transfer learning.
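The transfer-learning setup described here can be sketched as a toy two-stage model: a network is first trained on an abundant proxy label (nightlight intensity), and its learned representation is then reused to predict the scarce survey outcome. Everything below is synthetic and illustrative, using a small scikit-learn network in place of the paper's convolutional model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-ins: 500 locations, 20 daytime-image features each.
day_features = rng.normal(size=(500, 20))
# Nightlight intensity depends on a few of those features (the proxy task).
nightlights = day_features[:, :5].sum(axis=1) + 0.1 * rng.normal(size=500)
# Consumption (the scarce survey label) shares that underlying structure.
consumption = day_features[:, :5].sum(axis=1) * 2.0 + 5.0

# Step 1: learn to predict nightlights from daytime features (labels are
# plentiful, since nightlight data cover the whole map).
proxy_model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
proxy_model.fit(day_features, nightlights)

# Step 2: reuse the learned hidden representation (ReLU of the first layer)
# as features for the scarce task, fit on the few "surveyed" locations.
hidden = np.maximum(day_features @ proxy_model.coefs_[0] + proxy_model.intercepts_[0], 0)
surveyed = slice(0, 100)  # pretend only 100 locations were surveyed
poverty_model = Ridge().fit(hidden[surveyed], consumption[surveyed])

r2 = poverty_model.score(hidden[100:], consumption[100:])
print(f"held-out r2 using transferred features: {r2:.2f}")
```

The point of the sketch is the division of labor: the proxy task supplies the representation, and the survey data only has to fit a small model on top of it.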

The system did this time and again, making day-to-night comparisons and predictions and constantly reconciling its machine-devised analytical constructs with details it gleaned from the data. “As the model learns, it picks up whatever it associates with increasing light in the nighttime images, compares that to daytime images of the same area, correlates its observations with data obtained from known field-surveyed areas and makes a judgment,” Lobell said.

“We can’t say with certainty what associations it is making, or precisely why or how it is making them.” Ultimately, the researchers believe, this model could supplant the expensive and time-consuming ground surveys currently used for poverty mapping. “This offers an unbelievable opportunity for cheap, scalable and surprisingly accurate measurement of poverty,” Burke said.

More imagery, acquired on a more consistent basis, would be needed to give their system the raw material to take the next step and predict whether locales are inching toward prosperity or getting further bogged down in misery.

“I don’t think it will be too long before we’re able to do cheap, scalable, highly accurate mapping in time as well as space.” Even as they consider what they might be able to do with more abundant satellite imagery, the Stanford researchers are contemplating what they could do with different raw data – say, mobile phone activity.

Stanford is Using Machine Learning on Satellite Images to Predict Poverty

However, the process of going around rural areas and manually tracking census data is time-consuming, labor-intensive and expensive.

Considering that, a group of researchers at Stanford has pioneered an approach that combines machine learning with satellite images to make predicting poverty quicker, easier and less expensive.

Using this machine learning algorithm, the model can predict the per capita consumption expenditure of a particular location when provided with its satellite images.

Before making its predictions, the algorithm cross-checks its results with actual survey data in order to improve its accuracy.

Stanford claims that its model predicts poverty almost as well as manually collected data, which makes it a feasible option for survey administrators.
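Cross-checking a model against held-out survey data can be sketched as follows; the features and consumption values here are synthetic stand-ins, not the actual survey data.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)

# Hypothetical image-derived features and survey-measured consumption.
features = rng.normal(size=(300, 10))
consumption = features @ rng.normal(size=10) + 0.2 * rng.normal(size=300)

# Hold out part of the survey data to cross-check the model's predictions.
X_train, X_test, y_train, y_test = train_test_split(
    features, consumption, test_size=0.3, random_state=1)

model = Ridge().fit(X_train, y_train)
r2 = r2_score(y_test, model.predict(X_test))
print(f"agreement with held-out survey data (r2): {r2:.2f}")
```

A high r2 on the held-out portion is what licenses the claim that the model tracks what the surveys actually measure.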

Combining satellite imagery and machine learning to predict poverty

Our transfer learning model is strongly predictive of both average household consumption expenditure and asset wealth as measured at the cluster level across multiple African countries.

Cross-validated predictions based on models trained separately for each country explain 37 to 55% of the variation in average household consumption across four countries for which recent survey data are available (Fig.

Models trained on pooled consumption or asset observations across all countries (hereafter “pooled model”) perform similarly, with cross-validated predictions explaining 44 to 59% of the overall variation in these outcomes (fig.
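The per-country versus pooled comparison can be sketched with synthetic data; the country names are from the study, but the r2 values below come from toy data and are not the paper's numbers.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(2)

countries = ["Nigeria", "Tanzania", "Uganda", "Malawi"]
data = {}
for i, country in enumerate(countries):
    X = rng.normal(size=(200, 10))
    # Shared structure across countries plus a country-specific offset.
    y = X[:, :3].sum(axis=1) + i + 0.5 * rng.normal(size=200)
    data[country] = (X, y)

# Country-specific models: cross-validated r2 within each country.
within = {}
for country, (X, y) in data.items():
    scores = cross_val_score(Ridge(), X, y, cv=10, scoring="r2")
    within[country] = scores.mean()
    print(f"{country}: mean cross-validated r2 = {within[country]:.2f}")

# Pooled model: stack all observations and cross-validate with shuffled folds.
X_all = np.vstack([X for X, _ in data.values()])
y_all = np.concatenate([y for _, y in data.values()])
cv = KFold(n_splits=10, shuffle=True, random_state=0)
pooled = cross_val_score(Ridge(), X_all, y_all, cv=cv, scoring="r2").mean()
print(f"pooled model: mean cross-validated r2 = {pooled:.2f}")
```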

This high overall predictive power is achieved despite a lack of temporal labels for the daytime imagery (i.e., the exact date of each image is unknown), as well as imperfect knowledge of the location of the clusters, as up to 10 km of random noise was added to cluster coordinates by the data collection agencies to protect the privacy of survey respondents.
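The privacy jitter described above (up to 10 km of random displacement) might look roughly like this; the function name and the example coordinates are illustrative, not the data collection agencies' actual procedure.

```python
import numpy as np

rng = np.random.default_rng(3)

def jitter_coordinates(lat, lon, max_km=10.0, rng=rng):
    """Displace a point by a random bearing and distance of up to max_km,
    roughly mimicking the privacy noise added to survey cluster locations."""
    distance_km = max_km * np.sqrt(rng.uniform())  # sqrt => uniform over the disk
    bearing = rng.uniform(0, 2 * np.pi)
    dlat = (distance_km * np.cos(bearing)) / 111.0  # ~111 km per degree latitude
    dlon = (distance_km * np.sin(bearing)) / (111.0 * np.cos(np.radians(lat)))
    return lat + dlat, lon + dlon

# Example: jitter a cluster near Kampala, Uganda (coordinates illustrative).
lat, lon = jitter_coordinates(0.35, 32.58)
print(f"jittered location: ({lat:.3f}, {lon:.3f})")
```

From the model's point of view, this jitter is label noise: the image patch it sees may be centered up to 10 km away from where the households were actually surveyed.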

We find that differences in the outcome being measured, rather than differences in survey design or direct identification of key assets in daytime imagery, likely explain these performance differences (see supplementary materials 2.1 and fig.

Finally, asset-estimation performance of our model in Rwanda surpasses performance in a recent study using cell phone data to estimate identical outcomes (11) (cluster-level r2 = 0.62 in that study, and r2 = 0.75 in our study;

To test whether our transfer learning model improves upon the direct use of nightlights to estimate livelihoods, we ran 100 trials of 10-fold cross-validation separately for each country and for the pooled model, each time comparing the predictive power of our transfer learning model to that of nightlights alone.
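A minimal version of this comparison, repeated cross-validation of a richer feature set against nightlights alone, can be sketched on synthetic data (the feature construction below is illustrative):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(4)

n = 300
# Synthetic setup: consumption depends on several image-derived features,
# of which nightlight intensity captures only one dimension.
image_features = rng.normal(size=(n, 8))
nightlights = image_features[:, [0]]  # nightlights proxy = first feature only
consumption = image_features[:, :4].sum(axis=1) + 0.5 * rng.normal(size=n)

def mean_cv_r2(X, y, trials=100):
    """Average r2 over repeated 10-fold cross-validation, as in the paper's
    100-trial protocol."""
    scores = []
    for t in range(trials):
        cv = KFold(n_splits=10, shuffle=True, random_state=t)
        scores.append(cross_val_score(Ridge(), X, y, cv=cv, scoring="r2").mean())
    return float(np.mean(scores))

r2_transfer = mean_cv_r2(image_features, consumption)
r2_lights = mean_cv_r2(nightlights, consumption)
print(f"transfer features: {r2_transfer:.2f}, nightlights alone: {r2_lights:.2f}")
```

Repeating the cross-validation many times averages out the luck of any single fold assignment, so the comparison between the two feature sets is stable.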

Fig. 4C (caption): comparison of the r2 of models trained on correctly assigned images in each country (vertical lines) to the distribution of r2 values obtained from trials in which the model was trained on randomly shuffled images (1000 trials per country).

As shown in Fig. 4, C and D, the r2 values obtained using “correct” daytime imagery are much higher than any of the r2 values obtained from the reshuffled images, for both consumption and assets, indicating that our model’s level of predictive performance is unlikely to have arisen by chance.
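This shuffle test is a standard permutation test: retrain on randomly permuted image-label pairings many times and compare the resulting r2 distribution to the r2 from the correct pairing. The data below are synthetic, and fewer trials are used than the paper's 1000.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

X = rng.normal(size=(200, 10))
y = X[:, :3].sum(axis=1) + 0.3 * rng.normal(size=200)

# r2 with correctly assigned images (features matched to their true labels).
r2_correct = cross_val_score(Ridge(), X, y, cv=10, scoring="r2").mean()

# Null distribution: retrain after randomly shuffling the image-label pairing.
null_r2 = []
for _ in range(200):  # the paper used 1000 trials; fewer here for speed
    y_shuffled = rng.permutation(y)
    null_r2.append(cross_val_score(Ridge(), X, y_shuffled, cv=10, scoring="r2").mean())

print(f"correct pairing r2: {r2_correct:.2f}")
print(f"max r2 over shuffled trials: {max(null_r2):.2f}")
```

If the correct-pairing r2 exceeds every value in the null distribution, the empirical p-value is below 1/trials, which is the sense in which the performance "is unlikely to have arisen by chance."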

Examining whether a particular model generalizes across borders is useful for understanding whether accurate predictions can be made from imagery alone in areas with no survey data—an important practical concern given the paucity of existing survey data in many African countries (see Fig.

These results indicate that, at least for our sample of countries, common determinants of livelihoods are revealed in imagery, and these commonalities can be leveraged to estimate consumption and asset outcomes with reasonable accuracy in countries where survey outcomes are unobserved.
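Out-of-country generalization is typically checked with a leave-one-country-out evaluation, which might be sketched like this on synthetic data (the shared structure across countries is built in by construction, so the high r2 values are illustrative only):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(6)

countries = ["Nigeria", "Tanzania", "Uganda", "Malawi", "Rwanda"]
data = {}
for country in countries:
    X = rng.normal(size=(150, 10))
    # Common determinants of livelihoods shared across borders.
    y = X[:, :3].sum(axis=1) + 0.4 * rng.normal(size=150)
    data[country] = (X, y)

# Leave-one-country-out: train on four countries, predict the fifth.
out_of_country = {}
for held_out in countries:
    X_train = np.vstack([X for c, (X, _) in data.items() if c != held_out])
    y_train = np.concatenate([y for c, (_, y) in data.items() if c != held_out])
    X_test, y_test = data[held_out]
    model = Ridge().fit(X_train, y_train)
    out_of_country[held_out] = r2_score(y_test, model.predict(X_test))
    print(f"trained without {held_out}: r2 = {out_of_country[held_out]:.2f}")
```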

The best way to predict poverty is by combining satellite images with machine learning

To create the model, researchers fed three inputs into a computer: nighttime luminosity data, daytime high-resolution imagery, and actual survey data measuring two factors: consumption expenditure, which measures household spending, and asset wealth, which includes cars, TVs and other goods owned. The researchers chose five countries from the continent—Nigeria, Tanzania, Uganda, Malawi, and Rwanda—which had recently collected high-quality household data.

Second, the model ingests the actual survey data and checks it against the correlations, learning, for example, that areas with more cars are also more likely to rank higher on household spending in the surveys.
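That kind of learned association, visible assets correlating with surveyed spending, can be illustrated with synthetic data (all values below are made up):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical data: cars counted per area in daytime imagery, plus
# survey-measured household spending for the same areas. Both are driven
# by an unobserved underlying wealth level.
wealth = rng.uniform(size=400)
cars_per_area = rng.poisson(10 * wealth)
household_spending = 100 * wealth + 10 * rng.normal(size=400)

# The correlation the model can latch onto during training.
corr = np.corrcoef(cars_per_area, household_spending)[0, 1]
print(f"correlation between visible cars and surveyed spending: {corr:.2f}")
```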

With more satellite imagery expected to be made publicly available soon, from the European Space Agency and other organizations, modified models can ingest new data and make predictions across time and regions.

But to make predictions about the future, they’d need a bunch of snapshots to see how things have changed over time. In the foreseeable future, the team hopes to “make maps that can update all the time,” Burke says, as more hi-res imagery is made available and patterns emerge over the years.

The new way of tracking poverty can replace door-to-door household surveys, which are typically expensive and “institutionally difficult, as some governments see little benefit in having their lackluster performance documented,” the researchers write.


Fighting Poverty With Satellite Images and Machine-Learning Wizardry

Researchers have recently tried to estimate poverty levels by analyzing mobile phone usage data and satellite photos showing nighttime lighting. But mobile phone data are typically not publicly available.

Jean, earth system science professor Marshall Burke, and their colleagues came up with a clever machine-learning method that combines nighttime light intensity data with daytime satellite imagery.

In machine learning, a computer model is fed labeled data sets—say, thousands of images labeled “dog” or “cat.” Much like humans learn by inference after seeing enough examples, the model analyzes certain features in the images and figures out how to classify an animal in a picture as a dog or cat.
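A minimal version of the labeled-data setup described above, with feature vectors standing in for images, might look like this (the class structure and names are illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(8)

# Toy stand-in for labeled images: each "image" is a feature vector, and
# the two classes ("cat" = 0, "dog" = 1) differ in their feature means.
n = 200
features = np.vstack([
    rng.normal(loc=-1.0, size=(n, 5)),  # cats
    rng.normal(loc=+1.0, size=(n, 5)),  # dogs
])
labels = np.array([0] * n + [1] * n)

clf = LogisticRegression().fit(features, labels)

# A new feature vector near the "dog" mean should be classified as a dog.
new_image = np.full((1, 5), 0.9)
print("predicted class:", "dog" if clf.predict(new_image)[0] == 1 else "cat")
```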

This second model learns to estimate a village’s relative level of poverty—measured by consumption expenditures in 2011 U.S. dollars and an asset-based wealth index. “So you can take an image of any area and predict how poor that area is,” he says.

The new model more accurately estimated poverty levels than models that used only nighttime light data in areas where the average income was half or even one-third of the poverty level.

The team is now trying to use images with different resolutions, which yield different information—say, building density at low-res or roofing material at high-res—to see how having that information affects the accuracy of poverty estimates.

Combining satellite imagery and machine learning to predict poverty

The elimination of poverty worldwide is the first of 17 UN Sustainable Development Goals for the year 2030. To track progress towards this goal, we need more ...


Neal Jean, Michael Xie, Stefano Ermon, Matt Davis, Marshall Burke, and David Lobell, “Combining satellite imagery and machine learning to predict poverty” ...

Stefano Ermon: Satellite images can pinpoint poverty better than surveys

One of the biggest challenges in poverty mitigation is that we don't have good poverty data. Stefano Ermon, assistant professor of computer science and a fellow ...

Stefano Ermon, “Deep Learning for Spatial Predictions with Applications in Poverty & Agriculture”
