AI News

A Guide to Solving Social Problems with Machine Learning

The Netflix recommendation algorithm predicts what movie you’d like by mining data on millions of previous movie-watchers using sophisticated machine learning tools.

And your social service system implements a reactive rather than preventive approach to homelessness because its staff don’t believe it’s possible to forecast which families will wind up on the streets.

This mix of enthusiasm and trepidation over the potential social impact of machine learning is not unique to local government or even to government: non-profits and social entrepreneurs share it as well.

We have learned that some of the most important challenges fall in the cracks between the discipline that builds algorithms (computer science) and the disciplines that typically work on solving policy problems (such as economics and statistics).

Our estimates show that if we made pre-trial release decisions using our algorithm’s predictions of risk instead of relying on judges’ intuition, we could reduce crimes committed by released defendants by up to 25% without having to jail any additional people.

Compared to investing millions (or billions) of dollars into more social programs or police, the cost of statistically analyzing administrative datasets that already exist is next-to-nothing.

The part that’s much more difficult, and the reason we struggled with our own bail project for several years, is accurately evaluating the potential impact of any new algorithm on policy outcomes.

We hope the rest of this article, which draws on our own experience applying machine learning to policy problems, will help you better evaluate these sales pitches and make you a critical buyer as well.

Look for policy problems that hinge on prediction

Our bail experience suggests that thoughtful application of machine learning to policy can create very large gains.

Netflix mines data on large numbers of users to try to figure out which people have prior viewing histories that are similar to yours, and then it recommends to you movies that these people have liked.

For our application to pre-trial bail decisions, the algorithm tries to find past defendants who are like the one currently in court, and then uses the crime rates of these similar defendants as the basis for its prediction.
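To make that idea concrete, here is a minimal sketch of a similarity-based risk score in Python with scikit-learn. The data, the two features (age and number of prior arrests), and the choice of k are hypothetical illustrations of the “find similar past cases and use their outcome rates” logic, not the authors’ actual model.

```python
# Minimal sketch of "find similar past defendants and use their outcome rates".
# All data and feature names (age, prior arrests) are hypothetical.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical history: features of past released defendants and whether
# they were re-arrested while on release (1) or not (0).
X_past = rng.integers(low=[18, 0], high=[70, 15], size=(1000, 2)).astype(float)
y_past = rng.integers(0, 2, size=1000)

# "Similarity" here is plain Euclidean distance over the two features.
model = KNeighborsClassifier(n_neighbors=50)
model.fit(X_past, y_past)

# The predicted risk for a new defendant is simply the re-arrest rate
# among the 50 most similar past defendants.
new_defendant = np.array([[22.0, 3.0]])
risk = model.predict_proba(new_defendant)[0, 1]
print(f"Predicted re-arrest risk: {risk:.2f}")
```

Richer models are common in practice, but the underlying logic, pooling outcomes from similar past cases, is the same.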

Decades of behavioral economics and social psychology teach us that people will have trouble making accurate predictions about this risk – because it requires things we’re not always good at, like thinking probabilistically, making attributions, and drawing inferences.

For example, we might reasonably be uncomfortable denying welfare to someone who was eligible at the time they applied just because we predict they are highly likely to fail to abide by the program’s job-search requirements or to fail a drug test in the future.

Make sure you’re comfortable with the outcome you’re predicting

Algorithms are most helpful when applied to problems where there is not only a large history of past cases to learn from but also a clear outcome that can be measured, since measuring the outcome concretely is a necessary prerequisite to predicting.

In bail, it turns out that different forms of crime are correlated enough so that an algorithm trained on just one type of crime winds up out-predicting judges on almost every measure of criminality we could construct, including violent crime.
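As a rough illustration of why that can happen, the sketch below trains a model on one outcome (any re-arrest) and checks how well its scores rank defendants on a second, correlated outcome (violent re-arrest). The data are synthetic and the column names hypothetical; it illustrates the correlation argument, not the authors’ analysis.

```python
# Train on one outcome (any re-arrest), evaluate ranking on another
# (violent re-arrest). Synthetic data; illustrates the correlation point only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=(n, 5))                    # hypothetical defendant features
latent = X @ rng.normal(size=5)                # shared underlying risk factor
y_any = (latent + rng.normal(size=n) > 0.0).astype(int)      # any re-arrest
y_violent = (latent + rng.normal(size=n) > 1.5).astype(int)  # violent re-arrest (rarer)

X_tr, X_te, y_any_tr, _, _, y_violent_te = train_test_split(
    X, y_any, y_violent, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_any_tr)
scores = model.predict_proba(X_te)[:, 1]

# Because the two outcomes share risk factors, scores trained only on
# "any re-arrest" can still rank defendants well on violent re-arrest.
print("AUC on violent re-arrest:", round(roc_auc_score(y_violent_te, scores), 3))
```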

The lesson here is that if the ultimate outcome you care about is hard to measure, or involves a hard-to-define combination of outcomes, then the problem is probably not a good fit for machine learning.

We intentionally focused our work on bail rather than sentencing because it represents a point in the criminal justice system where the law explicitly asks narrowly for a prediction.

Even if there is a measurable single outcome, you’ll want to think about the other important factors that aren’t encapsulated in that outcome – like we did with race in the case of bail – and work with your data scientists to create a plan to test your algorithm for potential bias along those dimensions.
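One simple version of such a test is to compare the tool’s error rates across groups on held-out labeled data. The sketch below does this for false positive rates using synthetic data, a hypothetical group column, and an arbitrary risk threshold; it is one check among many, not a complete bias audit.

```python
# Sketch of one bias check: compare false positive rates across groups
# on held-out labeled data. Data, columns, and threshold are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 4000
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n),    # hypothetical protected attribute
    "re_offended": rng.integers(0, 2, size=n),  # held-out true outcome
    "predicted_risk": rng.random(n),            # tool's risk score
})
df["flagged_high_risk"] = df["predicted_risk"] > 0.5

# False positive rate by group: among people who did NOT re-offend,
# what share did the tool flag as high risk?
fpr_by_group = (
    df[df["re_offended"] == 0]
    .groupby("group")["flagged_high_risk"]
    .mean()
)
print(fpr_by_group)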

In the bail application this means our algorithm can only use data on those defendants who were released by the judges, because we only have a label providing the correct answer (whether the defendant went on to commit a crime or not) for the defendants the judges chose to release.

This makes it hard to evaluate whether any new machine learning tool can actually improve outcomes relative to the existing decision-making system, in this case judges.

If some new machine learning-based release rule wants to release someone the judges jailed, we can’t observe their “label”, so how do we know what would happen if we actually released them?
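The sketch below shows the structure of this missing-label problem on a toy table: the outcome column is defined only for released defendants, so both training and any naive evaluation are confined to that subset. The column names and data are made up.

```python
# Toy illustration of the missing-label problem: outcomes are only observed
# for defendants the judges released. Columns and data are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 10
df = pd.DataFrame({
    "defendant_id": range(n),
    "judge_released": rng.integers(0, 2, size=n).astype(bool),
})
# The label (re-arrest while on release) exists only for released defendants;
# for jailed defendants it is structurally missing, not merely unrecorded.
df["re_arrested"] = np.where(df["judge_released"],
                             rng.integers(0, 2, size=n), np.nan)

# Any training set, and any naive evaluation, is restricted to released cases.
labeled = df[df["judge_released"]]
print(df)
print(f"{len(labeled)} of {n} cases have labels we can learn from.")
```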

To take a simplified, extreme example, suppose judges see extra information that the algorithm does not, and a judge uses it so accurately that she can perfectly predict whether young defendants will re-offend or not.

Then we could directly compare whether bail decisions made using machine learning lead to better outcomes than those made on comparable cases using the current system of judicial decision-making.

If we took the caseload of an 80% release rate judge and used our algorithm to pick an additional 10% of defendants to jail, would we be able to achieve a lower crime rate than what the 70% release rate judge gets?

That “human versus machine” comparison doesn’t get tripped up by missing labels for defendants the judges jailed but the algorithm wants to release, because we are only asking the algorithm to recommend additional detentions (not releases).
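A sketch of that comparison on synthetic data is below: start from a lenient judge’s released caseload (where outcomes are observed), let the algorithm pick an extra slice of the full caseload to detain, and compare the implied crime rate with a stricter judge’s observed rate. All numbers are invented for illustration; this is not the authors’ evaluation code.

```python
# Sketch of the "jail more, never release more" comparison described above.
# Synthetic numbers only; not the authors' evaluation code.
import numpy as np

rng = np.random.default_rng(4)
caseload = 1000
n_released = int(0.8 * caseload)        # lenient judge releases 80%

# Observed outcomes for the lenient judge's releases (True = crime on release),
# plus the algorithm's noisy but informative risk scores for those defendants.
crime = rng.random(n_released) < 0.15
risk_scores = crime * 0.5 + rng.random(n_released)

# Algorithm detains an additional 10% of the full caseload (100 defendants),
# chosen as the highest-risk among those the lenient judge released.
extra = np.argsort(risk_scores)[-caseload // 10:]
still_released = np.delete(crime, extra)

# Crime rate under "lenient judge + algorithm" versus a stricter judge who
# releases 70% of a comparable caseload (benchmark rate is hypothetical).
print(f"Lenient judge + algorithm: {still_released.mean():.3f}")
print(f"Stricter (70%) judge:      {0.140:.3f}")
```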

It can be misguided, and sometimes outright harmful, to adopt and scale up new predictive tools when they’ve only been evaluated on cases from historical data with labels, rather than evaluated based on their effect on the key policy decision of interest.

Remember there’s still a lot we don’t know

While machine learning is now widely used in commercial applications, using these tools to solve policy problems is relatively new.

Instead, she wants these warnings to help people skip ahead a few steps and follow a safer path: to focus on inventions that make cars less dangerous, to build cities that allow for easy public transport, and to focus on low emissions vehicles.

Human Decisions and Machine Predictions

For instance, judges may care about racial inequities or about specific crimes (such as violent crimes) rather than just overall crime risk.

Even accounting for these concerns, our results suggest potentially large welfare gains: a policy simulation shows crime can be reduced by up to 24.8% with no change in jailing rates, or jail populations can be reduced by 42.0% with no increase in crime rates.
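The sketch below shows the general shape of such a policy simulation on synthetic data: hold the number of detentions fixed, choose who to detain by predicted risk rather than a stylized status-quo rule, and compare implied crime. It deliberately sidesteps the missing-label problem discussed above (outcomes here are known by construction) and is not the paper’s actual simulation.

```python
# Stylized policy simulation: same number of detentions, but chosen by
# predicted risk. Synthetic data; outcomes are known by construction here,
# which sidesteps the missing-label problem a real evaluation must confront.
import numpy as np

rng = np.random.default_rng(5)
n = 10_000
true_risk = rng.beta(2, 8, size=n)                    # crime probability if released
would_offend = rng.random(n) < true_risk              # realized outcome if released
predicted_risk = true_risk + rng.normal(0, 0.05, n)   # imperfect but informative prediction

n_detained = 3_000                                    # detention count held fixed

# Stylized status quo: detentions uncorrelated with risk.
status_quo = rng.choice(n, size=n_detained, replace=False)
crime_status_quo = would_offend.sum() - would_offend[status_quo].sum()

# Algorithmic rule: detain the highest predicted-risk defendants.
algo = np.argsort(predicted_risk)[-n_detained:]
crime_algo = would_offend.sum() - would_offend[algo].sum()

print(f"Crime under status quo:   {crime_status_quo}")
print(f"Crime under risk ranking: {crime_algo}")
print(f"Relative reduction:       {1 - crime_algo / crime_status_quo:.1%}")
```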

In addition, by focusing the algorithm on predicting judges’ decisions, rather than defendant behavior, we gain some insight into decision-making: a key problem appears to be that judges respond to ‘noise’ as if it were signal.

These results suggest that while machine learning can be valuable, realizing this value requires integrating these tools into an economic framework: being clear about the link between predictions and decisions; specifying the scope of payoff functions; and constructing unbiased decision counterfactuals.

Man vs Machine Learning: Criminal Justice in the 21st Century | Jens Ludwig | TEDxPennsylvaniaAvenue

At any point in time, America has over 700,000 people in jail, drawn disproportionately from low-income and minority groups. We require judges to make ...

Jens Ludwig: "Machine Learning in the Criminal Justice System" | Talks at Google

Jens Ludwig, Director of the University of Chicago Crime Lab, talks about applying machine learning to reducing crime in Chicago and other public policy areas.

Machine intelligence makes human morals more important | Zeynep Tufekci

Machine intelligence is here, and we're already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand ...

Ethical Algorithms Panel - Aaron Swartz Day 2017

Panelists: 1) Chelsea Manning 2) Kristian Lum 3) Caroline Sinders. Chelsea Manning is a network security expert and former U.S. Army intelligence analyst.

Sara Wachter-Boettcher: "Technically Wrong: Sexist Apps, Biased Algorithms [...]" | Talks at Google

A revealing look at how tech industry bias and blind spots get baked into digital products—and harm us all. Many of the services we rely on are full of oversights, ...

Cathy O'Neil: "Weapons of Math Destruction" | Talks at Google

Cathy O'Neil is a data scientist and author of the blog mathbabe.org. She earned a Ph.D. in mathematics from Harvard and taught at Barnard College before ...

Dr. Lisa Feldman Barrett: "Can Machines Perceive Emotion?" | Talks at Google

Many tech companies are trying to build machines that detect people's emotions, using techniques from artificial intelligence. Some companies claim to have ...

Why we need to imagine different futures | Anab Jain

Anab Jain brings the future to life, creating experiences where people can touch, see and feel the potential of the world we're creating. Do we want a world ...

Inherent Trade-Offs in Algorithmic Fairness

Recent discussion in the public sphere about classification by algorithms has involved tension between competing notions of what it means for such a ...

MLTalks: Jill Lepore in conversation with Andrew Lippman

How Data Killed Facts: Essayist and Harvard professor Jill Lepore in conversation with Andy Lippman. A wide-ranging discussion about facts, truth, media, and ...