AI News, Microsoft Research Blog

Over the past decade, machine learning systems have begun to play a key role in many high-stakes decisions: Who is interviewed for a job?

However, news stories and numerous research studies have found that machine learning systems can inadvertently discriminate against minorities, historically disadvantaged populations and other groups.

In essence, this is because machine learning systems are trained to replicate the decisions present in their training data, and those decisions reflect society’s historical biases.

Our work, outlined in the paper “A Reductions Approach to Fair Classification,” presented this month at the 35th International Conference on Machine Learning (ICML 2018) in Stockholm, Sweden, addresses some of these challenges by providing a provably and empirically sound method for turning any common classifier into a “fair” classifier according to any of a wide range of fairness definitions.

To understand our method, consider the process of choosing applicants to interview for a job where it is desirable to have an interview pool that is balanced with respect to gender and race, a fairness definition known as demographic parity.
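
Concretely, demographic parity asks that the rate of positive predictions (here, who gets an interview) be the same across groups. As a minimal illustrative sketch, assuming NumPy and hypothetical prediction and group arrays, the violation can be measured as the largest gap in selection rates:

import numpy as np

def demographic_parity_gap(y_pred, group):
    # Largest difference in selection rate P(y_hat = 1 | group) across groups;
    # a gap of 0 means every group is selected at the same rate.
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Example: group 0 interviewed at 50%, group 1 at 25%
y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 0])
group = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))  # 0.25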

Our method can turn a classifier that predicts who should be interviewed based on previous (potentially biased) hiring decisions into a classifier that predicts who should be interviewed while also respecting demographic parity (or another fairness definition).

For instance, applicants of a certain gender or race might be upweighted or downweighted, so that the classification algorithm is better able to find a classification rule that is fair with respect to the desired gender or racial proportions.
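
To make that reweighting idea concrete, here is a minimal sketch in the style of classic reweighing; it is not the paper’s algorithm, and the arrays X, y, and group are hypothetical placeholders. Each (group, label) cell is weighted toward the size it would have if group and label were independent, so under-selected groups are upweighted and over-selected ones downweighted before fitting a standard scikit-learn classifier:

import numpy as np
from sklearn.linear_model import LogisticRegression

def parity_weights(y, group):
    # Weight each (group, label) cell by expected count / observed count,
    # so that under the weights the label rate is the same in every group.
    n = len(y)
    w = np.empty(n, dtype=float)
    for g in np.unique(group):
        for label in np.unique(y):
            cell = (group == g) & (y == label)
            expected = (group == g).sum() * (y == label).sum() / n
            w[cell] = expected / max(cell.sum(), 1)
    return w

# Hypothetical training data standing in for past (possibly biased) decisions
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
group = rng.integers(0, 2, size=500)  # protected attribute
y = ((X[:, 0] - 0.8 * group + rng.normal(size=500)) > 0).astype(int)

clf = LogisticRegression().fit(X, y, sample_weight=parity_weights(y, group))

The paper itself goes further: its exponentiated-gradient reduction solves a sequence of such cost-sensitive problems, adjusting the weights until the chosen fairness constraint is provably satisfied up to a small tolerance.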

Lecture 16 | Adversarial Examples and Adversarial Training

In Lecture 16, guest lecturer Ian Goodfellow discusses adversarial examples in deep learning. We discuss why deep networks and other machine learning ...

Fairness in Machine Learning

Machine learning is increasingly being adopted by various domains: governments, credit, recruiting, advertising, and many others. Fairness and equality are ...

Carlos Guestrin Interview - Explaining the Predictions of Machine Learning Models

My guest this time is Carlos Guestrin, the Amazon Professor of Machine Learning at the University of Washington. Carlos and I recorded this podcast at a ...

The Emerging Theory of Algorithmic Fairness

As algorithms reach ever more deeply into our daily lives, increasing concern that they be “fair” has resulted in an explosion of research in the theory and ...

NW-NLP 2018: Semantic Matching Against a Corpus

The fifth Pacific Northwest Regional Natural Language Processing Workshop will be held on Friday, April 27, 2018, in Redmond, WA. We accepted abstracts and ...

AI in the Administrative State | Introduction & Overview

Introductory Remarks Stuart Benjamin, Duke Law School, The Center for Innovation Policy at Duke Law Nita Farahany, Duke Law School, Duke Initiative for ...

TensorFlow Dev Summit 2018 - Livestream

TensorFlow Dev Summit 2018 All Sessions playlist → Live from Mountain View, CA! Join the TensorFlow team as they host the second ...

Computer Science Colloquium - October 19, 2017 - William Wang

Description: 50-minute talk at noon Thursdays.

OURSA Conference Recording

OUR Security Advocates (OURSA) highlights a diverse set of experts from across information security, safety, trust, and other related fields. The agenda is below and at ...