AI News: Dealing With Bias in Artificial Intelligence

Is Artificial Intelligence Racial Bias Being Suppressed?

Artificial Intelligence (AI) and Machine Learning are used to power a variety of important modern software technologies.

Security teams at large facilities that rely on video surveillance – like schools and airports – can benefit greatly from this technology.

Some systems can identify guns while others can track each individual’s movements and provide a real-time update regarding their location with a single click.

In one case, AI-powered software scanned 45,000 photos of children living in orphanages and foster homes and matched 2,930 of them to photos in the government’s lost child database.

The technology is far from perfect, however. For instance, researchers from MIT’s Media Lab ran an experiment in which facial recognition software misidentified dark-skinned women as men up to 35% of the time.

Another concern is that risk assessment scores are used by authorities to inform decisions as a person moves through the criminal justice system.

However, an independent, nonprofit news organization called ProPublica studied the scores and found them to be remarkably unreliable in forecasting violent crime.

For two years in a row, Canadian immigration authorities denied visas to approximately two dozen AI academics hoping to attend a major conference on artificial intelligence.

It makes sense that a lack of color contrast would make it harder for computer algorithms to identify facial features in people with darker skin.

It’s also possible that the photos used to train AI systems include more light-skinned people and men than dark-skinned people and women.
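One way to make that kind of imbalance visible is to audit a trained model’s error rate separately for each demographic group, which is essentially what the MIT researchers did when they reported accuracy by skin type and gender. The sketch below is a minimal illustration in Python; the `predict_gender` function and the labeled evaluation set are hypothetical stand-ins, not part of any system discussed here.

```python
from collections import defaultdict

def audit_by_group(examples, predict_gender):
    """Compute a gender classifier's error rate for each skin-type group.

    `examples` is an iterable of (image, true_gender, skin_type) tuples and
    `predict_gender` is the model under test; both are hypothetical here.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for image, true_gender, skin_type in examples:
        totals[skin_type] += 1
        if predict_gender(image) != true_gender:
            errors[skin_type] += 1
    return {group: errors[group] / totals[group] for group in totals}

# A large gap between groups (for example, a much higher error rate for
# darker-skinned women than for lighter-skinned men) is the kind of
# disparity the MIT Media Lab experiment reported.
```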

While the issue appears straightforward, there’s one factor that some facial recognition critics aren’t accounting for: the racial and gender bias seems to arise in facial analysis, not facial recognition.

In response, Amazon disputed the results of MIT’s study, claiming researchers used “facial analysis” and not “facial recognition” to test for bias.

An Amazon spokesperson says it doesn’t make sense to use facial analysis to gauge the accuracy of facial recognition, and that’s a fair claim.

For instance, if a suspect is captured on video but can’t be seen clearly, has no previous arrests, and can’t be matched to a database, facial analysis may be used to help establish the suspect’s identity.

While the benefits of using facial recognition software are clear, it’s time for this technology to be regulated and for developers to be required to improve its accuracy before it’s deployed in high-stakes situations.

Addressing Bias in Artificial Intelligence in Health Care

Recent scrutiny of artificial intelligence (AI)–based facial recognition software has renewed concerns about the unintended effects of AI on social bias and inequity.

When the Framingham Risk Score was applied to populations with similar clinical characteristics, the predicted risk of a cardiovascular event was 20% lower for black individuals than for white individuals, indicating that the score may not adequately capture risk factors for some minority groups.[2]

Social bias in health care refers to inequity in care delivery that systematically leads to suboptimal outcomes for a particular group.

For example, clinicians may incorrectly discount the diagnosis of myocardial infarction in older women because these patients are more likely to present with atypical symptoms.[3] An AI algorithm that learns from historical electronic health record (EHR) data and existing practice patterns may not recommend testing for cardiac ischemia for an older woman, delaying potentially lifesaving treatment.

For example, among women with breast cancer, black women were less likely than white women to be tested for high-risk germline mutations, despite carrying a similar risk of such mutations.[5] Thus, an AI algorithm that depends on genetic test results is more likely to mischaracterize the risk of breast cancer for black patients than for white patients.

However, clinicians may have a propensity to trust suggestions from AI decision support systems, which summarize large numbers of inputs into automated real-time predictions, while inadvertently discounting relevant information from nonautomated systems (so-called automation complacency).[6] For example, an AI-based early warning system can interpret changes in continuously monitored vital signs to alert an intensivist to a patient’s impending clinical instability.

When applied to unstructured data from psychiatry notes, AI algorithms demonstrated greater documentation of anxiety and chronic pain topics for white patients and psychosis topics for black, Hispanic, and Asian patients.

Alerting clinicians to these disparities in documentation in real time could improve patient care by making implicit biases in their practice more salient.[7]

Second, because most AI bias is related to the data-generating process, the primary solution may be to preferentially use unbiased data sources.

Examples of relatively unbiased, uniform data sources include recorded vital sign data during surgical operations or triage data collected from the first hour after emergency department presentation, “upstream” of clinician judgments.

For instance, existing standards, including the PROBAST tool to assess risk of bias in prediction models, can aid algorithm developers in selecting representative training sets and appropriate predictor variables.[8]

In addition, algorithm predictions and subsequent actions could be tracked continuously to help ensure that outputs are not reinforcing existing social biases.
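Continuous tracking of this kind does not need to be elaborate: logging each prediction along with the patient’s group and the downstream action, then periodically comparing rates across groups, can surface a drift toward disparity. A minimal sketch, assuming hypothetical log records with `group`, `score`, and `action_taken` columns rather than any specific EHR schema:

```python
import pandas as pd

def disparity_report(log: pd.DataFrame) -> pd.DataFrame:
    """Summarize algorithm outputs and downstream actions by patient group.

    `log` is assumed to have columns 'group', 'score' (model output), and
    'action_taken' (bool). Widening gaps between groups over time are a
    signal to re-examine the model and the data feeding it.
    """
    return log.groupby("group").agg(
        n=("score", "size"),
        mean_score=("score", "mean"),
        action_rate=("action_taken", "mean"),
    )
```

Comparing such reports across reporting periods, rather than looking at a single snapshot, is what makes the check continuous.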

Algorithm developers could also use certain sensitivity checks, including creating simulated data sets with high numbers of omitted variables and conducting counterfactual simulations, to determine how robust predictions are to omitted-variable bias.
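One simple form of such a check is to flip or perturb a sensitive attribute in simulated data and measure how far the model’s predictions move; a large shift suggests the model is leaning on that attribute directly or through correlated omitted variables. The sketch below assumes a generic fitted binary classifier with a scikit-learn-style `predict_proba` method and a plain NumPy feature matrix; it illustrates the idea rather than any published procedure.

```python
import numpy as np

def counterfactual_shift(model, X, sensitive_col):
    """Mean change in predicted probability when a 0/1 sensitive attribute is flipped.

    `model` is any fitted binary classifier exposing predict_proba, `X` is a
    2-D NumPy feature array, and `sensitive_col` is the column index of a
    binary sensitive attribute. All names here are illustrative assumptions.
    """
    X_flipped = X.copy()
    X_flipped[:, sensitive_col] = 1 - X_flipped[:, sensitive_col]
    original = model.predict_proba(X)[:, 1]
    flipped = model.predict_proba(X_flipped)[:, 1]
    return float(np.mean(np.abs(original - flipped)))
```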

Can we protect AI from our biases? | Robin Hauser | TED Institute

As humans, we're inherently biased. Sometimes it's explicit and other times it's unconscious, but as we move forward with technology, how do we keep our biases ...

Assessing the Impact of Bias in Artificial Intelligence

Mar. 26 -- Microsoft Postdoctoral Researcher Timnit Gebru discusses the effects of bias in artificial intelligence. She speaks with Emily Chang on "Bloomberg ...

Bias in AI is a Problem

We think that machines can be objective because they don't worry about human emotion. Even though that's the case, AI (artificial intelligence) systems may ...

How to keep human bias out of AI | Kriti Sharma

AI algorithms make important decisions about you all the time -- like how much you should pay for car insurance or whether or not you get that job interview.

Biases are being baked into artificial intelligence

When it comes to decision making, it might seem that computers are less biased than humans. But algorithms can be just as biased as the people who create ...

Managing the risks of AI: Gender bias in AI

In this episode, Cathy Cobey and Dr. Cindy Gordon discuss the implications of the lack of diversity in those who program AI and the lack of diversity in the actual ...

Artificial Intelligence: banishing bias

Behavioural science distinguishes itself from traditional economics because human beings don't always act rationally, and can be subject to bias - we're not ...

KPMG 2019 Executive Symposium on AI: Ethical AI - trust, privacy, bias

This frank discussion on ethics related to AI and emerging technology use will make people think. The conversation between Todd Lohr, KPMG, and Max ...

Computing human bias with AI technology

Humans are biased, and our machines are learning from us — ergo our artificial intelligence and computer programming algorithms are biased too. Computer ...

AI: Training Data & Bias

The most important aspect of Machine Learning is what data is used to train it. Find out how training data affects a machine's predictions and why biased data ...