AI News: What Do We Do About the Biases in AI?

Dealing With Bias in Artificial Intelligence

When people talk about bias in AI, they could mean bias in the sense of racial bias or gender bias.

Another notion of bias, one that is highly relevant to my work, concerns cases in which an algorithm latches onto something meaningless and could potentially give you very poor results.

For example, a model could learn to predict fractures pretty well on the data set it was given simply by recognizing which hospital did the scan, without ever actually looking at the bone.

So, if your machine-learning algorithm is trained on data from a given set of hospitals, and you will only ever use it in that same set of hospitals, then latching onto which hospital did the scan could well be a reasonable approach.
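One simple way to check for this kind of shortcut is to ask how well the label can be predicted from metadata alone. The sketch below is a minimal illustration under assumptions not taken from the interview: the data is synthetic and the hospital_id column is hypothetical. If hospital identity by itself predicts fractures well above the base rate, the model may never need to look at the bone.

```python
# A minimal leakage check, assuming a tabular data set with a
# hypothetical hospital_id column and synthetic fracture labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic data: fracture rates differ sharply across three hospitals.
hospital_id = rng.integers(0, 3, size=n)
base_rates = np.array([0.7, 0.4, 0.1])          # per-hospital fracture rates
y = rng.random(n) < base_rates[hospital_id]     # fracture labels

# One-hot encode hospital identity and fit a classifier on it alone.
X_meta = np.eye(3)[hospital_id]
acc = cross_val_score(LogisticRegression(), X_meta, y, cv=5).mean()

# Accuracy well above the majority-class baseline flags a confound.
print(f"accuracy from hospital identity alone: {acc:.2f}")
print(f"majority-class baseline: {max(y.mean(), 1 - y.mean()):.2f}")
```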

Who Are The Lawyers Who Understand AI Algorithms?

It’s easy to say, “Oh no, that Artificial Intelligence algorithm is racist, sexist, or even ageist.” It’s easy to point fingers and fire accusations at our machine counterparts.

An article published by the New Scientist identified five biases inherent in existing AI systems that can potentially impact people’s lives in very real ways.

The best-known scandal is the one exposing COMPAS, an algorithm used in the US to guide sentencing by predicting the likelihood of criminal reoffending.

When politicians are trying to come up with rules and regulations, what we need is a handful of lawyers and investigators who work with the technologists to understand the entire landscape of innovation.

They don’t need to understand the minute intricacies, but they need enough of an understanding to process the main issues and the bigger picture.

Due to the impact of innovation in the Western world, lawyers have to learn to deal with complexities beyond the contracts and words that describe the algorithms.

They have to truly grasp the basic concepts of what the algorithms are trying to accomplish: their original intention, their true intention, and their effects.

Multiple organizations are involved: from the people who develop the algorithms, to the people who develop the software and applications that use the algorithms, to the businesses that deploy and use the software for business purposes.

Projects such as Google’s AI for Social Good, and conferences such as the AI Now Symposium, which attempt to bring together technologists, companies, social scientists, governments, lawyers, and regulators, are just examples of how the age of AI needs cooperation.

The media might do a really good job of identifying issues, raising them to gain awareness, and influencing people to think about them, but it takes the cooperation of larger organizations, ones that can bring together technologists, companies, and governments, to address the deeper issues that arise.

The repercussion of not addressing these issues deeply and collectively, within industries, with governments, and with cooperation from technologists, is that the problems will be amplified unnecessarily when they impact whole groups of people.

As AI rules and regulations are refined, lawyers will step in to take center stage, helping to make sense of these grey areas.

How does artificial intelligence work, and what do people mean when they say 'AI?'

If you spend any modicum of your time on the internet, or even dabbling in the tech world, you've no doubt seen mentions of "AI."

In the 1950s, Marvin Minsky, who built one of the first neural network simulators, and John McCarthy, who coined the term, described artificial intelligence as tasks performed by a program or a machine such that you couldn't reasonably determine whether a human had performed them instead.

Put another way, AI is a new way to write software, so to speak, as well as a way of automating tasks that humans could do, in a manner that makes it hard to tell whether a human or a computer performed them.

The bottom line in determining whether you're actually dealing with AI in an app or an online service, according to Kishore Rajgopal, founder and CEO of adaptive pricing software firm NextOrbit, is "the fact that your decisions and your actions change with time, and it [the AI] learns."

Rajgopal notes that AI already touches our lives on a daily basis, for example, "from automated fraud detection on your credit cards, to air conditioning in large buildings, to airplane scheduling, to the sale prices of gold in your Clash of Clans game."
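Rajgopal's test, that the system's decisions change over time as it learns, can be illustrated with a minimal online-learning sketch. The data stream, features, and probe input below are all invented for illustration:

```python
# A minimal illustration of "decisions change with time, and it learns":
# an online classifier whose prediction for the same fixed input shifts
# as it sees more data. All data here is synthetic.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(4)
model = SGDClassifier(loss="log_loss", random_state=0)

probe = np.array([[0.5, 0.5]])                  # a fixed input we re-check
for step in range(5):
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)     # the rule being learned
    model.partial_fit(X, y, classes=[0, 1])
    print(f"step {step}: P(class 1 | probe) = {model.predict_proba(probe)[0, 1]:.2f}")
```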

"Right now it’s chat agents, but that will be voice and then fully generated video soon — it will get harder and harder to tell the real from the virtual, and it may not matter!"

With machines and programs that are literally created to learn, adapt, and continually improve themselves over time, there are obvious concerns that arise.

Imagine a residential loan approval algorithm that’s been trained on 60 years of data in the Deep South — what are the chances it might incorporate some racial bias?

We've already seen the pitfalls of using AI for tasks as simple as camera framing: a smart camera from Facebook's AR/VR team that was meant to focus on a female person of color telling a story instead focused on her "white, male colleague."

AI and machine learning need the same ability to let us look inside before we can trust them to make decisions with human lives on the line, or even just decisions that can be affected by biased data.

The bottom line is that even if we manage to create super-intelligent software that performs at the same capacity as humans, there's still the very real possibility that these machines will end up with flaws and biases eerily similar to our own.

What do we do about the biases in AI?

Human biases are well-documented, from implicit association tests that demonstrate biases we may not even be aware of, to field experiments that demonstrate how much these biases can affect outcomes.

At a time when many companies are looking to deploy AI systems across their operations, being acutely aware of those risks and working to reduce them is an urgent priority.

AI can help identify and reduce the impact of human biases, but it can also make the problem worse by baking in and deploying biases at scale in sensitive application areas.

AI systems learn to make decisions based on training data, which can include biased human decisions or reflect historical or social inequities, even if sensitive variables such as gender, race, or sexual orientation are removed.
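A minimal sketch of why removing the sensitive column is not enough: if any remaining feature is strongly correlated with the dropped attribute, a model can still recover it. The data below is entirely synthetic, and the zip_code proxy is a hypothetical example, not a detail from the article:

```python
# Sketch: dropping a sensitive column does not remove its signal when a
# correlated proxy (here, a hypothetical zip_code feature) remains.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 5000

group = rng.integers(0, 2, size=n)                # sensitive attribute
zip_code = group * 10 + rng.integers(0, 3, n)     # proxy strongly tied to group
income = rng.normal(50 + 5 * group, 10, n)        # mildly tied to group

# "Deidentified" feature matrix: the group column itself is removed.
X = np.column_stack([zip_code, income])

# The sensitive attribute is still recoverable from what remains.
acc = cross_val_score(LogisticRegression(max_iter=1000), X, group, cv=5).mean()
print(f"group recovered from 'deidentified' features: {acc:.2f}")
```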

For example, Joy Buolamwini at MIT working with Timnit Gebru found that facial analysis technologies had higher error rates for minorities and particularly minority women, potentially due to unrepresentative training data.

Business and organizational leaders need to ensure that the AI systems they use improve on human decision-making, and they have a responsibility to encourage progress on research and standards that will reduce bias in AI.

It can also be easier to probe algorithms for bias, potentially revealing human biases that had gone unnoticed or unproven (inscrutable though deep learning models may be, a human brain is the ultimate “black box”).

Researchers have developed technical ways of defining fairness, such as requiring that models have equal predictive value across groups or requiring that models have equal false positive and false negative rates across groups.
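As a concrete illustration of those two definitions, here is a small sketch that computes per-group predictive value and per-group false positive and false negative rates for a binary classifier. The labels, predictions, and group memberships are made up:

```python
# Per-group fairness checks: predictive value (PPV) and
# false positive / false negative rates for each group.
import numpy as np

def group_rates(y_true, y_pred, group):
    """Per-group PPV, false positive rate, and false negative rate."""
    out = {}
    for g in np.unique(group):
        m = group == g
        t, p = y_true[m].astype(bool), y_pred[m].astype(bool)
        ppv = (t & p).sum() / max(p.sum(), 1)      # TP / predicted positives
        fpr = (~t & p).sum() / max((~t).sum(), 1)  # FP / actual negatives
        fnr = (t & ~p).sum() / max(t.sum(), 1)     # FN / actual positives
        out[g] = dict(ppv=ppv, fpr=fpr, fnr=fnr)
    return out

# Toy usage with made-up labels, predictions, and groups.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(group_rates(y_true, y_pred, group))
```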

Still, even as fairness definitions and metrics evolve, researchers have also made progress on a wide variety of techniques that ensure AI systems can meet them, by processing data beforehand, altering the system’s decisions afterwards, or incorporating fairness definitions into the training process itself.
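Here is a sketch of the "altering the system's decisions afterwards" family, under simplifying assumptions (synthetic scores and a single fairness target, both invented for illustration): pick a separate decision threshold per group so that false positive rates roughly match.

```python
# Post-processing sketch: per-group thresholds chosen so that group
# false positive rates roughly match a common target.
import numpy as np

def threshold_for_fpr(scores, y_true, group, g, target_fpr):
    """Score cutoff giving group g a false positive rate near target_fpr."""
    neg = np.sort(scores[(group == g) & ~y_true])  # group g's actual negatives
    k = int(np.ceil((1 - target_fpr) * len(neg))) - 1
    return neg[min(max(k, 0), len(neg) - 1)]

rng = np.random.default_rng(2)
n = 4000
group = rng.integers(0, 2, n)
y_true = rng.random(n) < 0.5
# Hypothetical model scores, shifted upward for group 1, which would
# inflate group 1's false positive rate under one global threshold.
scores = rng.normal(y_true.astype(float) + 0.3 * group, 0.8)

for g in (0, 1):
    t = threshold_for_fpr(scores, y_true, group, g, target_fpr=0.10)
    neg = (group == g) & ~y_true
    print(f"group {g}: threshold {t:.2f}, FPR {(scores[neg] > t).mean():.2f}")
```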

Silvia Chiappa of DeepMind has even developed a path-specific approach to counterfactual fairness that can handle complicated cases in which some paths by which the sensitive traits affect outcomes are considered fair, while other influences are considered unfair.

For example, the model could be used to help ensure that admission to a specific department at a university was unaffected by the applicant’s sex while potentially still allowing the university’s overall admission rate to vary by sex if, say, female students tended to apply to more competitive departments.
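To make the university example concrete, here is a toy simulation (not Chiappa's actual method; all the numbers are invented) in which sex influences only department choice, and admission within a department ignores sex, yet overall admission rates still differ by sex:

```python
# Toy illustration of the path-specific idea: sex -> department choice is
# treated as a fair path, and admission depends only on the department,
# yet overall admission rates still differ by sex. Synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

sex = rng.integers(0, 2, n)                           # hypothetical groups 0, 1
# "Fair" path: group 1 applies to the competitive department more often.
dept = rng.random(n) < np.where(sex == 1, 0.7, 0.3)   # True = competitive dept
# Admission depends only on the department, never directly on sex.
admitted = rng.random(n) < np.where(dept, 0.2, 0.6)

for s in (0, 1):
    print(f"sex={s}: overall admit rate {admitted[sex == s].mean():.2f}")
for d, name in ((True, "competitive"), (False, "less competitive")):
    r = [admitted[(sex == s) & (dept == d)].mean() for s in (0, 1)]
    print(f"{name} dept: admit rate {r[0]:.2f} vs {r[1]:.2f} by sex")
```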

These improvements will help, but other challenges require more than technical solutions, including how to determine when a system is fair enough to be released, and in which situations fully automated decision making should be permissible at all.

Can we protect AI from our biases? | Robin Hauser | TED Institute

As humans we're inherently biased. Sometimes it's explicit and other times it's unconscious, but as we move forward with technology how do we keep our biases ...

Computing human bias with AI technology

Humans are biased, and our machines are learning from us — ergo our artificial intelligence and computer programming algorithms are biased too. Computer ...

Bias in AI is a Problem

We think that machines can be objective because they don't worry about human emotion. Even though that's the case, AI (artificial intelligence) systems may ...

Machine Learning and Human Bias

As researchers and engineers, our goal is to make machine learning technology work for everyone.

Artificial intelligence can be biased against certain people. Here's how.

Whether we know it or not, we use artificial intelligence every day. But the results AI gives us may reinforce our own cultural stereotypes. Stream your PBS ...

Biases are being baked into artificial intelligence

When it comes to decision making, it might seem that computers are less biased than humans. But algorithms can be just as biased as the people who create ...

How to keep human bias out of AI | Kriti Sharma

AI algorithms make important decisions about you all the time -- like how much you should pay for car insurance or whether or not you get that job interview.

How MIT is trying to resolve AI bias

Tonya Hall talks with Dr. Aleksander Madry, associate professor of computer science at MIT, about what is being done to resolve bias and error in computer ...

Bias in Big Data and Artificial Intelligence

More and more, the decisions that rule our lives are made by algorithms. From the news we see in our feeds, to whether or not we qualify for a mortgage, to the ...

Understanding AI and Cognitive Bias

Cognitive biases are the focus of this episode of The AI Minute.