AI News: Artificial Intelligence Can Be Biased. Here's What You Should Know

Using AI to Eliminate Bias from Hiring

Like any new technology, artificial intelligence can produce immensely good or immensely bad outcomes.

Numerous studies have shown that this manual screening process leads to significant unconscious bias against women, minorities and older workers.

Because recruiters cannot review every application, they limit their review of the applicant pool to the 10% to 20% they think will show the most promise: candidates from Ivy League campuses, passive candidates from competitors of the hiring company, or candidates from employee-referral programs.

But if a hiring assessment is modeled on a company's current “successful employees,” and those employees are all white men due to a history of biased human hiring practices, then it is almost certain that the assessment will be biased toward white men and against women and minorities.

A movement among AI organizations such as OpenAI and the Future of Life Institute is already putting forth design principles for making AI ethical and fair (that is, beneficial to everyone).

AI can assess the entire pipeline of candidates rather than forcing time-constrained humans to implement biased processes to shrink the pipeline from the start.

Only by using a truly automated top-of-funnel process can we eliminate the bias introduced by shrinking the initial pipeline to a size that a manual recruiter can handle.
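To make that contrast concrete, here is a minimal sketch of an automated top-of-funnel step that scores every applicant against the same job-related rubric instead of truncating the pool to a recruiter-sized sample. The applicant fields and weights below are hypothetical placeholders, not a real assessment.

```python
# Sketch: score the entire applicant pool with one job-related rubric instead
# of shrinking it up front. Fields and weights are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    years_experience: float
    skills_matched: int   # count of required skills found in the application

def job_related_score(a: Applicant) -> float:
    # The same rule is applied to every applicant; no school, referral, or
    # "passive candidate" shortcuts that pre-filter the pool.
    return 2.0 * a.skills_matched + 0.5 * min(a.years_experience, 10)

pool = [
    Applicant("A", 3, 4),
    Applicant("B", 12, 2),
    Applicant("C", 6, 5),
]
for a in sorted(pool, key=job_related_score, reverse=True):
    print(a.name, round(job_related_score(a), 1))
```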

The U.S. Equal Employment Opportunity Commission (EEOC) wrote the existing fair-hiring regulations in the 1970s, before the advent of the public internet and the explosion in the number of people applying for each job.

We need to update and clarify these regulations to truly encourage equal opportunity in hiring and allow for the use of algorithmic recruiting systems that meet clear criteria.

The California State Assembly passed a resolution to use unbiased technology to promote diversity in hiring, and the San Francisco DA is using “blind charging” AI in criminal justice proceedings.

Can you make AI fairer than a judge? Play our courtroom algorithm game

We’re going to walk through a real algorithm, one used to decide who gets sent to jail, and ask you to tweak its various parameters to make its outcomes more fair.

(Don’t worry—this won’t involve looking at code!) The algorithm we’re examining is known as COMPAS, and it’s one of several different “risk assessment” tools used in the US criminal legal system.

It trains on historical defendant data to find correlations between factors like someone’s age and history with the criminal legal system, and whether or not the person was rearrested.

It then uses the correlations to predict the likelihood that a defendant will be arrested for a new crime during the trial-waiting period. This prediction is known as the defendant’s “risk score,” and it’s meant as a recommendation: “high risk” defendants should be jailed to prevent potential harm to society, while “low risk” defendants should be released.
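COMPAS itself is proprietary, so the snippet below is only a rough sketch of the general approach described here: fit a simple model on historical features such as age and prior arrests against a rearrest label, then bucket the predicted probability into a 1-to-10 score. The synthetic data, the feature choices, and the logistic-regression model are all illustrative assumptions, not the actual tool.

```python
# Rough sketch of a risk-assessment model of the kind described above.
# COMPAS is proprietary; everything here is an illustrative assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Hypothetical historical defendant data: age, prior arrests, and whether
# the person was later rearrested (the training label).
age = rng.integers(18, 70, size=n)
priors = rng.poisson(2, size=n)
p_rearrest = 1 / (1 + np.exp(-(0.25 * priors - 0.03 * (age - 18) - 0.5)))
rearrested = rng.binomial(1, p_rearrest)

X = np.column_stack([age, priors])
model = LogisticRegression().fit(X, rearrested)

# One simple way to turn the predicted probability into a 1-10 "risk score",
# loosely mirroring COMPAS's decile scores.
prob = model.predict_proba(X)[:, 1]
risk_score = np.ceil(prob * 10).clip(1, 10).astype(int)
print(np.bincount(risk_score, minlength=11)[1:])   # count of defendants per score
```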

(In reality, judges don’t always follow these recommendations, but the risk assessments remain influential.) Proponents of risk assessment tools argue that they make the criminal legal system more fair.

ProPublica found that among defendants who were never rearrested, black defendants were twice as likely as white ones to have been labeled high-risk by COMPAS. So our task now is to try to make COMPAS better.

ProPublica compiled and released the underlying data: in total, over 7,200 profiles with each person’s name, age, race, and COMPAS risk score, along with whether the person was ultimately rearrested, either after being released or after being jailed pre-trial.

For the purposes of this story, we are going to use COMPAS’s “high risk” threshold, a score of 7 or higher, to represent a recommendation that a defendant be detained. From here on out, you are in charge.
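If you want to follow along outside the interactive, here is a hedged sketch that applies that detention rule to ProPublica's published COMPAS data. The file path and column names (decile_score, two_year_recid, race) are assumed from ProPublica's public compas-analysis repository, and its two-year recidivism flag is only a proxy for the pre-trial rearrest outcome described above, so the figures will not exactly match the article's.

```python
# Sketch: apply the "high risk" rule (score of 7 or higher = detain) to
# ProPublica's published COMPAS data. URL and column names are assumed from
# ProPublica's compas-analysis repository; two_year_recid is only a proxy
# for the pre-trial rearrest outcome described in the article.
import pandas as pd

URL = ("https://raw.githubusercontent.com/propublica/"
       "compas-analysis/master/compas-scores-two-years.csv")
df = pd.read_csv(URL)

THRESHOLD = 7                                  # "high risk" cutoff used here
df["detain"] = df["decile_score"] >= THRESHOLD

print(len(df), "defendants")
print("share recommended for detention:", df["detain"].mean().round(3))
print("share rearrested (two-year proxy):", df["two_year_recid"].mean().round(3))
```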

So first, let’s imagine the best-case scenario: all the defendants your algorithm labels with a high risk score go on to get rearrested, and all defendants who get a low risk score do not.

(Hint: you want to maximize the algorithm’s accuracy.) You’ll notice that no matter where you place the threshold, it’s never perfect: we always jail some defendants who don’t get rearrested (in the interactive chart, the empty dots to the right of the threshold) and release some defendants who do get rearrested (the filled dots to the left of the threshold).

Now we will be able to explicitly see whether our threshold favors needlessly keeping people in jail or releasing people who are then rearrested. Notice that COMPAS’s default threshold favors the latter.
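A sketch of the same trade-off in code: sweep the detention threshold and, at each value, count defendants needlessly jailed (flagged high risk but never rearrested) and defendants released and then rearrested, along with overall accuracy. Same assumed ProPublica columns as before; the counts will not exactly match the article's interactive.

```python
# Sketch: sweep the detention threshold and tally the two kinds of error the
# article describes. Columns assumed from ProPublica's published CSV.
import pandas as pd

URL = ("https://raw.githubusercontent.com/propublica/"
       "compas-analysis/master/compas-scores-two-years.csv")
df = pd.read_csv(URL)
score, rearrested = df["decile_score"], df["two_year_recid"].astype(bool)

for threshold in range(1, 11):
    detain = score >= threshold
    needlessly_jailed = (detain & ~rearrested).sum()     # false positives
    released_rearrested = (~detain & rearrested).sum()   # false negatives
    accuracy = (detain == rearrested).mean()
    print(f"threshold {threshold:2d}  accuracy {accuracy:.2f}  "
          f"needlessly jailed {needlessly_jailed:5d}  "
          f"released then rearrested {released_rearrested:5d}")
```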

There’s no universal answer, but in the 1760s, the English judge William Blackstone wrote, “It is better that ten guilty persons escape than that one innocent suffer.” Blackstone’s ratio is still highly influential in the US today.

Another problem is that even if you follow COMPAS’s recommendations consistently, a human still has to decide where the “high risk” threshold should lie, whether by using Blackstone’s ratio or something else.
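One way to encode that human choice, sketched below, is to pick the threshold whose ratio of released-then-rearrested defendants to needlessly jailed defendants comes closest to ten to one. Mapping Blackstone's "guilty" and "innocent" onto these two error types is an interpretive assumption, and the data-loading assumptions are the same as in the earlier sketches.

```python
# Sketch: pick a threshold with Blackstone's ratio in mind -- tolerate roughly
# ten released-then-rearrested defendants per needless jailing. The mapping of
# "guilty"/"innocent" onto these error types is an interpretive assumption.
import pandas as pd

URL = ("https://raw.githubusercontent.com/propublica/"
       "compas-analysis/master/compas-scores-two-years.csv")
df = pd.read_csv(URL)
score, rearrested = df["decile_score"], df["two_year_recid"].astype(bool)

TARGET = 10.0                      # ten "misses" tolerated per needless jailing
best = None
for threshold in range(1, 11):
    detain = score >= threshold
    fp = (detain & ~rearrested).sum()          # needlessly jailed
    fn = (~detain & rearrested).sum()          # released, then rearrested
    ratio = fn / fp if fp else float("inf")
    if best is None or abs(ratio - TARGET) < abs(best[1] - TARGET):
        best = (threshold, ratio)
print("threshold closest to Blackstone's ratio:", best)
```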

Now that we’ve separated black and white defendants, we’ve discovered that even though race isn’t used to calculate the COMPAS risk scores, the scores have different error rates for the two groups.

At the default COMPAS threshold between 7 and 8, 16% of black defendants who don’t get rearrested have been needlessly jailed, while the same is true for only 7% of white defendants.
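Here is a sketch of that group-wise comparison: the share of never-rearrested defendants recommended for detention, split by race. The race labels ("African-American", "Caucasian") and columns are assumed from ProPublica's CSV, and because the outcome definition and the reading of the cutoff differ slightly from the article's, the percentages will only approximate 16% and 7%.

```python
# Sketch: false positive rate (needlessly jailed among those never rearrested)
# by race at the article's default cutoff. Race labels and columns assumed
# from ProPublica's published CSV.
import pandas as pd

URL = ("https://raw.githubusercontent.com/propublica/"
       "compas-analysis/master/compas-scores-two-years.csv")
df = pd.read_csv(URL)
df["detain"] = df["decile_score"] >= 7

not_rearrested = df[df["two_year_recid"] == 0]
fpr_by_race = not_rearrested.groupby("race")["detain"].mean()
print(fpr_by_race.loc[["African-American", "Caucasian"]].round(3))
```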

We’ve picked one solution, but you can try to find others. Aiming for Blackstone’s ratio again, we arrived at the following: white defendants have a threshold between 6 and 7, while black defendants have a threshold between 8 and 9.

Now roughly 9% of both black and white defendants who don’t get rearrested are needlessly jailed, while about 75% of those who do get rearrested had spent no time in jail beforehand.
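A sketch of that two-threshold policy follows, reading "a threshold between 6 and 7" as detaining at scores of 7 and above and "between 8 and 9" as detaining at 9 and above, then checking how close the two groups' error rates come. The threshold reading and data assumptions are the same as in the earlier sketches, so the resulting rates will only approximate the 9% and 75% figures.

```python
# Sketch: apply group-specific thresholds and compare the error rates.
# "Between 6 and 7" is read as detaining at 7+, "between 8 and 9" as 9+;
# columns and race labels assumed from ProPublica's published CSV.
import pandas as pd

URL = ("https://raw.githubusercontent.com/propublica/"
       "compas-analysis/master/compas-scores-two-years.csv")
df = pd.read_csv(URL)

thresholds = {"Caucasian": 7, "African-American": 9}
df = df[df["race"].isin(thresholds)].copy()
df["detain"] = df["decile_score"] >= df["race"].map(thresholds)

not_rearrested = df[df["two_year_recid"] == 0]
rearrested = df[df["two_year_recid"] == 1]
print("needlessly jailed rate by race:")
print(not_rearrested.groupby("race")["detain"].mean().round(3))
print("released-then-rearrested rate by race:")
print((1 - rearrested.groupby("race")["detain"].mean()).round(3))
```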

In the process of matching the error rates between races, we lost something important: our thresholds for each group are in different places, so our risk scores mean different things for white and black defendants.

In any context where an automated decision-making system must allocate resources or punishments among multiple groups that have different rates of the predicted outcome, different definitions of fairness will inevitably turn out to be mutually exclusive.
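The arithmetic behind that impossibility can be sketched with Chouldechova's identity, which ties a group's false positive rate to its base rate of rearrest: if two groups have different base rates but the scores are forced to mean the same thing in both (equal positive predictive value) and rearrests are missed at the same rate (equal false negative rate), then the false positive rates must differ. The numbers below are made up purely for illustration.

```python
# Sketch of why fairness definitions collide. Chouldechova's identity:
#   FPR = p / (1 - p) * (1 - PPV) / PPV * (1 - FNR)
# ties the false positive rate to the base rate p. Holding PPV and FNR equal
# across two groups with different base rates forces their FPRs apart.
def fpr(base_rate, ppv, fnr):
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * (1 - fnr)

ppv, fnr = 0.6, 0.3                    # held equal across groups by assumption
for group, base_rate in [("group A", 0.5), ("group B", 0.3)]:
    print(group, "FPR =", round(fpr(base_rate, ppv, fnr), 3))   # 0.467 vs 0.2
```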

Artificial intelligence can be biased against certain people. Here's how.

Whether we know it or not, we use artificial intelligence every day. But the results AI gives us may reinforce our own cultural stereotypes. Stream your PBS ...

What is Artificial Intelligence (or Machine Learning)?

Want to stay current on emerging tech? Check out our free guide: What is AI? What is machine learning, and how does it work? ...

Computing human bias with AI technology

Humans are biased, and our machines are learning from us — ergo our artificial intelligence and computer programming algorithms are biased too. Computer ...

Can we protect AI from our biases? | Robin Hauser | TED Institute

As humans we're inherently biased. Sometimes it's explicit and other times it's unconscious, but as we move forward with technology how do we keep our biases ...

The Real Reason to be Afraid of Artificial Intelligence | Peter Haas | TEDxDirigo

A robotics researcher afraid of robots, Peter Haas, invites us into his world to understand where the threats of robots and artificial intelligence lie. Before we get ...

Do you know AI or AI knows you better? Thinking Ethics of AI (original version)

This is an English/French version of the video, with subtitles embedded in the video. A multilingual version where you can activate subtitles in Chinese, English, ...

Is AI Racist? - Why Machine Learning Algorithms Are Biased Towards Black People

Artificial intelligence is being used to do many things from diagnosing cancer, stopping the deforestation of endangered rainforests, helping farmers in India with ...

Top 8 Deep Learning Frameworks | Which Deep Learning Framework You Should Learn? | Edureka

AI & Deep Learning with Tensorflow Training: This Edureka video on "Deep Learning Frameworks" ...

Police Unlock AI's Potential to Monitor, Surveil and Solve Crimes | WSJ

Law enforcement agencies like the New Orleans Police Department are adopting artificial-intelligence based systems to analyze surveillance footage.

Artificial Intelligence and Machine Learning Will NEVER Eliminate Recruiters

So, there's a lot of speculation and many opinions I hear in my space (recruiting), where people say that AI and machine learning are going to take over the ...