AI News: Researchers find gender and racial bias in Amazon's facial ...
Artificial Intelligence Has a Problem With Gender and Racial Bias. Here’s How to Solve It
I experienced this firsthand when I was a graduate student at MIT in 2015 and discovered that some facial analysis software couldn’t detect my dark-skinned face until I put on a white mask.
Altering myself to fit the norm—in this case better represented by a white mask than my actual face—led me to realize the impact of the exclusion overhead, a term I coined to describe the cost of systems that don’t take into account the diversity of humanity.
There’s no shortage of headlines highlighting tales of failed machine learning systems that amplify, rather than rectify, sexist hiring practices, racist criminal justice procedures, predatory advertising, and the spread of false information.
Given what we know now, as well as the history of racist police brutality, there needs to be a moratorium on using such technology in law enforcement—including in equipping drones or police body cameras with facial analysis or recognition software for lethal operations.
As more people question how seemingly neutral technology has gone astray, it’s becoming clear just how important it is to have broader representation in the design, development, deployment, and governance of AI.
I found one government dataset of faces collected for testing that contained 75% men, 80% lighter-skinned individuals, and less than 5% women of color—echoing the pale male data problem that excludes so much of society from the data that fuels AI.
By working to reduce the exclusion overhead and enabling marginalized communities to engage in the development and governance of AI, we can work toward creating systems that embrace full spectrum inclusion.
Amazon is Right: Thresholds and Legislation Matter, So Does Truth
Today my op-ed on racial and gender bias in facial analysis and recognition technology was published by TIME magazine.
We conducted this audit months after sending preliminary audit results to the company in June 2018 and receiving no response at the time.
There have been a number of calls for legislation on facial analysis and recognition technologies, as Matt Cagle of the ACLU of Northern California summarizes in the following tweet. Last year I wrote an op-ed for the New York Times on the dangers of facial analysis technology that also called for legislation.
Given the repeated demonstrations of bias in facial analysis and recognition tools and the lack of transparency from companies like Amazon, I wholly support a moratorium on the use of the technology.
The pledge prohibits the lethal application of the technology, requires public inspection and scrutiny for accountability, and prohibits police use where there is no legislation (which the company has called for).
- On 22 January 2021
About Face: One Woman’s Quest to Make AI Less Biased | Better | NBC News
Joy Buolamwini's research found artificial intelligence programs used for facial recognition can be biased against darker skin tones and against women.
Amazon defends facial-recognition tool
Amazon has defended its facial-recognition tool, Rekognition, against claims of racial and gender bias, following a study published by the Massachusetts ...
Amazon Shuts Down ‘Sexist’ A.I. for ‘Discriminating Against Women’
A cautionary tale of reality versus the equality agenda. This kind of situation is happening everywhere in the west today. The endless search for equal outcomes ...
Expert claims Amazon's facial recognition system may show racial bias
Amazon's facial recognition tool is being referred to as a 'recipe for authoritarianism and disaster' after it was revealed to be used by law enforcement officials.
AI, Ain't I A Woman? - Joy Buolamwini
Poet of Code shares "AI, Ain't I A Woman" - a spoken word piece that highlights the ways in which artificial intelligence can misinterpret the ...
How Artificial Intelligence Becomes Biased
Artificial intelligence is being built into everything; it will manage our cities, our gadgets, and our jobs. But over the past few years, it's been shown to succumb to ...
Machine Learning Bias and Fairness with Timnit Gebru and Margaret Mitchell: GCPPodcast 114
Original post: This week, we dive ...
Ethics of AI
Detecting people, optimising logistics, providing translations, composing art: artificial intelligence (AI) systems are not only changing what and how we are doing ...