AI News, Artificial Intelligence’s White Guy Problem

Artificial Intelligence’s White Guy Problem

Histories of discrimination can live on in digital platforms, and if they go unquestioned, they become part of the logic of everyday algorithmic systems.

Another scandal emerged recently when it was revealed that Amazon’s same-day delivery service was unavailable for ZIP codes in predominantly black neighborhoods.

The complexity of how search engines show ads to internet users makes it hard to say why this happened — whether the advertisers preferred showing the ads to men, or the outcome was an unintended consequence of the algorithms involved.

Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.

'I think my blackness is interfering': does facial recognition show racial bias?

There was the voice recognition software that struggled to understand women, the crime prediction algorithm that targeted black neighbourhoods and the online ad platform which was more likely to show men highly paid executive jobs.

Despite the need, a vetted methodology in machine learning for preventing this kind of discrimination based on sensitive attributes has been lacking.” The paper was one of several on detecting discrimination by algorithms to be presented at the Neural Information Processing Systems (NIPS) conference in Barcelona this month, indicating a growing recognition of the problem.

Nathan Srebro, a computer scientist at the Toyota Technological Institute at Chicago and co-author, said: “We are trying to enforce that you will not have inappropriate bias in the statistical prediction.” The test is aimed at machine learning programs, which learn to make predictions about the future by crunching through vast quantities of existing data.

“It just looks at the predictions it makes.” Their approach, called Equality of Opportunity in Supervised Learning, works on the basic principle that when an algorithm makes a decision about an individual - be it to show them an online ad or award them parole - the decision should not reveal anything about the individual’s race or gender beyond what might be gleaned from the data itself.
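To make that principle concrete, here is a minimal sketch, not the authors' code, of the check at the heart of Equality of Opportunity in Supervised Learning: among people who truly belong to the positive class, the rate of positive predictions should be the same for every group. The data and group labels below are invented for illustration.

```python
import numpy as np

def true_positive_rate(y_true, y_pred, mask):
    # Share of genuinely positive cases that were also predicted positive,
    # restricted to the rows selected by `mask`.
    positives = (y_true == 1) & mask
    return ((y_pred == 1) & positives).sum() / positives.sum()

def equal_opportunity_gap(y_true, y_pred, group):
    # Largest difference in true positive rate between any two groups;
    # 0.0 means the equal-opportunity criterion is met exactly.
    rates = [true_positive_rate(y_true, y_pred, group == g) for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical toy data: 1 = the favourable outcome (a loan, parole, an ad shown).
y_true = np.array([1, 1, 1, 1, 0, 0, 1, 1])
y_pred = np.array([1, 1, 0, 1, 0, 1, 0, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
print(equal_opportunity_gap(y_true, y_pred, group))  # 0.75 here: far from parity
```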

Ghosts in the Machine

In a brightly lit office, Joy Buolamwini sits down at her computer and slips on a Halloween mask to trick the machine into perceiving her as white.

That’s because facial detection algorithms made in the U.S. are frequently trained and evaluated using data sets that contain far more photos of white faces, and they’re generally tested and quality controlled by teams of engineers who aren’t likely to have dark skin.

As a result, some of these algorithms are better at identifying lighter skinned people, which can lead to problems ranging from passport systems that incorrectly read Asians as having their eyes closed, to HP webcams and Microsoft Kinect systems that have a harder time recognizing black faces, to Google Photos and Flickr auto-tagging African-Americans as apes.

A seminal 2012 study of three facial recognition algorithms used in law enforcement agencies found that the algorithms were 5–10% less accurate when reading black faces than white ones and showed similar discrepancies when analyzing faces of women and younger people.
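An audit like the one in that study can be sketched in a few lines: run the system over a labelled test set and report accuracy separately for each demographic group. The recognizer function and labels below are placeholders, not any vendor's actual system.

```python
from collections import defaultdict

def accuracy_by_group(samples, recognizer):
    # samples: iterable of (image, true_identity, group_label) tuples.
    correct, total = defaultdict(int), defaultdict(int)
    for image, identity, group in samples:
        total[group] += 1
        if recognizer(image) == identity:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Tiny invented example; a real audit would use thousands of labelled faces.
faces = [("img1", "alice", "lighter"), ("img2", "bob", "darker")]
print(accuracy_by_group(faces, recognizer=lambda img: "alice"))
# {'lighter': 1.0, 'darker': 0.0} -- a gap like the 5-10% one reported above,
# only exaggerated by the toy data.
```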

“Just being goofy, I put the white mask on to see what would happen, and lo and behold, it detected the white mask.” Facial analysis bias remains a problem in part because industry benchmarks used to gauge performance often don’t include significant age, gender, or racial diversity.

LFW includes photos that represent a broad spectrum of lighting conditions, poses, background activity, and other metrics, but a 2014 analysis of the data set found that 83% of the photos are of white people and nearly 78% are of men.
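The 2014 analysis referenced here is essentially a composition audit, which is easy to express in code: tally the annotated demographics of a benchmark and report each share. The attribute names below are assumptions for illustration, not LFW's actual metadata schema.

```python
from collections import Counter

def composition(annotations, key):
    # Fraction of the data set carrying each value of `key`.
    counts = Counter(record[key] for record in annotations)
    total = sum(counts.values())
    return {label: round(n / total, 3) for label, n in counts.items()}

annotations = [
    {"skin": "lighter", "gender": "male"},
    {"skin": "lighter", "gender": "female"},
    {"skin": "darker",  "gender": "male"},
]
print(composition(annotations, "skin"))    # {'lighter': 0.667, 'darker': 0.333}
print(composition(annotations, "gender"))  # {'male': 0.667, 'female': 0.333}
```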

There’s “limited evidence” of bias, racial or otherwise, in facial analysis algorithms, in part because there simply haven’t been many studies, says Patrick Grother, a computer scientist specializing in biometrics at the National Institute of Standards and Technology and lead author of the 2010 NIST study.

She is joined by a team of volunteers who support her nonprofit organization, the Algorithmic Justice League, which raises awareness of bias through public art and media projects, promotes transparency and accountability in algorithm design, and recruits volunteers to help test software and create inclusive data training sets.

“But these aren’t necessarily the people who are going to be most affected by the decisions of these automated systems…What we want to do is to be able to build tools for not just researchers, but also the general public to scrutinize AI.” Exposing AI’s biases starts by scrapping the notion that machines are inherently objective, says Cathy O’Neil, a data scientist whose book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, examines how algorithms impact everything from credit access to college admissions to job performance reviews.

Both are reasonable and seemingly objective parameters, but if the company has a history of hiring and promoting men over women or white candidates over people of color, an algorithm trained on that data will favor resumes that resemble those of white men.
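A toy simulation shows why this happens even when race and gender are never used as features. In the hypothetical sketch below, a "pedigree" feature correlated with the favoured group carries the historical bias into the model's predictions; the data, features, and model are illustrative assumptions, not any real screening product.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, n)                        # 0 = historically favoured group
experience = rng.normal(8, 3, n)                     # years of experience
pedigree = rng.normal(0, 1, n) + 0.8 * (group == 0)  # proxy correlated with group
# Historical hiring decisions favoured group 0 even at equal qualifications.
hired = (experience + pedigree + 1.0 * (group == 0) + rng.normal(0, 1, n)) > 9

features = np.c_[experience, pedigree]               # note: `group` is never a feature
model = LogisticRegression().fit(features, hired)
predictions = model.predict(features)

for g in (0, 1):
    print(f"group {g} predicted selection rate: {predictions[group == g].mean():.2f}")
# The proxy feature quietly carries the old preference into the new model.
```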

Risk assessment tools used in commercial lending and insurance, for example, may not ask direct questions about race or class identity, but the proprietary algorithms frequently incorporate other variables like ZIP code that would count against those living in poor communities.
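One simple way to see why ZIP code works as a proxy is to measure how well it alone predicts a protected attribute. The records and numbers in the sketch below are invented for illustration.

```python
from collections import Counter, defaultdict

def proxy_strength(records, proxy_key, protected_key):
    # Accuracy of guessing the protected attribute from the proxy alone,
    # by always guessing the most common value seen for that proxy.
    by_proxy = defaultdict(Counter)
    for record in records:
        by_proxy[record[proxy_key]][record[protected_key]] += 1
    correct = sum(counts.most_common(1)[0][1] for counts in by_proxy.values())
    return correct / len(records)

records = [
    {"zip": "60624", "race": "black"}, {"zip": "60624", "race": "black"},
    {"zip": "60614", "race": "white"}, {"zip": "60614", "race": "white"},
    {"zip": "60614", "race": "black"},
]
print(proxy_strength(records, "zip", "race"))  # 0.8: ZIP alone recovers race most of the time
```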

They analyzed more than 2 billion price quotes across approximately 700 companies and found that a person’s financial life dictated their car insurance rate far better than their driving record.

Higher insurance prices for low-income people can translate to higher debt and plummeting credit scores, which can mean reduced job prospects, which allows debt to pile up, credit scores to sink lower, and insurance rates to increase in a vicious cycle.

The algorithm was equally accurate at predicting recidivism rates for black and white defendants, but black defendants who didn’t re-offend were nearly twice as likely to be classified as high-risk compared with similarly reformed white defendants.
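The finding can be restated as a difference in false positive rates, the share of people who did not re-offend but were still labelled high-risk, computed separately for each group. The arrays below are toy stand-ins, not the COMPAS data.

```python
import numpy as np

def false_positive_rate(reoffended, flagged_high_risk):
    # Among people who did NOT re-offend, how many were flagged high-risk?
    did_not_reoffend = ~reoffended
    return (flagged_high_risk & did_not_reoffend).sum() / did_not_reoffend.sum()

reoffended = np.array([True, False, False, True, False, False])
flagged    = np.array([True, True,  False, True, False, False])
group      = np.array(["black", "black", "black", "white", "white", "white"])

for g in ("black", "white"):
    subset = group == g
    print(g, false_positive_rate(reoffended[subset], flagged[subset]))
# ProPublica's analysis found roughly a two-to-one gap in this rate.
```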

“The real issue is that we have, for a long time, been able to avoid being very clear as a society about what we mean by fairness and what we mean by discrimination.” There are laws that could provide some protection against algorithmic bias, but they aren’t comprehensive and have loopholes.

(It’s currently against the law to unintentionally discriminate on the basis of sex, age, disability, race, national origin, religion, pregnancy, or genetic information.) Proving disparate impact is notoriously difficult even when algorithms aren’t involved.
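One commonly used screen for disparate impact is the "four-fifths rule" from US employment guidelines: if one group's selection rate falls below 80% of the most-favoured group's, the disparity warrants scrutiny. The sketch below is a minimal illustration with invented numbers, not legal advice or a complete statistical test.

```python
def four_fifths_check(selection_rates):
    # selection_rates: mapping of group -> share of applicants selected.
    best = max(selection_rates.values())
    return {group: (rate / best) >= 0.8 for group, rate in selection_rates.items()}

# Hypothetical hiring outcomes: 60% of group A applicants selected vs 30% of group B.
print(four_fifths_check({"group_a": 0.60, "group_b": 0.30}))
# {'group_a': True, 'group_b': False} -> group B falls below the four-fifths threshold.
```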

“If we want to have best practices, we should be testing a lot of versions of the software and not just relying on the first one that we’re presented with.” Since algorithms are proprietary and frequently protected under non-disclosure agreements, organizations that use them, including both private companies and government agencies, may not have the legal right to conduct independent testing, Selbst says.

The 12 research teams that received contracts under the Explainable AI program aim to help military forces understand the decisions made by autonomous systems on the battlefield and whether that technology should be used in the next mission, says David Gunning, Explainable AI’s program manager.

Operating similarly to the way the National Transportation Safety Board investigates vehicular accidents, Shneiderman’s safety board would be an independent agency that could require designers to assess the impact of their algorithms before deployment, provide continuous monitoring to ensure safety and stability, and conduct retrospective analyses of accidents to inform future safety procedures.

Rise of the racist robots – how AI is learning all our worst impulses

In May last year, a stunning report claimed that a computer program used by a US court for risk assessment was biased against black prisoners.

The program, Correctional Offender Management Profiling for Alternative Sanctions (Compas), was much more prone to mistakenly label black defendants as likely to reoffend – wrongly flagging them at almost twice the rate as white people (45% to 24%), according to the investigative journalism organisation ProPublica.

The accusation gave frightening substance to a worry that has been brewing among activists and computer scientists for years and which the tech giants Google and Microsoft have recently taken steps to investigate: that as our computational tools have become more advanced, they have become more opaque.

“It’s impossible to know how widely adopted AI is now, but I do know we can’t go back,” one computer scientist says. But, while some of the most prominent voices in the industry are concerned with the far-off future apocalyptic potential of AI, there is less attention paid to the more immediate problem of how we prevent these programs from amplifying the inequalities of our past and affecting the most vulnerable members of our society.

Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods.

For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is “especially nefarious” because police can say: “We’re not being biased, we’re just doing what the math tells us.” And the public perception might be that the algorithms are impartial.

Programs developed by companies at the forefront of AI research have resulted in a string of errors that look uncannily like the darker biases of humanity: a Google image recognition program labelled the faces of several black people as gorillas.

This sort of approach has allowed computers to perform tasks – such as language translation, recognising faces or recommending films in your Netflix queue – that just a decade ago would have been considered too complex to automate.

In London, Hackney council has recently been working with a private company to apply AI to data, including government health and debt records, to help predict which families have children at risk of ending up in statutory care.

Lum and her co-author took PredPol – the program that suggests the likely location of future crimes based on recent crime and arrest statistics – and fed it historical drug-crime data from the city of Oakland’s police department.

The program was suggesting majority black neighbourhoods at about twice the rate of white ones, despite the fact that when the statisticians modelled the city’s likely overall drug use, based on national statistics, it was much more evenly distributed.

As if that wasn’t bad enough, the researchers also simulated what would happen if police had acted directly on PredPol’s hotspots every day and increased their arrests accordingly: the program entered a feedback loop, predicting more and more crime in the neighbourhoods that police visited most.
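Lum's experiment can be caricatured in a few lines of simulation: send patrols wherever past arrests were highest, let arrests scale with patrol presence, and feed the new arrests back in. The rates and parameters below are invented; this is not PredPol's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)
true_crime = np.array([0.5, 0.5])   # two neighbourhoods with equal underlying drug use
arrests = np.array([10.0, 5.0])     # historical arrest records are already skewed

for day in range(30):
    hotspot = int(np.argmax(arrests))                 # patrols go where past arrests were highest
    patrol = np.array([0.2, 0.2])
    patrol[hotspot] += 0.6                            # extra presence in the predicted hotspot
    arrests += rng.poisson(20 * true_crime * patrol)  # arrests scale with presence, not crime

print(arrests)  # the initially over-policed neighbourhood pulls further and further ahead
```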

And while most of us don’t understand the complex code within programs such as PredPol, Hamid Khan, an organiser with Stop LAPD Spying Coalition, a community group addressing police surveillance in Los Angeles, says that people do recognise predictive policing as “another top-down approach where policing remains the same: pathologising whole communities”.

The scientific literature on the topic now reflects a debate on the nature of “fairness” itself, and researchers are working on everything from ways to strip “unfair” classifiers from decades of historical data, to modifying algorithms to skirt round any groups protected by existing anti-discrimination laws.

These things are going to eliminate bias from hiring decisions and everything else.’” Meanwhile, computer scientists face an unfamiliar challenge: their work necessarily looks to the future, but in embracing machines that learn, they find themselves tied to our age-old problems of the past.

Code-Dependent: Pros and Cons of the Algorithm Age

While many of the 2016 U.S. presidential election post-mortems noted the revolutionary impact of web-based tools in influencing its outcome, XPrize Foundation CEO Peter Diamandis predicted that “five big tech trends will make this election look tame.” He said advances in quantum computing and the rapid evolution of AI and AI agents embedded in systems and devices in the Internet of Things will lead to hyper-stalking, influencing and shaping of voters, and hyper-personalized ads, and will create new ways to misrepresent reality and perpetuate falsehoods.

Analysts like Aneesh Aneesh of Stanford University foresee algorithms taking over public and private activities in a new era of “algocratic governance” that supplants “bureaucratic hierarchies.” Others, like Harvard’s Shoshana Zuboff, describe the emergence of “surveillance capitalism” that organizes economic behavior in an “information civilization.” To illuminate current attitudes about the potential impacts of algorithms in the next decade, Pew Research Center and Elon University’s Imagining the Internet Center conducted a large-scale canvassing of technology experts, scholars, corporate practitioners and government leaders.

As Brian Christian and Tom Griffiths write in Algorithms to Live By, algorithms provide ‘a better standard against which to compare human cognition itself.’ They are also a goad to consider that same cognition: How are we thinking and what does it mean to think through algorithms to mediate our world?

After all, algorithms are generated by trial and error, by testing, by observing, and coming to certain mathematical formulae regarding choices that have been made again and again – and this can be used for difficult choices and problems, especially when intuitively we cannot readily see an answer or a way to resolve the problem.

Our systems do not have, and we need to build in, what David Gelernter called ‘topsight,’ the ability to not only create technological solutions but also see and explore their consequences before we build business models, companies and markets on their strengths, and especially on their limitations.” Chudakov added that this is especially necessary because in the next decade and beyond, “By expanding collection and analysis of data and the resulting application of this information, a layer of intelligence or thinking manipulation is added to processes and objects that previously did not have that layer.

The result: As information tools and predictive dynamics are more widely adopted, our lives will be increasingly affected by their inherent conclusions and the narratives they spawn.” “The overall impact of ubiquitous algorithms is presently incalculable because the presence of algorithms in everyday processes and transactions is now so great, and is mostly hidden from public view.

The expanding collection and analysis of data and the resulting application of this information can cure diseases, decrease poverty, bring timely solutions to people and places where need is greatest, and dispel millennia of prejudice, ill-founded conclusions, inhumane practice and ignorance of all kinds.

In order to make algorithms more transparent, products and product information circulars might include an outline of algorithmic assumptions, akin to the nutritional sidebar now found on many packaged food products, that would inform users of how algorithms drive intelligence in a given product and a reasonable outline of the implications inherent in those assumptions.”
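As a thought experiment, that "nutritional sidebar" could be a short, machine-readable disclosure shipped with the product. The field names below are invented for illustration; no such standard currently exists.

```python
# A hypothetical "algorithmic nutrition label" for a lending product.
algorithm_label = {
    "purpose": "rank consumer loan applications",
    "training_data": "2015-2020 approval decisions from one regional bank",
    "inputs_used": ["income", "debt_ratio", "employment_length"],
    "inputs_excluded": ["race", "gender", "ZIP code"],
    "known_limitations": [
        "under-represents applicants with thin credit files",
        "not yet audited for disparate impact by age",
    ],
    "human_review": "declines above a risk threshold are reviewed manually",
}

for field, value in algorithm_label.items():
    print(f"{field}: {value}")
```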

A number of respondents noted the many ways in which algorithms will help make sense of massive amounts of data, noting that this will spark breakthroughs in science, new conveniences and human capacities in everyday life, and an ever-better capacity to link people to the information that will help them.

However, many people – and arguably many more people – will be able to obtain loans in the future, as banks turn away from using such factors as race, socio-economic background, postal code and the like to assess fit.

Moreover, with more data (and with a more interactive relationship between bank and client) banks can reduce their risk, thus providing more loans, while at the same time providing a range of services individually directed to actually help a person’s financial state.

Health care is a significant and growing expense not because people are becoming less healthy (in fact, society-wide, the opposite is true) but because of the significant overhead required to support increasingly complex systems, including prescriptions, insurance, facilities and more.

New technologies will enable health providers to shift a significant percentage of that load to the individual, who will (with the aid of personal support systems) manage their health better, coordinate and manage their own care, and create less of a burden on the system.

They say this is creating a flawed, logic-driven society and that as the process evolves – that is, as algorithms begin to write the algorithms – humans may get left out of the loop, letting “the robots decide.” Representative of this view: Bart Knijnenburg, assistant professor in human-centered computing at Clemson University, replied, “Algorithms will capitalize on convenience and profit, thereby discriminating [against] certain populations, but also eroding the experience of everyone else.

My biggest fear is that, unless we tune our algorithms for self-actualization, it will be simply too convenient for people to follow the advice of an algorithm (or, too difficult to go beyond such advice), turning these algorithms into self-fulfilling prophecies, and users into zombies who exclusively consume easy-to-consume items.” An anonymous futurist said, “This has been going on since the beginning of the industrial revolution.

When you remove the humanity from a system where people are included, they become victims.” Another anonymous respondent wrote, “We simply can’t capture every data element that represents the vastness of a person and that person’s needs, wants, hopes, desires.

A sampling of excerpts tied to this theme from other respondents (for details, read the fuller versions in the full report): Algorithms have the capability to shape individuals’ decisions without them even knowing it, giving those who have control of the algorithms an unfair position of power.

The harms of new technology will be most experienced by those already disadvantaged in society, where advertising algorithms offer bail bondsman ads that assume readers are criminals, loan applications that penalize people for proxies so correlated with race that they effectively penalize people based on race, and similar issues.” Dudley Irish, a software engineer, observed, “All, let me repeat that, all of the training data contains biases.

A sampling of quote excerpts tied to this theme from other respondents (for details, read the fuller versions in the full report): One of the greatest challenges of the next era will be balancing protection of intellectual property in algorithms with protecting the subjects of those algorithms from unfair discrimination and social engineering.

Ten years from now, though, the life of someone whose capabilities and perception of the world is augmented by sensors and processed with powerful AI and connected to vast amounts of data is going to be vastly different from that of those who don’t have access to those tools or knowledge of how to utilize them.

A number of participants in this canvassing expressed concerns over the change in the public’s information diets, the “atomization of media,” an over-emphasis of the extreme, ugly, weird news, and the favoring of “truthiness” over more-factual material that may be vital to understanding how to be a responsible citizen of the world.

Easier said than done, but if there were ever a time to bring the smartest minds in industry together with the smartest minds in academia to solve this problem, this is the time.” Chris Kutarna, author of Age of Discovery and fellow at the Oxford Martin School, wrote, “Algorithms are an explicit form of heuristic, a way of routinizing certain choices and decisions so that we are not constantly drinking from a fire hydrant of sensory inputs.

A sampling of quote excerpts tied to this theme from other respondents (for details, read the fuller versions in the full report): We need some kind of rainbow coalition to come up with rules to avoid allowing inbuilt bias and groupthink to effect the outcomes.

I suspect utopia given that we have survived at least one existential crisis (nuclear) in the past and that our track record toward peace, although slow, is solid.” Following is a brief collection of comments by several of the many top analysts who participated in this canvassing: Vinton Cerf, Internet Hall of Fame member and vice president and chief internet evangelist at Google: “Algorithms are mostly intended to steer people to useful information and I see this as a net positive.” Cory Doctorow, writer, computer science activist-in-residence at MIT Media Lab and co-owner of Boing Boing, responded, “The choices in this question are too limited.

If, on the other hand, the practice continues as is, it terminates with a kind of Kafkaesque nightmare where we do things ‘because the computer says so’ and we call them fair ‘because the computer says so.’” Jonathan Grudin, principal researcher at Microsoft, said, “We are finally reaching a state of symbiosis or partnership with technology.

I’m less worried about bad actors prevailing than I am about unintended and unnoticed negative consequences sneaking up on us.” Doc Searls, journalist, speaker and director of Project VRM at Harvard University’s Berkman Center, wrote, “The biggest issue with algorithms today is the black-box nature of some of the largest and most consequential ones.

They will get smaller and more numerous, as more responsibility over individual lives moves away from faceless systems more interested in surveillance and advertising than actual service.” Marc Rotenberg, executive director of the Electronic Privacy Information Center, observed, “The core problem with algorithmic-based decision-making is the lack of accountability.

Compare this with China’s social obedience score for internet users.” David Clark, Internet Hall of Fame member and senior research scientist at MIT, replied, “I see the positive outcomes outweighing the negative, but the issue will be that certain people will suffer negative consequences, perhaps very serious, and society will have to decide how to deal with these outcomes.

People will accept that they must live with the outcomes of these algorithms, even though they are fearful of the risks.” Baratunde Thurston, Director’s Fellow at MIT Media Lab, Fast Company columnist, and former digital director of The Onion, wrote: “Main positive changes: 1) The excuse of not knowing things will be reduced greatly as information becomes even more connected and complete.

We’ll need both industry reform within the technology companies creating these systems and far more savvy regulatory regimes to handle the complex challenges that arise.” John Markoff, author of Machines of Loving Grace: The Quest for Common Ground Between Humans and Robots and senior writer at The New York Times, observed, “I am most concerned about the lack of algorithmic transparency.

Because of unhealthy power dynamics in our society, I sadly suspect that the outcomes will be far more problematic – mechanisms to limit people’s opportunities, segment and segregate people into unequal buckets, and leverage surveillance to force people into more oppressive situations.

An honest, verifiable cost-benefit analysis, measuring improved efficiency or better outcomes against the loss of privacy or inadvertent discrimination, would avoid the ‘trust us, it will be wonderful and it’s AI!’ decision-making.” Robert Atkinson, president of the Information Technology and Innovation Foundation, said, “Like virtually all past technologies, algorithms will create value and cut costs, far in excess of any costs.

There are too many examples to cite, but I’ll list a few: would-be borrowers turned away from banks, individuals with black-identifying names seeing themselves in advertisements for criminal background searches, people being denied insurance and health care.

Universities must diversify their faculties, to ensure that students see themselves reflected in their teachers.” Jamais Cascio, distinguished fellow at the Institute for the Future, observed, “The impact of algorithms in the early transition era will be overall negative, as we (humans, human society and economy) attempt to learn how to integrate these technologies.

By the time the transition takes hold – probably a good 20 years, maybe a bit less – many of those problems will be overcome, and the ancillary adaptations (e.g., potential rise of universal basic income) will start to have an overall benefit.

In other words, shorter term (this decade) negative, longer term (next decade) positive.” Mike Liebhold, senior researcher and distinguished fellow at the Institute for the Future, commented, “The future effects of algorithms in our lives will shift over time as we master new competencies.

At an absolute minimum, we need to learn to form effective questions and tasks for machines, how to interpret responses and how to simply detect and repair a machine mistake.” Ben Shneiderman, professor of computer science at the University of Maryland, wrote, “When well-designed, algorithms amplify human abilities, but they must be comprehensible, predictable and controllable.

Algorithmic Bias: From Discrimination Discovery to Fairness-Aware Data Mining (Part 2)

Authors: Carlos Castillo (EURECAT, Technology Centre of Catalonia) and Francesco Bonchi (ISI Foundation). Abstract: Algorithms and decision making based on Big ...

2018 Isaac Asimov Memorial Debate: Artificial Intelligence

Isaac Asimov's famous Three Laws of Robotics might be seen as early safeguards for our reliance on artificial intelligence, but as Alexa guides our homes and ...

Machine Learning: Google's Vision - Google I/O 2016

Google has deployed practical A.I. throughout its products for the last decade -- from Translate, to the Google app, to Photos, to Inbox. The teams continue to ...

The Ethics and Governance of AI opening event, February 3, 2018

Chapter 1: 0:04 - Joi Ito; Chapter 2: 1:03:27 - Jonathan Zittrain; Chapter 3: 2:32:59 - Panel 1: Joi Ito moderates a panel with Pratik Shah, Karthik Dinakar, and ...

AI in Public Sector: Tool for inclusion or exclusion?

March 1, 2018 panel event. This panel was organized by the Taskar Center for Accessible Technology as ...

re:publica 2016 - Kate Crawford: Know your terrorist credit score!

How are our lives being changed by the rise of machine learning, big data and ...

Jacob Silverman: "Terms of Service: Social Media and the Price of Constant Connection"

Jacob Silverman's recently-released book TERMS OF SERVICE is a call for social media users to take back ownership of their digital lives. Integrating politics ...

ZEITGEIST: MOVING FORWARD | OFFICIAL RELEASE | 2011

Please support Peter Joseph's new, upcoming film project, "InterReflections", by joining the mailing list.

Paradise or Oblivion

This video presentation advocates a new ...