AI News: Fixing bias in artificial intelligence

'Bias deep inside the code': the problem with AI 'ethics' in Silicon Valley

When Stanford announced a new artificial intelligence institute, the university said the “designers of AI must be broadly representative of humanity” and unveiled 120 faculty and tech leaders partnering on the initiative. Observers quickly noted that the roster was overwhelmingly white and male.

The result is what some see as a systemic failure to take AI ethics concerns seriously, despite widespread evidence that algorithms, facial recognition, machine learning and other automated systems replicate and amplify biases and discriminatory practices.

For people directly harmed by the fast-moving and largely unregulated deployment of AI in the criminal justice system, education, the financial sector, government surveillance, transportation and other realms of society, the consequences can be dire.

But AI researchers rarely work with the people affected by the tech, said Laura Montoya, the cofounder and president of the Latinx in AI Coalition: “It’s one thing to really observe bias and recognize it, but it’s a completely different thing to really understand it from a personal perspective and to have experienced it yourself throughout your life.” It’s not hard to find AI ethics groups that replicate power structures and inequality in society – and altogether exclude marginalized groups.

“This type of oversight makes me worried that their stated commitment to the other important values and goals – like taking seriously creating AI to serve the ‘collective needs of humanity’ – is also empty PR spin and this will be nothing more than a vanity project for those attached to it,” she wrote in an email.

“How would you feel if you’re one of the handful of black folks who are called now?” Rediet Abebe, a computer science researcher and the cofounder of Black in AI, said it was encouraging that many in the field spoke out about Stanford: “It has been gratifying to see how quickly many caught this, called it out and are looking to work with folks at Stanford to fix it.”

They shouldn’t be allowed to self-regulate.” Sanchez said the decision to partner with James was “not even a dog whistle – that’s a bullhorn”, adding that there was no such thing as “neutral” AI: “The idea that you can do AI or technical ethics without a point of view is silly … The bias is deep inside the code.”

Bias in AI: Creating Artificial Intelligence that is Less like Humans

Recently, a friend informed me that he was still working on an artificial intelligence (AI) model for an automated stock trading platform – a project I recalled him saying would be completed months earlier.

Consequently, I began thinking about the broader problem of how the type of machine learning (ML) model and the training data influence the model’s accuracy – no big surprise there.

Incorrect output is bad enough when you are trying to beat the stock market, but as we come to rely unquestioningly on AI models with little or no human intervention, such errors could have far-reaching effects in other areas.

Since ML will take over many routine tasks going forward, much is at stake: from the inconvenience of wrongly denied loan applications to fatal accidents caused by autonomous vehicles failing to recognize pedestrians with darker skin tones.

Recognizing bias in decision making

Here is another example of the undesirable effects of failing to recognize bias in our own decision making – one I’m fond of because it has the elements of a thriller, with the “usual suspects,” and even a twist at the end… Approximately 10 years ago, a major city decided to collect data from drivers with smartphones every time they hit a pothole. The twist: potholes were duly logged, but disproportionately in wealthier neighborhoods, because smartphone ownership – not road quality – determined which potholes made it into the data, leaving streets in lower-income areas under-reported.
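To see how that kind of data can mislead, here is a minimal sketch of the sampling problem; the neighborhood names, pothole counts, and reporting probabilities below are hypothetical, invented purely for illustration:

```python
# Hypothetical illustration of sampling bias in crowdsourced pothole data.
# All numbers are invented for demonstration purposes.
import random

random.seed(42)

# Both neighborhoods have identical road conditions, but different shares
# of drivers carrying smartphones (and therefore reporting potholes).
neighborhoods = {
    "affluent":   {"true_potholes": 100, "report_prob": 0.8},
    "low_income": {"true_potholes": 100, "report_prob": 0.2},
}

for name, n in neighborhoods.items():
    # A pothole enters the dataset only if a smartphone-carrying driver hits it.
    reported = sum(random.random() < n["report_prob"]
                   for _ in range(n["true_potholes"]))
    print(f"{name:10}: true potholes = {n['true_potholes']}, reported = {reported}")

# The raw counts suggest the affluent area has roughly four times as many
# potholes, even though road conditions are identical: the data measures
# who reports, not where the potholes are.
```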

Going back to my friend’s dilemma with his misbehaving ML model… When he used the term “overfitted,” it immediately brought back memories of statistics classes in college, and of how this problem is just as relevant today as it was in the good old days of curve fitting and regression analysis.

Underfitting occurs when the ML model has too little complexity (high bias, low variance) and makes overly broad generalizations about the input data set.

The opposite problem, overfitting, occurs when the ML model is too complex (low bias, high variance) and fits not only the signal but also the noise in the data, producing a model tuned too tightly to its training set.
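To make the bias–variance trade-off concrete, here is a minimal sketch (not from the article) that fits polynomials of increasing degree to noisy samples of a sine curve; the data set, noise level, and degrees are illustrative assumptions:

```python
# Illustrating underfitting vs. overfitting with NumPy polynomial regression.
import numpy as np

rng = np.random.default_rng(0)

# Ground truth is a sine curve; both sets carry additive Gaussian noise.
x_train = np.sort(rng.uniform(0, 1, 30))
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, size=30)
x_test = np.sort(rng.uniform(0, 1, 200))
y_test = np.sin(2 * np.pi * x_test) + rng.normal(0, 0.2, size=200)

def fit_and_score(degree):
    # Polynomial.fit rescales the domain internally, keeping the
    # least-squares problem well conditioned even at high degree.
    p = np.polynomial.Polynomial.fit(x_train, y_train, degree)
    train_mse = np.mean((p(x_train) - y_train) ** 2)
    test_mse = np.mean((p(x_test) - y_test) ** 2)
    return train_mse, test_mse

for degree in (1, 4, 15):
    train_mse, test_mse = fit_and_score(degree)
    print(f"degree {degree:2d}: train MSE = {train_mse:.3f}, test MSE = {test_mse:.3f}")
```

On a typical run, the degree-1 model shows high error on both sets (underfitting, high bias), the degree-15 model drives training error down while test error climbs back up (overfitting, high variance), and the middle degree generalizes best.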

A noteworthy example of the challenges faced by AI designers is Microsoft’s short-lived chatbot Tay, which very quickly picked up on human prejudices online and had to be shut down within hours.

Stanford’s new AI institute is inadvertently showcasing one of tech’s biggest problems

The artificial intelligence industry is often criticized for failing to think through the social repercussions of its technology – think of the gender and racial bias embedded in everything from facial-recognition software to hiring algorithms.

The Institute for Human-Centered Artificial Intelligence (HAI), which plans to raise $1 billion from donors to fund its initiatives, aims to give voice to professionals from fields ranging from the humanities and the arts to education, business, engineering, and medicine, allowing them to weigh in on the future of AI. “Now is our opportunity to shape that future by putting humanists and social scientists alongside people who are developing artificial intelligence,” Stanford president Marc Tessier-Lavigne declared in a press release.

AI’s “sea of dudes” or “white guy problem” has been well documented, and awareness of the topic is becoming more and more mainstream. Diversity and inclusion have become boilerplate language at any major industry event – and in Stanford’s own literature on the launch of HAI – and the institute was quick to acknowledge the problems with its faculty makeup.

“We know we still have a long way to go to reach everyone who can contribute to HAI’s mission and it is our top priority,” a Stanford HAI spokesperson said in a statement to Quartz, noting that the institute will be hiring 20 more faculty members soon. “We know this will be challenging based on the statistics and existing systemic issues, and we know it is critical to the long-term success of HAI and indeed, AI itself.”

Statistics released earlier this year in the AI Index report detail an industry where fewer than 20% of AI professors are female. (A Stanford spokesperson points out that the HAI leadership team is 30% female and that the institute was co-founded by a woman.) A tour through the faculty pages of other top AI universities like Carnegie Mellon, the University of Illinois Urbana-Champaign, and MIT CSAIL illustrates how few women and people of color hold roles of academic power.

Artificial Intelligence Is Biased. She’s Working to Fix It.

AI technology is booming and it is expected to get even bigger. It's being used for everything from finding friends online and unlocking your phone to employers ...

Can we protect AI from our biases? | Robin Hauser | TED Institute

As humans we're inherently biased. Sometimes it's explicit and other times it's unconscious, but as we move forward with technology how do we keep our biases ...

Computing human bias with AI technology

Humans are biased, and our machines are learning from us — ergo our artificial intelligence and computer programming algorithms are biased too. Computer ...

Assessing the Impact of Bias in Artificial Intelligence

Mar. 26 – Microsoft postdoctoral researcher Timnit Gebru discusses the effects of bias in artificial intelligence. She speaks with Emily Chang on "Bloomberg ...

Biases are being baked into artificial intelligence

When it comes to decision making, it might seem that computers are less biased than humans. But algorithms can be just as biased as the people who create ...

Algorithmic biases in AI and machine learning - GitHub Universe 2017

Presented by Terri Burns, Twitter. Last year, NPR did a story answering the question: can computers be racist? (Yes.) Not long after, Microsoft launched an AI ...

Machine Learning and Human Bias

As researchers and engineers, our goal is to make machine learning technology work for everyone.

Artificial Intelligence Is Biased. She’s Working to Fix It.

Tech companies, lawmakers, and activists say bias is baked into facial recognition. So why is the government expanding its use of facial recognition? Soledad ...

Toon Tech Talks - AI and Bias

Our Toon Tech Talk of 27 September 2018: 'The impact of bias in society and how to fix it with AI'. Our speakers: Marion Mulder (MuldiMedia) helps ...

How Artificial Intelligence Becomes Biased

Artificial intelligence is being built into everything; it will manage our cities, our gadgets, and our jobs. But over the past few years, it's been shown to succumb to ...