AI News: Ethics of Artificial Intelligence

Artificial Intelligence in Medicine Raises Legal and Ethical Concerns

The use of artificial intelligence in medicine is generating great excitement and hope for treatment advances.

For example, by using machine learning, scientists are working to develop algorithms that will help them make decisions about cancer treatment.

They hope that computers will be able to analyze radiological images and discern which cancerous tumors will respond well to chemotherapy and which will not.
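To make the idea concrete, here is a deliberately tiny sketch of such a prediction task, assuming the radiological image has already been reduced to numeric features. Every feature name, number, and the nearest-centroid rule below are invented for illustration; real systems train far more sophisticated models on large labelled datasets.

```python
# Toy illustration only: predict chemotherapy response from
# hypothetical image-derived ("radiomic") features.
# All data and feature names here are made up.

def centroid(rows):
    """Mean of each feature across a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Hypothetical training data: [tumour_size, texture_score]
responders     = [[2.1, 0.3], [1.8, 0.4], [2.4, 0.2]]
non_responders = [[4.0, 0.9], [3.6, 0.8], [4.2, 0.7]]

c_resp, c_non = centroid(responders), centroid(non_responders)

def predict(features):
    """Nearest-centroid rule: whichever group's average is closer wins."""
    if distance(features, c_resp) < distance(features, c_non):
        return "responder"
    return "non-responder"

print(predict([2.0, 0.35]))  # prints "responder"
```

The serious versions of this idea differ mainly in scale: thousands of patients, high-dimensional features extracted automatically from images, and models whose errors carry real clinical consequences, which is exactly why the data-quality and bias concerns below matter.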

Potential for Discrimination

AI involves the analysis of very large amounts of data to discern patterns, which are then used to predict the likelihood of future occurrences.

In medicine, the data sets can come from electronic health records and health insurance claims but also from several surprising sources.

AI can draw upon purchasing records, income data, criminal records and even social media for information about an individual’s health.

As one example, Facebook employs an algorithm that makes suicide predictions based on posts with phrases such as “Are you okay?” paired with “Goodbye” and “Please don’t do this.” This predictive capability of AI raises significant ethical concerns in health care.
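Facebook's actual system is proprietary and far more complex, but the basic mechanism the article describes, flagging posts when concerning phrases co-occur, can be sketched naively. The phrases come from the article's example; the threshold is an arbitrary choice for this sketch.

```python
# Naive illustration of phrase-based risk flagging. Not Facebook's
# algorithm: the real system is not public. Phrases are taken from
# the article's example; the threshold of 2 is arbitrary.

CONCERN_PHRASES = ["are you okay", "goodbye", "please don't do this"]

def flag_post(post, comments):
    """Flag a post when multiple concerning phrases co-occur in the
    post and the comments replying to it."""
    text = " ".join([post] + comments).lower()
    hits = sum(phrase in text for phrase in CONCERN_PHRASES)
    return hits >= 2

print(flag_post("Goodbye everyone.", ["Are you okay?", "Please don't do this!"]))  # True
print(flag_post("Great game last night", ["nice!"]))                               # False
```

Even this crude sketch shows why the ethical stakes are high: a few string matches become a sensitive prediction about a person's mental health, made without their knowledge or consent.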

Companies that generate such predictions could then sell them to any interested third parties, including marketers, employers, lenders, life insurers and others.

In the United States, the Genetic Information Nondiscrimination Act prohibits employers and health insurers from considering genetic information and making decisions based on related assumptions about people’s future health conditions.

When it comes to genetic testing, patients are advised to seek genetic counseling so that they can thoughtfully decide whether to be tested and better understand test results.

If the data used to develop an algorithm are flawed – for instance, if they use medical records that contain errors – the algorithm’s output will be incorrect.
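"Garbage in, garbage out" is easy to demonstrate: a model trained on erroneous records reproduces those errors. The toy rule and record counts below are invented for illustration.

```python
# Illustration of flawed training data producing a flawed output,
# using the simplest possible "model": predict the majority label.
# The records and counts are invented.

def majority_label(records):
    """Predict whatever label is most common in the training data."""
    counts = {}
    for label in records:
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)

clean = ["benign"] * 7 + ["malignant"] * 3
# The same records after transcription errors flip several labels:
flawed = ["benign"] * 4 + ["malignant"] * 6

print(majority_label(clean))   # benign
print(majority_label(flawed))  # malignant: the data errors changed the output
```

Real algorithms fail in subtler ways, but the principle is the same: no amount of modelling sophistication corrects for medical records that are wrong at the source.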

Yet, to ensure that AI truly promotes patient welfare, physicians, researchers and policymakers must recognize its risks and proceed with caution.

AI Ethics Discussed at CERN

Such technologies could, for example, play a role in filtering through hundreds of millions of particle collision events each second to select interesting ones for further study.
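The filtering task resembles a trigger: keep the rare events that pass a selection, discard the rest. The sketch below shows only the shape of that idea; real triggers at CERN apply many stages of far more elaborate selection in hardware and software, and the energy values and cut here are invented.

```python
# Trigger-style sketch: keep only collision events passing a simple
# selection. Energy values and the cut are invented for illustration;
# real trigger systems are vastly more sophisticated.

def select_events(events, energy_cut=100.0):
    """Keep events whose total energy exceeds the cut."""
    return [e for e in events if e["energy"] > energy_cut]

events = [
    {"id": 1, "energy": 42.0},
    {"id": 2, "energy": 150.5},
    {"id": 3, "energy": 99.9},
    {"id": 4, "energy": 210.3},
]
print([e["id"] for e in select_events(events)])  # [2, 4]
```

The ethical angle Nallur raises applies even here: when an automated filter decides what data scientists ever see, the criteria it encodes deserve scrutiny.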

In particular, Nallur discussed challenges related to the verification and validation of decisions made by such systems, the problems surrounding implicit bias, and the difficulties of actually encoding ethical principles.

He believes the best way to achieve this is by using games to represent certain multi-agent situations, thus allowing ethics to emerge through agreement based on socio-evolutionary mechanisms – as in human societies.
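One way to picture a socio-evolutionary mechanism is a population of agents that repeatedly play a game and imitate whichever strategy is doing best. The stag-hunt payoffs and imitation rule below are a standard textbook device, chosen here for illustration; this is not an implementation of Nallur's actual proposal.

```python
# Toy socio-evolutionary sketch: agents play a stag-hunt game and
# then everyone adopts the strategy with the higher average payoff.
# A simplified illustration of norms emerging by agreement, not
# Nallur's actual method.

PAYOFF = {  # (my_move, their_move) -> my payoff (stag-hunt game)
    ("hunt_together", "hunt_together"): 4,
    ("hunt_together", "go_alone"): 0,
    ("go_alone", "hunt_together"): 3,
    ("go_alone", "go_alone"): 3,
}

def step(population):
    """Round-robin play, then imitation of the better-scoring strategy."""
    totals = {s: 0.0 for s in set(population)}
    counts = {s: 0 for s in set(population)}
    for i, a in enumerate(population):
        for j, b in enumerate(population):
            if i != j:
                totals[a] += PAYOFF[(a, b)]
                counts[a] += 1
    best = max(totals, key=lambda s: totals[s] / counts[s])
    return [best] * len(population)

pop = ["hunt_together"] * 8 + ["go_alone"] * 2
print(set(step(pop)))  # {'hunt_together'}: cooperation spread by imitation
```

Note that the outcome depends on the starting mix: in this game, cooperation spreads only if enough agents already cooperate, which is one reason emergent "ethics" in multi-agent systems is hard to guarantee.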

“To achieve this, we will need intense and fundamental collaboration between computer scientists, domain experts and legal professionals.”

Nallur was invited to speak at CERN by CERN openlab, which is running a number of R&D projects related to AI technologies with its industry and research collaborators.

“As such, it is important to think about ethical considerations related to AI technologies from a very early stage.” He continues: “I hope that this fascinating talk will serve to ignite further discussion within our community.”

Nallur’s talk is available to watch in full.

MIT developed a course to teach tweens about the ethics of AI

This summer, Blakeley Payne, a graduate student at MIT, ran a week-long course on ethics in artificial intelligence for 10- to 14-year-olds.

Payne created an open source, middle-school AI ethics curriculum to make kids aware of how AI systems mediate their everyday lives, from YouTube and Amazon’s Alexa to Google search and social media.

But Payne sees middle school as a unique time to start kids understanding the world they live in: it’s around ages 10 to 14 that kids start to engage in higher-level thinking and deal with complex moral reasoning.

One project exposes kids to a GAN (generative adversarial network, a type of AI system) and then asks them to write a fictional piece about the best benefits it might offer, and what dangers it could pose.
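For readers unfamiliar with the term, a GAN pairs a generator, which produces fake data, against a discriminator, which tries to tell fake data from real; each improves by competing with the other. Below is a deliberately tiny one-dimensional sketch of that adversarial loop. Real GANs use neural networks, and even this toy, with invented parameters and learning rate, is not guaranteed to converge; it only shows the structure of the training loop.

```python
import math
import random

# Tiny 1-D GAN sketch for intuition only. Real data are numbers
# near 4; the generator g(z) = a*z + b tries to mimic them; the
# discriminator d(x) = sigmoid(w*x + c) tries to tell real from
# generated. All constants are arbitrary choices for this sketch.

random.seed(0)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.05

for _ in range(500):
    x_real = random.gauss(4.0, 0.5)
    z = random.uniform(-1.0, 1.0)
    x_fake = a * z + b

    # Discriminator step: push d(real) up and d(fake) down.
    d_r, d_f = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w -= lr * (-(1 - d_r) * x_real + d_f * x_fake)
    c -= lr * (-(1 - d_r) + d_f)

    # Generator step: change a and b so that d(fake) goes up.
    d_f = sigmoid(w * x_fake + c)
    a -= lr * (-(1 - d_f) * w * z)
    b -= lr * (-(1 - d_f) * w)

print(f"generator now centres its output near {b:.2f} (real data centre on 4.0)")
```

The classroom exercise then asks what such a system could be used for, both the best benefits and the dangers, which is exactly the question the code cannot answer on its own.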

Payne explained that the exercise is, in effect, asking “What is the Black Mirror episode of this?”, except that the students don’t know what Black Mirror is.

Payne piloted the program with 225 students in fifth to eighth grade at the David E. Williams school outside Pittsburgh last autumn.

For example, at the beginning of the workshop students were asked: “Who is a stakeholder in YouTube?” The typical student could identify an average of 1.25 stakeholders, with the top three responses being “I don’t know,” “parents,” and “viewers”.

But the real danger of new technology is that it “might be stopping someone from getting a job because of a biased algorithm that we don’t even know is a biased algorithm,” she added.

“Right now it’s a real challenge to find authentic, meaningful, engaging work” for middle schoolers, said Saber Khan, who teaches computer science to middle and high schoolers in Brooklyn and helped found #ethicalCS on Twitter to gather and share resources after Obama’s White House announced its “Computer Science for All” initiative in 2016.

He said Payne’s curriculum is “one of the rare ones that is classroom-ready.” One thing he particularly likes is that it allows kids to consider ethics in the context of building AI—the peanut butter and jelly sandwich or the cat-dog classifier—rather than passively reading about it and then reflecting on it.

Teachers are meant to cover a range of academic subjects alongside building social and emotional skills, a growth mindset, shooter drills, hygiene, and sex ed, among other things.

“In the same way you have media literacy initiatives and we teach kids how bias can appear in a news article, we should teach them that in this Google search there’s a negotiation going on,” she said.

When we put kids in front of screens but don’t teach them to think critically and ethically, they will feel helpless—that their privacy is lost, or that certain voices are naturally magnified and others muffled.

Philanthropists should treat AI as an ethical not a technological challenge

The list of existential threats to mankind on which wealthy philanthropists have focused their attention —

Even if the machines are not going to kill us, there are plenty of reasons to worry AI will be used for ill as well as for good, and that advances in the field are coming faster than our ability to think through the consequences.

Between facial recognition and autonomous drones, AI’s potential impact on warfare is already obvious, stirring employee concern at Google and other pioneers in the field.

Other fears include whether AI algorithms are reinforcing racial stereotypes, gender biases and other prejudices as a result of a lack of diversity among scientists in the field —

Explaining his gift of $150m to Oxford university, part of which will go to creating an Institute for Ethics in AI, Steve Schwarzman, founder of private equity house Blackstone, told Forbes in June he wanted “to be part of this dialogue, to try and help the system regulate itself so innocent people who’re just living their lives don’t end up disadvantaged.”

Last year OpenAI switched to a for-profit structure, saying it needed billions of dollars in investment, and this summer it announced it was aligning itself with Microsoft, which is putting in $1bn to help OpenAI pay for computing services from Azure, Microsoft’s cloud.

The ethical dilemma we face on AI and autonomous tech | Christine Fox | TEDxMidAtlantic

The inspiration for Kelly McGillis' character in Top Gun, Christine Fox is the Assistant Director for Policy and Analysis of the Johns Hopkins University Applied ...

The Ethics of Artificial Intelligence | Leah Avakian | TEDxYouth@EnglishCollege

In today's ever-changing and growing world, artificial intelligence is quickly becoming more integrated within our everyday lives. What happens when we give ...

Artificial intelligence and its ethics | DW Documentary

Are we facing a golden digital age or will robots soon run the world? We need to establish ethical standards in dealing with artificial intelligence - and to answer ...


The implications and promises of artificial intelligence (AI) are unimaginable. Already, the now ubiquitous functions of AI have changed our lives ...

The Future of Artificial Intelligence and Ethics on the Road to Superintelligence

The progress of technology over time, the human brain Vs the future, and the future of artificial intelligence. Article: ...

Do you know AI or AI knows you better? Thinking Ethics of AI (original version)

This is an English/French version of the video, with subtitles embedded in the video. A multilingual version where you can activate subtitles in Chinese, English, ...

Nick Bostrom - The Ethics of The Artificial Intelligence Revolution

Link to the panel discussion: Nick Bostrom is a Swedish philosopher at the University of Oxford known for his ..

Ethics of AI

Cindy Rose, CEO of Microsoft UK, discusses the ethics of AI at the Headline Event of London Tech Week, LeadersIn Tech. She shares the viewpoint that with ...

Ethics of AI @ NYU: Artificial Intelligence & Human Values

Day 2 Session 1: Artificial Intelligence & Human Values 0:00 - David Chalmers Opening Remarks 3:30 - Stuart Russell "Provably Beneficial AI" 37:00 - Eliezer ...

AI ethics and AI risk - Ten challenges

How dangerous could artificial intelligence turn out to be, and how do we develop ethical AI? Risk Bites dives into AI risk and AI ethics, with ten potential risks of ...