AI News: The Ethics of Artificial Intelligence in Healthcare and Beyond

A.I. will equalize health care in China, NovaVision CEO says

Artificial intelligence (A.I.) will level out the quality of health care in China in the coming decades, particularly closing the gap between rural and urban parts of the country, according to a CEO in the medical industry.

Speaking at CNBC's East Tech West conference in the Nansha district of Guangzhou, China, Jim Wang, CEO of health care conglomerate NovaVision Group, which owns brands focused on eyecare, said new A.I. advances in health care, such as camera-based screening algorithms used for preventive care, could take pressure off hospitals in China's major cities.

Ethics of artificial intelligence

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use, and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs).

It has been suggested that robot rights, such as a right to exist and perform its own mission, could be linked to a robot's duty to serve humans, by analogy with linking human rights to human duties before society.[3]

Pamela McCorduck counters that, speaking for women and minorities, 'I'd rather take my chances with an impartial computer,' pointing out that there are conditions under which we would prefer to have automated judges and police that have no personal agenda at all.[12]

However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them, since they are, in essence, nothing more than fancy curve-fitting machines: using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases become formalized and engrained, which makes them even more difficult to spot and fight.[13]
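
The curve-fitting point can be made concrete. The following minimal Python sketch uses entirely synthetic data, and the variable names 'merit' and 'group' are illustrative, not from Kaplan and Haenlein: when historical outcomes penalize a group directly, a standard classifier trained on those outcomes learns the penalty as an explicit model parameter.

    # Minimal sketch: synthetic "past rulings" that penalize group 1 directly.
    # All names and numbers are illustrative only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, size=n)   # hypothetical protected attribute
    merit = rng.normal(size=n)           # the legitimate signal

    # Historical outcomes depend on merit but also penalize group 1:
    # this is the bias baked into the training labels.
    past_ruling = (merit - 1.5 * group + rng.normal(scale=0.5, size=n)) > 0

    model = LogisticRegression().fit(np.column_stack([merit, group]), past_ruling)

    # The fitted coefficient on "group" is strongly negative: the historical
    # bias is now an explicit, reusable parameter of the model.
    print(dict(zip(["merit", "group"], model.coef_[0].round(2))))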

'If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow', says the petition, whose signatories include Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.[24]

Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios 'seem potentially as important as the risks related to loss of control', but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: 'this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them'.[25]

To account for the nature of these agents, it has been suggested that certain philosophical ideas be considered, such as the standard characterizations of agency, rational agency, moral agency, and artificial agency, all of which are related to the concept of AMAs.[30]

In 2009, during an experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[32]

In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers, and the hypothetical possibility that they could become self-sufficient and able to make their own decisions.

They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons.

In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.[40]

Inevitably, this raises questions about the environment in which such robots would learn about the world, whose morality they would inherit, and whether they would also end up developing human 'weaknesses': selfishness, a pro-survival attitude, hesitation, and so on.

Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation.

Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis), while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change, and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal 'hackers'.[33]
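
As a concrete illustration of the transparency argument, the short Python sketch below fits a small decision tree and prints its entire decision policy as readable if/then rules. The toy model on the standard Iris dataset is illustrative and does not come from Bostrom, Yudkowsky, or Santos-Lang.

    # Toy illustration: a shallow decision tree's full policy prints as
    # explicit if/then rules. Dataset and depth are arbitrary choices.
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    iris = load_iris()
    tree = DecisionTreeClassifier(max_depth=2, random_state=0)
    tree.fit(iris.data, iris.target)

    # Every prediction the model will ever make is visible in this listing;
    # a trained neural network offers no comparably direct readout.
    print(export_text(tree, feature_names=iris.feature_names))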

However, Bostrom has also asserted that, rather than overwhelming the human race and leading to our destruction, a superintelligence could help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to 'enhance' ourselves.[45]

Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not 'common sense'.

The founders of the Partnership on AI stated: 'This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning.'[47]

The same idea can be found in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, with the best of motives, created the system to give medical assistance in emergencies.

Student group explores the ethical dimensions of artificial intelligence

For years, the tech industry followed a move-fast-and-break-things approach, and few people seemed to mind as a wave of astonishing new tools for communicating and navigating the world appeared on the market.

Now, amid rising concerns about the spread of fake news, the misuse of personal data, and the potential for machine-learning algorithms to discriminate at scale, people are taking stock of what the industry broke.

The founders had debated the promise and perils of AI in class and among friends, but their push to reach a wider audience came in September, at a Google-sponsored fairness-in-machine-learning workshop in Cambridge.

They considered two models: Harvard, which embeds philosophy and moral reasoning into its computer science classes, and Santa Clara University, in Silicon Valley, which offers a case-study-based module on ethics within its introductory data science courses. Reactions in the room were mixed.

Others thought ethics should be integrated at each level of technical training.  “When you learn to code, you learn a design process,” said Natalie Lao, an EECS graduate student helping to develop AI courses for K-12 students.

“If you include ethics into your design practice you learn to internalize ethical programming as part of your work flow.” The students also debated whether stakeholders beyond the end-user should be considered.

In their research at MIT, the founders of the ethics reading group are simultaneously developing tools to address the dilemmas raised in the group.  Gilpin is creating the methodologies and tools to help self-driving cars and other autonomous machines explain themselves.

Identifying sources of bias in the data pipeline, she says, is key to avoiding more serious problems in downstream applications. Chen, formerly a data scientist and chief of staff at Dropbox, develops machine learning tools for health care.
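
To make the idea of auditing a data pipeline concrete, here is a minimal Python sketch, with hypothetical column names and toy data: before any model is trained, compare representation and positive-label rates across a sensitive attribute to surface sampling or labeling bias upstream.

    # Hypothetical column names; the data is a toy example.
    import pandas as pd

    def audit_pipeline_bias(df, group_col, label_col):
        """Per-group row counts and positive-label rates: a first-pass audit."""
        return df.groupby(group_col)[label_col].agg(count="size", positive_rate="mean")

    # Toy data: group "B" is both under-represented and under-labeled,
    # the kind of upstream skew that propagates into downstream models.
    df = pd.DataFrame({
        "group": ["A"] * 8 + ["B"] * 2,
        "label": [1, 1, 1, 0, 1, 1, 0, 1, 0, 0],
    })
    print(audit_pipeline_bias(df, "group", "label"))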

Artificial Intelligence, ethics and the law: What challenges? What opportunities?

Artificial Intelligence (AI) is no longer sci-fi. From driverless cars to the use of machine learning algorithms to improve healthcare services and the financial ...

Artificial Intelligence in Healthcare - The Need for Ethics | Varoon Mathur | TEDxUBC

The advent of artificial intelligence (AI) promises to revolutionize the way we think about medicine and healthcare, but whom do we hold accountable when ...

Can Artificial Intelligence Improve our Healthcare?


How Virtual Humans Learn Emotion and Social Intelligence

At USC ICT's Virtual Humans lab, we learn how researchers build tools and algorithms that teach AI the complexities of social and emotional cues. We run ...

Program Wrap-up: Frameworks for an Inclusive Future of AI in Healthcare

Moderator: Tina Hernandez-Boussard, Associate Professor of Bioinformatics, ...

Breakthrough Theory, AI in Action | AIDC 2018 | Intel AI

Naveen Rao, corporate vice president and general manager of the Intel Artificial Intelligence Products Group, is joined for his keynote address by Amazon, ...

AI and Machine Learning in Medicine with Jonathan Chen

Medicine is ripe for applying AI, given the enormous volumes of real world data and ballooning healthcare costs. Professor Chen demystifies buzzwords, draws ...

What You Need to Know about AI in Healthcare

Tom Lawry, the Director of Worldwide Health for Microsoft, joins Pat Salber (@docweighsin) to discuss artificial intelligence: what it is and how it is being used to ...

The Ethics and Governance of Artificial Intelligence - day 1
