AI News: Automating artificial intelligence for medical decision making

Ethics of artificial intelligence

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings.

It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use, and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs).

The term 'robot ethics' (sometimes 'roboethics') refers to the morality of how humans design, construct, use and treat robots and other artificially intelligent beings.[1]

It has been suggested that robot rights, such as a right to exist and perform its own mission, could be linked to a robot's duty to serve humans, by analogy with linking human rights to human duties before society.[3]

Pamela McCorduck counters that, speaking for women and minorities, 'I'd rather take my chances with an impartial computer,' pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.[14]

However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them, since they are in essence nothing more than fancy curve-fitting machines: using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases become formalized and ingrained, making them even more difficult to spot and fight against.[15]
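To make the curve-fitting point concrete, here is a minimal Python sketch in which a logistic-regression 'ruling' model is trained on synthetic, deliberately biased historical data; the feature names (merit, group) and all data are invented for illustration and do not come from any real court system:

    # Illustrative sketch only: a toy model trained on synthetic, deliberately
    # biased "historical rulings". Features and data are invented for the example.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    group = rng.integers(0, 2, n)   # 0 or 1: a protected attribute
    merit = rng.normal(0, 1, n)     # a legitimate, case-relevant feature

    # Historical rulings were biased: group 1 was ruled against more often
    # than merit alone would justify.
    ruled_against = (merit + 0.8 * group + rng.normal(0, 0.5, n)) > 0.5

    model = LogisticRegression().fit(np.column_stack([merit, group]), ruled_against)

    # The fitted coefficient on `group` comes out large and positive: the
    # curve-fitter has formalized the historical bias rather than corrected it.
    print(dict(zip(["merit", "group"], model.coef_[0])))

Nothing in the fitting procedure distinguishes the bias from the signal; both are simply patterns in the training data.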

The new recommendations focus on four main areas: humans and society at large, the private sector, the public sector, and research and academia.

In a highly influential branch of AI known as 'natural language processing,' problems can arise from the 'text corpus'—the source material the algorithm uses to learn about the relationships between different words.[33]
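As a hedged illustration of how a corpus shapes what is learned, the toy Python sketch below counts word co-occurrences in a three-sentence corpus invented for the example; word-embedding algorithms fit to precisely these kinds of statistics, so skews in the counts become skews in the learned relationships between words:

    # Toy illustration of how a text corpus can bake bias into learned word
    # relationships. The three-sentence "corpus" is invented for the example.
    from collections import Counter
    from itertools import combinations

    corpus = [
        "he is a doctor",
        "he is a doctor",
        "she is a nurse",
    ]

    cooc = Counter()
    for sentence in corpus:
        words = sentence.split()
        for a, b in combinations(words, 2):
            cooc[frozenset((a, b))] += 1

    # Co-occurrence counts are the raw statistics embedding methods fit to;
    # here "doctor" is already tied to "he" and not at all to "she".
    print(cooc[frozenset(("doctor", "he"))])   # 2
    print(cooc[frozenset(("doctor", "she"))])  # 0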

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it.

'If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow', says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.[55]

Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios 'seem potentially as important as the risks related to loss of control', but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: 'this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them'.[56]

To account for the nature of these agents, it has been suggested that certain philosophical ideas be considered, such as the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.[61]

In 2009, during an experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[65]

In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers, and the hypothetical possibility that they could become self-sufficient and able to make their own decisions.

They noted that some machines have acquired various forms of semi-autonomy, including the ability to find power sources on their own and to independently choose targets to attack with weapons.

In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.[73]

Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit, and whether they would end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, and hesitation.

Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation.

Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis), while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal 'hackers'.[66]
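The transparency argument can be seen in miniature in the following hedged Python sketch; scikit-learn implements CART rather than ID3, and the loan-style features are invented for the example, but the inspectability point is the same:

    # Minimal sketch of the transparency argument: a small decision tree whose
    # learned rules can be printed and audited verbatim. scikit-learn uses CART
    # rather than ID3; the toy features are invented for the example.
    from sklearn.tree import DecisionTreeClassifier, export_text

    X = [[25, 0], [40, 1], [35, 1], [22, 0], [50, 1], [30, 0]]  # [age, has_collateral]
    y = [0, 1, 1, 0, 1, 0]                                       # approve?

    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

    # Unlike a neural network's weight matrices, the fitted model is a short,
    # human-readable list of if/then rules.
    print(export_text(tree, feature_names=["age", "has_collateral"]))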

Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deepfakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that don’t require a human controller.[76]

Many researchers have argued that, by way of an 'intelligence explosion' sometime in the 21st century, a self-improving AI could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals.[77]

However, Bostrom has also asserted that, rather than overwhelming the human race and leading to our destruction, superintelligence could help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to 'enhance' ourselves.[79]

Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not 'common sense'.
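A toy Python sketch of this failure mode, with all actions and scores invented for the example: an agent maximizing a literal utility function picks an action its designers would reject on common-sense grounds.

    # Toy sketch of the "flawless ethical theory" problem: maximizing a literal
    # utility function selects a harmful action that satisfies the stated
    # objective. All actions and scores here are invented.
    actions = {
        "treat_patients":  {"lives_saved_score": 10, "common_sense_ok": True},
        "ban_all_driving": {"lives_saved_score": 50, "common_sense_ok": False},
    }

    def utility(outcome):
        # The stated objective: maximize lives saved. Nothing here encodes the
        # unstated constraints humans take for granted.
        return outcome["lives_saved_score"]

    best = max(actions, key=lambda a: utility(actions[a]))
    print(best)  # "ban_all_driving": conforms to the objective, not common sense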

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as an open platform for discussion about artificial intelligence.

They stated: 'This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning.'[81]

The same idea can be found in the Emergency Medical Hologram of the starship Voyager, which is an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, with the best of motives, created the system to give medical assistance in emergencies.

This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them.

Should medical students fear artificial intelligence?

MD vs. Machine: Artificial intelligence in health care

Recent advances in artificial intelligence and machine learning are changing the way doctors practice medicine. Can medical data actually improve health care?

Will Artificial Intelligence Replace Radiologists? - The Medical Futurist

Artificial intelligence is everywhere. Let's look at radiology. The rapid development of artificial narrow intelligence mostly in understanding images, text, and ...

How doctors can help A.I. to revolutionize medicine

Greg Corrado, Co-founder of Google Brain and Principal Scientist at Google, has spent the last decade at the forefront of the A.I. revolution. Greg's vision for how ...

Science for Dialysis 2: Artificial Intelligence and Machine Learning

Session description: Renal transplantation is the treatment of choice for Chronic Kidney Disease (CKD) patients, but the shortage of kidneys and disabling ...

Artificial Intelligence and Machine Learning – Hype vs Reality

Rotem Guttman and April Galyardt describe how machine learning (ML) fits into the bigger picture of artificial intelligence (AI) and discuss the current state of AI.

Artificial Intelligence In Healthcare | Examples Of AI In Healthcare | Edureka

Machine Learning Engineer Masters Program: Artificial Intelligence in ...

Philips and PathAI to improve breast cancer diagnosis with artificial intelligence

Philips and PathAI team up to improve breast cancer diagnosis using artificial intelligence technology in 'big data' pathology research. Royal Philips, a global ...

Live from RSNA 2017: AI for radiology can be so much more than automated image interpretation

Eran Rubens, Chief Technology Officer of Radiology Informatics for Philips shares how AI and machine learning for radiology can extend far beyond automated ...

Automating AI in the Data-Driven Enterprise

Want to learn more about automating AI, and making it actionable and accessible to all your data workers? If so, this webinar is for you. Viewers will learn: - Why ...