AI News: First Steps Towards an Ethics of Robots and Artificial Intelligence

The Pentagon’s First AI Strategy Will Focus on Near-Term Operations — and Safety

The Defense Department will unveil a new artificial intelligence strategy perhaps as early as this week, senior defense officials told Defense One.

One official described Maven as “a pathfinder” but cautioned, “the strategy is much broader than one project.” While the military has long played a key role in AI development by helping to fund the design of programs such as Apple’s Siri, the Department is preparing for its future by trying to learn from private industry — particularly “the small number of companies that do this well,” the official said.

Indeed, one reason the new strategy will focus on more immediate, operational applications of AI — as opposed to more theoretical applications in the far future — is to force operators and commanders to think through safety and ethical implications as they figure out what they want AI to do for them.

Safety “is a major focus and the language [of the strategy] is very clear about how important we consider these topics,” the official said. The Pentagon’s AI ambitions suffered a setback—or at least the appearance of one—last spring when hundreds of Google employees petitioned the company to end work on Project Maven.

AMA Journal of Ethics®

Mr K is a 54-year-old man referred to Dr L’s outpatient spine neurosurgery clinic because he has a 6-week history of left-sided lower back pain, left leg weakness, and shooting pain.

AI can refer to a range of techniques including expert systems, neural networks, machine learning, and deep learning.4 Medical ethics has begun to highlight concerns about uses of AI and robotics in health care, including algorithmic bias, the opacity and lack of intelligibility of AI systems, patient-clinician relationships, potential dehumanization of health care, and erosion of physician skill.5,6 In response, members of the medical community and others have called for changes to ethical guidelines and policy and for additional training requirements for AI devices.6 Given the potential of AI to augment human medical care, the proper role of health care professionals vis-à-vis their digital counterparts is particularly relevant.

The black-box problem emerges for at least a subset of AI systems, including neural networks, which are trained on massive data sets to produce multiple layers of input-output connections.9 The result can be a system largely unintelligible to humans beyond its most basic inputs and outputs.10 In other words, those interacting with an AI system might not understand to any appreciable degree how it works (ie, its functioning seems like a black box).
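To make the opacity concrete, consider a minimal Python sketch (a hypothetical toy model trained on synthetic data, not Mazor’s proprietary algorithm): even a small neural network produces a perfectly legible prediction from well over a thousand learned weights that carry no human-readable meaning.

    # Hypothetical toy illustration of the black-box problem; this is NOT
    # the Mazor system, just a small network trained on synthetic data.
    from sklearn.datasets import make_classification
    from sklearn.neural_network import MLPClassifier

    # Synthetic "cases": 200 samples with 10 numeric features each.
    X, y = make_classification(n_samples=200, n_features=10, random_state=0)

    # Two hidden layers of 32 units each: over a thousand learned weights.
    model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000,
                          random_state=0)
    model.fit(X, y)

    print(model.predict(X[:1]))  # the output is clear...
    print(model.coefs_[0][:2])   # ...the weights that produced it are not

Nothing in the printed weight matrices explains why a given case was classified one way rather than another; that gap between legible outputs and illegible internals is exactly the black box described above.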

This challenge pertains not only to neural networks but also to any informationally or technically complex system that may be opaque to those who interact with it, such as Mazor’s advanced and proprietary image recognition algorithms.3 The opacity of an AI system can make it difficult for health care professionals to ascertain how the system arrived at a decision and how an error might occur.

Remaining ignorant about the construction of machine-learning systems or allowing them to be constructed as black boxes could lead to ethically problematic outcomes.6 Moreover, professional societies are recommending that AI systems be “transparent.”11 Assuming Dr L is well informed about the Renaissance Guidance System, she should seek to explain to Mr K the core technologies used, such as the basic nature of the image recognition algorithm.

Computing experts offer wide-ranging visions of where AI is going, from utopian views in which humanity’s problems are largely solved to dystopian scenarios of human extinction.12 These visions can influence whether patients, such as Mr K in the case, and physicians embrace AI (perhaps too quickly) or fear it (even though it might improve health outcomes).

For example, a 2016 survey of 12 000 people across 12 European, Middle-Eastern, and African countries found that only 47% of respondents would be willing to have a “robot perform a minor, non-invasive surgery instead of a doctor,” with that number dropping to 37% for major, invasive surgeries.12,13 These findings indicate that a sizeable proportion of the public remains uneasy about medical AI.

Determining who is morally responsible and perhaps legally liable for a medical error involving use of a sophisticated technology is often complicated by the “problem of many hands.”14 This problem refers to the challenge of attributing moral responsibility when the cause of a harm is distributed among multiple persons—and perhaps organizations—in a way that obfuscates blame attribution.

As some scholars state, individuals might use a many hands argument in an attempt “to evade personal responsibility for wrongdoing.”15 Given that many parties are involved in the design, sale, procurement, and use of AI systems in health care, identifying the primary locus of responsibility for a medical error can be difficult.16 Moreover, the opacity of some AI systems compounds this challenge in new ways.

Artificial intelligence: Humanoid robots knocking at the door

International research networks are working on robots that move, behave and even think more and more like humans.

Together, they are investigating the human brain in greater detail than ever before – starting at the molecular level all the way up to the connections that enable complex cognitive processes.

To date, there are no international standards for neurological research, and each organization structures its findings differently, making it hard to capture the current state of research in one unified model.

Roboy’s story began at the University of Zurich in 2013, where computer scientists, engineers and mechatronics engineers joined forces to develop a robot that could hold its own with human beings, at least on the technical playing field.

Researchers from all over Europe and even from far-away continents are on board, such as the University of Melbourne researchers who used their knowledge of muscle control to make a decisive contribution to the software that controls Roboy’s motor skills.

Photo: © Roboy 2.0 – roboy.org

The robot has since moved to the Technical University of Munich, where every semester a new team of students from various disciplines works to teach Roboy something new.

From the mechanics of its feet to the circuit diagram that allows Roboy’s eyes to see, the robot’s complete anatomy is explained on the open source portal and is available for download, so that any layperson, researcher or tech geek can add their own two cents to move the project forward.

And since even coming close to simulating the speed and storage capacity of the human brain consumes a tremendous amount of energy, major strides will have to be made in energy-efficient computing.
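A rough back-of-envelope comparison shows the scale of the gap; the wattage figures below are commonly cited approximations, not measurements from the Roboy project:

    # Back-of-envelope energy comparison (commonly cited approximate figures).
    brain_watts = 20.0        # a human brain runs on roughly 20 W
    machine_watts = 20e6      # an exascale supercomputer draws roughly 20 MW

    ratio = machine_watts / brain_watts
    print(f"The machine draws about {ratio:,.0f}x the power of a brain")

Even on these generous assumptions, brain-scale computing sits roughly six orders of magnitude away from the brain’s energy budget, which is why energy efficiency is the bottleneck.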


When is it OK for AI to lie?

Artificial intelligence has captured the public imagination by besting chess grandmasters and one-upping game show contestants.

“So the question, then, is: under what conditions are people willing to let AI systems tell them white lies?” At this week’s conference on Artificial Intelligence, Ethics and Society, Kambhampati, along with his former graduate student Tathagata Chakraborti, presented different scenarios exploring when — and if — it would be permissible for AI to lie, and how to keep humans in the loop if they do.

“Since we are making these AI systems, we can control when they can and cannot fabricate, or essentially tell lies.” The researchers designed a thought experiment to explore both human-human and human-AI interactions in an urban search-and-rescue scenario: searching all locations on a floor of an earthquake-damaged building.

After all, the Hippocratic Decorum states: “Perform your medical duties calmly and adroitly, concealing most things from the patient while you are attending to him.” Medical lies are usually told to give as much truth as is good for the patient, especially in the delivery of bad news.

In addition to the ASU talk, the AIES conference provided a platform for research and discussions from the perspectives of several disciplines to address the challenges of AI ethics within a societal context, featuring participation from experts in computing, ethics, philosophy, economics, psychology, law and politics.

“The AIES conference was designed to include participation from different disciplines and corners of society, in order to offer a unique and informative look at where we stand with the development and the use of artificial intelligence.” The conference was chaired by a multidisciplinary program committee to ensure a diversity of topics.

AIES presenters and the roughly 300 attendees included representatives of major technology and nontechnology companies, academic researchers, ethicists, philosophers, and members of think tanks and the legal profession.

Do Robots Deserve Rights? What if Machines Become Conscious?

What shall we do once machines become conscious? Do we need to grant them rights? Check out Wisecrack and their video: 'The ..

What is Artificial Intelligence (or Machine Learning)?

What is AI? What is machine learning and how does it work? You've probably heard the buzz. The age of artificial intelligence has arrived. But that doesn't mean ...

Designing for artificial intelligence | Karthik Mahadevan | TEDxDelftSalon

Artificial intelligence, as it has come to the fore in recent years, is focused largely on automation. The best minds working in the field right now are ...

Artificial Intelligence in Healthcare - The Need for Ethics | Varoon Mathur | TEDxUBC

The advent of artificial intelligence (AI) promises to revolutionize the way we think about medicine and healthcare, but whom do we hold accountable when ...

A Conversation about Conversational AI | Tom Gruber | TEDxBeaconStreet

In this conversation between Siri designer Tom Gruber and NY Times writer John Markoff, we hear about the state of the art in Conversational Artificial ...

Artificial Intelligence in Healthcare – It’s about Time | Casey Bennett | TEDxNashville

We need tools that can help us – both clinicians and patients – make better healthcare decisions. Yet in order to do so, those tools need to “think like we do” ...

Artificial Intelligence

Should we be scared of artificial intelligence and all it will bring us? Not as long as we remember to build artificial emotional intelligence into the ...

Why creating AI that has free will would be a huge mistake | Joanna Bryson

AI expert Joanna Bryson posits that giving artificial intelligence the same rights a human has could result in pretty dire consequences... because AI has already ...

Moral Math of Robots: Can Life and Death Decisions Be Coded?

A self-driving car has a split second to decide whether to turn into oncoming traffic or hit a child who has lost control of her bicycle. An autonomous drone needs ...

AI "Stop Button" Problem - Computerphile

How do you implement an on/off switch on a General Artificial Intelligence? Rob Miles explains the perils. Part 1: ...