How to Build a Moral Robot

Whether it’s in our cars, our hospitals, or our homes, we’ll soon depend upon robots to make judgment calls in which human lives are at stake.

In order to pull it off, they’ll need to answer some important questions: How can we quantify the fuzzy, conflicting norms that guide human choices?

Or imagine a rescue robot that detects two injured people in the rubble of an earthquake, but knows it doesn’t have time to save both.

If autonomous robots are going to hang with us, we’re going to have to teach them how to behave—which means finding a way to make them aware of the values that are most important to us.

Matthias Scheutz is a computer scientist at Tufts who studies human-robot interaction, and he’s trying to figure out how to model moral reasoning in a machine.

Even as humans, we don’t really have any concrete rules about what’s right and wrong—at least, not ones we’ve managed to agree upon.

MATTHIAS SCHEUTZ: Right now, the major challenge for even thinking about how robots might be able to understand moral norms is that we don’t understand, on the human side, how humans represent and reason, if possible, with moral norms.

Scheutz and his collaborator Bertram Malle, a psychologist at Brown University, have started by compiling a list of words, ideas and rules that people use to talk about morality: a basic moral vocabulary.

One theory is that the human moral landscape might look a lot like a semantic network, with clusters of closely related concepts that we become more or less aware of depending on the situation.

MALLE: Our hypothesis is that in any particular context, a subset of norms is activated, a particular set of rules related to that situation.

Malle starts off by picking a scenario—say, a day at the beach—and asking a whole bunch of people how they think they’re supposed to behave.

The order in which the participants mention certain rules, the number of times they mention them, and the time it takes between mentioning one idea and another—those are all concrete values.

By collecting data from enough different situations, Malle thinks he’ll be able to build a rough map of a human norm network.
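
The video doesn’t spell out how that map gets built, but a minimal sketch of the general idea, with invented function names and toy data, might score each norm by how early, how often, and how quickly it is mentioned, and link norms that show up in the same answer:

```python
from collections import defaultdict
from itertools import combinations

def build_norm_network(responses):
    """Build a rough norm network from free-listing transcripts.

    `responses` maps each scenario (e.g. "beach") to a list of transcripts,
    where each transcript is an ordered list of
    (norm, seconds_since_previous_mention) tuples.
    """
    salience = defaultdict(float)   # how strongly a norm is activated per scenario
    edges = defaultdict(float)      # how often two norms co-occur per scenario

    for scenario, transcripts in responses.items():
        for transcript in transcripts:
            for rank, (norm, latency) in enumerate(transcript):
                # Norms mentioned earlier, more often, and with less hesitation
                # accumulate more salience for this scenario.
                salience[(scenario, norm)] += 1.0 / (rank + 1) / (1.0 + latency)
            # Norms mentioned in the same transcript are treated as part of
            # the same activated cluster.
            norms = sorted({norm for norm, _ in transcript})
            for a, b in combinations(norms, 2):
                edges[(scenario, a, b)] += 1.0

    return salience, edges

# Toy usage with made-up beach-day answers:
responses = {
    "beach": [
        [("don't litter", 0.0), ("watch your kids", 1.2), ("no loud music", 3.5)],
        [("watch your kids", 0.0), ("don't litter", 0.8)],
    ]
}
salience, edges = build_norm_network(responses)
print(max(salience, key=salience.get))  # the most salient (scenario, norm) pair
```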

He can pull a lever that would switch the train onto the second track, saving the passengers in the trolley but killing the repairman.

NARRATOR: Malle presents this scenario a few different ways: some of the participants watch a human make the decision, some see a humanoid robot, and some see a machine-like robot.

Moral Competence in Computational Architectures for Robots

“First Results of Human Perceptions of Emotional Intelligence in Humans Compared to Robots.” Proceedings of the Seventeenth International Conference on Intelligent Virtual Agents.

Vasanth Sarathy, Matthias Scheutz, and Bertram Malle (2017). “Learning Behavioral Norms in Uncertain and Changing Contexts.” Proceedings of the 2017 8th IEEE International Conference on Cognitive Infocommunications (CogInfoCom).

Matthias Scheutz, Scott DeLoach, and Julie Adams (2017). “A Framework for Developing and Using Shared Mental Models in Human-Agent Teams.” Journal of Cognitive Engineering and Decision Making, 11(3), 203-224.

Ethics and Artificial Intelligence: The Moral Compass of a Machine

We worry about the machine’s lack of empathy, how calculating machines are going to know how to do the right thing, and even how we are going to judge and punish beings of steel and silicon.

A self-driving car that plows into a crowd of people because its sensors fail to register them isn’t any more unethical than a vehicle that experiences unintended acceleration.

Finally, this is not about pathological examples such as hyperintelligent paper-clip factories that destroy all of humanity in single-minded efforts to optimize production at the expense of all other goals.

I would put this kind of example in the category of “badly designed.” And given that most of the systems that manage printer queues in our offices are smarter than a system that would tend to do this, it is probably not something that should concern us.

Given a rule that states that you should never kill anyone, it is pretty easy for a machine (or person for that matter) to know that it is wrong to murder the owner of its local bodega, even if it means that it won’t have to pay for that bottle of Chardonnay.

They provide a simple value ranking that, on the face of it at least, seems to make sense. The place where both robots and humans run into problems is in situations where following a rule is impossible, because every available choice violates that same rule.
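
As a hedged illustration of why that breaks down (the rules and weights below are invented, not any particular system’s), a machine working from a simple value ranking resolves the Chardonnay case trivially but gets no guidance at all once every option violates the same rule:

```python
# A toy value ranking; the rule names and weights are invented for illustration.
RULE_WEIGHTS = {
    "do not kill": 3,
    "do not injure": 2,
    "do not steal": 1,
}

def choose(options):
    """Pick the option whose violated rules carry the least total weight.
    Each option is a (label, set_of_violated_rules) pair."""
    def cost(option):
        _, violations = option
        return sum(RULE_WEIGHTS[rule] for rule in violations)
    return min(options, key=cost)[0]

# Easy case: the ranking clearly prefers paying for the Chardonnay.
print(choose([("walk out and pay", set()),
              ("steal the wine", {"do not steal"})]))

# Hard case: every option violates the same rule, so the ranking gives no
# guidance beyond a tie, which is exactly where humans and robots both stall.
print(choose([("stay the course", {"do not kill"}),
              ("pull the lever", {"do not kill"})]))
```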

Likewise, the victims on the first track could all be terminally ill with only days to live, or could all be convicted murderers who were on their way to death row before being waylaid.

In each of these cases, we begin to consider different ways to evaluate the trade-offs, moving from a simple tallying up of survivors to more nuanced calculations that take into account some assessment of their “value.” Even with these differences, the issue still remains one of a calculus of sorts.

As a result, although most people would pull the switch, those same people resist the idea of pushing their fellow commuter to his or her doom to serve the greater good.

Of course, as we increase the number of people on the track, there is a point at which most of us think we would overcome our horror and sacrifice the life of the lone commuter in order to save the five, ten, one hundred, or one thousand victims tied to the track.

Our determination of what is right or wrong becomes complex when we mix in emotional issues related to family, friends, tribal connections, and even the details of the actions that we take.

The metrics are ours to choose, and can be coarse-grained (save as many people as possible), nuanced (women, children and Nobel laureates first) or detailed (evaluate each individual by education, criminal history, social media mentions, etc.).
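
A small sketch, with every group, field, and weight invented for illustration, shows how directly the verdict tracks whichever metric we decide to plug in:

```python
# Every field, group, and weight below is invented purely for illustration.
def total_value(group, metric):
    return sum(metric(person) for person in group)

def coarse(person):
    # Save as many people as possible: everyone counts the same.
    return 1

def nuanced(person):
    # Women, children and Nobel laureates first.
    if person.get("nobel_laureate"):
        return 3
    return 2 if person.get("child") or person.get("woman") else 1

def detailed(person):
    # Score each individual on chosen attributes.
    return person.get("education", 0) - person.get("criminal_history", 0)

group_a = [{"child": True}, {}]
group_b = [{"nobel_laureate": True, "education": 5}]

for name, metric in [("coarse", coarse), ("nuanced", nuanced), ("detailed", detailed)]:
    choice = "A" if total_value(group_a, metric) >= total_value(group_b, metric) else "B"
    print(f"{name}: save group {choice}")
```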

The same holds for lawyers, religious leaders and military personnel, who establish special relationships with individuals, relationships that are protected by specific ethical codes.

It would not seem unreasonable for a machine to respond to a request for personal information by saying, “I am sorry, but he is my patient and that information is protected.” In much the same way that Apple protected its encryption in the face of demands from homeland security, it follows that robotic doctors will be asked to be HIPAA compliant.
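
The check behind such a refusal could be as simple as the sketch below; the record fields, roles, and canned response are assumptions made for illustration, not real HIPAA machinery:

```python
# A sketch of a confidentiality check a "robotic doctor" might run before
# answering; the roles, fields, and wording are invented for this example.
def respond_to_info_request(requester, record):
    if record["protected"] and requester not in record["care_team"]:
        return "I am sorry, but he is my patient and that information is protected."
    return record["notes"]

record = {"protected": True, "care_team": {"dr_rossum"}, "notes": "stable, resting"}
print(respond_to_info_request("insurance_agent", record))  # refused
print(respond_to_info_request("dr_rossum", record))        # disclosed
```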

In response, Smith screams, “Save the girl!” and the robot, demonstrating its newly learned humanity, turns its back on the primary goal and focuses on saving the girl.

Moral Math of Robots: Can Life and Death Decisions Be Coded?

A self-driving car has a split second to decide whether to turn into oncoming traffic or hit a child who has lost control of her bicycle. An autonomous drone needs ...

Metaethics: Crash Course Philosophy #32

We begin our unit on ethics with a look at metaethics. Hank explains three forms of moral realism – moral absolutism, and cultural relativism, including the ...

Global Ethics Forum: The Pros, Cons, and Ethical Dilemmas of Artificial Intelligence

From driverless cars to lethal autonomous weapons, artificial intelligence will soon confront societies with new and complex ethical challenges, says Yale's ...

Miles Brundage: Limitations and Risks of Machine Ethics (FHI Winter Intelligence)

Winter Intelligence 2012, Oxford University. Video thanks to Adam Ford.

Dominik Bösl about Disruptive Technologies and their Impact on the Age of Digitalization

Dominik Bösl of KUKA (Germany) spoke on Robotic and AI governance and their impact on the age of digitalization. He also spoke about approaching the future ...
