
How to Build a Moral Robot

Whether it’s in our cars, our hospitals or our homes, we’ll soon depend upon robots to make judgement calls in which human lives are at stake.

In order to pull it off, researchers will need to answer some important questions: How can we quantify the fuzzy, conflicting norms that guide human choices?

Imagine a rescue robot that detects two injured people in the rubble of an earthquake, but knows it doesn’t have time to save both.

If autonomous robots are going to hang with us, we’re going to have to teach them how to behave—which means finding a way to make them aware of the values that are most important to us.

Matthias Scheutz is a computer scientist at Tufts University who studies human-robot interaction, and he’s trying to figure out how to model moral reasoning in a machine.

Even as humans, we don’t really have any concrete rules about what’s right and wrong—at least, not ones we’ve managed to agree upon.

MATTHIAS SCHEUTZ: Right now, the major challenge for even thinking about how robots might be able to understand moral norms is that we don’t understand, on the human side, how humans represent and reason with moral norms.

Scheutz and his collaborator, Brown University psychologist Bertram Malle, have started by compiling a list of words, ideas and rules that people use to talk about morality: a basic moral vocabulary.

One theory is that the human moral landscape might look a lot like a semantic network, with clusters of closely related concepts that we become more or less aware of depending on the situation.

MALLE: Our hypothesis is that in any particular context, a subset of norms is activated, a particular set of rules related to that situation.

Malle starts off by picking a scenario—say, a day at the beach—and asking a whole bunch of people how they think they’re supposed to behave.

The order in which the participants mention certain rules, the number of times they mention them, and the time it takes between mentioning one idea and another—those are all concrete values.

By collecting data from enough different situations, Malle thinks he’ll be able to build a rough map of a human norm network.
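To make that idea concrete, here is a minimal sketch of how such response data might be aggregated into per-scenario activation scores. The Mention structure, the scoring formula and the example beach data are illustrative assumptions, not the model Malle’s group actually uses; the only point is that norms mentioned often, early and without hesitation end up scoring higher.

```python
# Minimal sketch: turn mention order, frequency, and latency into rough
# "activation" scores for norms in one scenario. The field names and the
# scoring formula are assumptions made for illustration only.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Mention:
    norm: str        # e.g. "don't litter"
    order: int       # position in the participant's list (1 = mentioned first)
    latency: float   # seconds of pause before this norm was mentioned

def norm_activation(responses: list[list[Mention]]) -> dict[str, float]:
    """Aggregate many participants' responses for one scenario (say, 'a day
    at the beach') into per-norm scores: frequent, early, quick mentions
    contribute more."""
    scores = defaultdict(float)
    for mentions in responses:
        for m in mentions:
            # Earlier mentions and shorter pauses are weighted more heavily.
            scores[m.norm] += 1.0 / m.order + 1.0 / (1.0 + m.latency)
    # Normalize by the number of participants so scenarios are comparable.
    return {norm: s / len(responses) for norm, s in scores.items()}

beach = [
    [Mention("don't litter", 1, 0.8), Mention("watch your kids", 2, 2.5)],
    [Mention("don't litter", 1, 1.1), Mention("don't play loud music", 2, 3.0)],
]
print(norm_activation(beach))
```

Repeating this for many scenarios would give the kind of rough norm map the passage describes, with each context activating its own cluster of highly weighted norms.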

He can pull a lever that would switch the train onto the second track, saving the passengers in the trolley but killing the repairman.

NARRATOR: Malle presents this scenario a few different ways: some of the participants watch a human make the decision, some see a humanoid robot, and some see a machine-like robot.

Ethics and Artificial Intelligence: The Moral Compass of a Machine

We worry about the machine’s lack of empathy, how calculating machines are going to know how to do the right thing, and even how we are going to judge and punish beings of steel and silicon.

A self-driving car that plows into a crowd of people because its sensors fail to register them isn’t any more unethical than a vehicle that experiences unintended acceleration.

Finally, this is not about pathological examples such as hyperintelligent paper-clip factories that destroy all of humanity in single-minded efforts to optimize production at the expense of all other goals.

I would put this kind of example in the category of “badly designed.” And given that most of the systems that manage printer queues in our offices are smarter than a system that would tend to do this, it is probably not something that should concern us.

Given a rule that states that you should never kill anyone, it is pretty easy for a machine (or person for that matter) to know that it is wrong to murder the owner of its local bodega, even if it means that it won’t have to pay for that bottle of Chardonnay.

They provide a simple value ranking that, on the face of it at least, seems to make sense. The place where both robots and humans run into problems is situations in which adherence to a rule is impossible, because all of the available choices violate the same rule.
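The sketch below illustrates that failure mode with a toy ranked-rule chooser. The rule names and the worst_violation/choose helpers are invented for this example and are not taken from the article: the ranking cleanly resolves conflicts between rules of different priority, but gives no answer when every option violates the same top-ranked rule.

```python
# Illustrative sketch (not from the article) of a simple ranked-rule checker.
# Lower index = higher priority; the rule names are invented for illustration.
RULES = ["do not harm humans", "obey instructions", "preserve yourself"]

def worst_violation(violations: list[str]) -> int:
    """Priority of the most serious rule an action violates
    (len(RULES) means it violates nothing)."""
    return min((RULES.index(r) for r in violations), default=len(RULES))

def choose(actions: dict[str, list[str]]) -> str | None:
    """Pick the action whose worst violation is least serious; return None
    when the ranking cannot distinguish the options."""
    ranked = sorted(actions, key=lambda a: worst_violation(actions[a]))
    best = ranked[0]
    runner_up = ranked[1] if len(ranked) > 1 else None
    if runner_up and worst_violation(actions[best]) == worst_violation(actions[runner_up]):
        return None  # every choice violates the same rule: the ranking is silent
    return best

# Trolley-style dilemma: both options violate "do not harm humans".
print(choose({
    "pull the lever": ["do not harm humans"],
    "do nothing": ["do not harm humans"],
}))  # -> None
```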

Likewise, the victims on the first track could all be terminally ill with only days to live, or could all be convicted murderers who were on their way to death row before being waylaid.

In each of these cases, we begin to consider different ways to evaluate the trade-offs, moving from a simple tallying up of survivors to more nuanced calculations that take into account some assessment of their “value.” Even with these differences, the issue still remains one of a calculus of sorts.

As a result, although most people would pull the switch, those same people resist the idea of pushing their fellow commuter to his or her doom to serve the greater good.

Of course, as we increase the number of people on the track, there is a point at which most of us think that we will overcome our horror and sacrifice the life of the lone commuter in order to save the five, ten, one hundred or one thousand victims tied to the track.

Our determination of what is right or wrong becomes complex when we mix in emotional issues related to family, friends, tribal connections, and even the details of the actions that we take.

The metrics are ours to choose, and can be coarse-grained (save as many people as possible), nuanced (women, children and Nobel laureates first) or detailed (evaluate each individual by education, criminal history, social media mentions, etc.).
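One way to picture that choice of metric is the short sketch below. The Person attributes and weights are arbitrary assumptions used only to show that swapping the metric, not the decision loop, is what changes which outcome the machine prefers.

```python
# Sketch of "the metrics are ours to choose": the same decision loop can use a
# coarse metric (count survivors) or a detailed one (weight each person by
# chosen attributes). The attributes and weights are invented for illustration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Person:
    age: int
    dependents: int

def coarse_metric(saved: list[Person]) -> float:
    # Coarse-grained: save as many people as possible.
    return float(len(saved))

def detailed_metric(saved: list[Person]) -> float:
    # Detailed: an (arbitrary) weighting of individual attributes.
    return sum(1.0 + 0.5 * p.dependents + (0.5 if p.age < 18 else 0.0) for p in saved)

def best_outcome(outcomes: dict[str, list[Person]],
                 metric: Callable[[list[Person]], float]) -> str:
    # The ethics lives in the metric, not in this loop.
    return max(outcomes, key=lambda name: metric(outcomes[name]))

outcomes = {
    "swerve": [Person(30, 3), Person(45, 2)],                  # saves 2 adults with dependents
    "brake":  [Person(8, 0), Person(10, 0), Person(70, 0)],    # saves 3 people
}
print(best_outcome(outcomes, coarse_metric))    # -> "brake"
print(best_outcome(outcomes, detailed_metric))  # -> "swerve"
```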

The same holds for lawyers, religious leaders and military personnel who establish special relationships with individuals that are protected by specific ethical codes.

It would not seem unreasonable for a machine to respond to a request for personal information by saying, “I am sorry, but he is my patient and that information is protected.” In much the same way that Apple protected its encryption in the face of homeland security demands, it follows that robotic doctors will be asked to be HIPAA compliant.
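A rough sketch of what such a refusal might look like in code, assuming a hypothetical respond_to_query function and an invented policy table; this is not a real HIPAA implementation, only an illustration of checking authorization before disclosure.

```python
# Hedged sketch: a care robot's dialogue layer consults a simple policy before
# disclosing anything about a patient. Names and policy structure are invented.
PROTECTED_FIELDS = {"diagnosis", "medications", "test_results"}
AUTHORIZED_ROLES = {"attending_physician", "patient"}

def respond_to_query(requester_role: str, patient: str, field: str) -> str:
    if field in PROTECTED_FIELDS and requester_role not in AUTHORIZED_ROLES:
        return f"I'm sorry, but {patient} is my patient and that information is protected."
    return f"{field} for {patient}: <details omitted in this sketch>"

print(respond_to_query("visitor", "Mr. Jones", "diagnosis"))
```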

In response, Smith screams, “Save the girl!” and the robot, demonstrating its newly learned humanity, turns its back on the primary goal and focuses on saving the girl.

Moral Math of Robots: Can Life and Death Decisions Be Coded?

A self-driving car has a split second to decide whether to turn into oncoming traffic or hit a child who has lost control of her bicycle. An autonomous drone needs ...

Genetic Engineering Will Change Everything Forever – CRISPR

Designer babies, the end of diseases, genetically modified humans that never age. Outrageous things that used to be science fiction are suddenly becoming ...

Personhood: Crash Course Philosophy #21

Now that we've started talking about identity, today Hank tackles the question of personhood. Philosophers have tried to assess what constitutes personhood ...

Metaethics: Crash Course Philosophy #32

We begin our unit on ethics with a look at metaethics. Hank explains three forms of moral realism – moral absolutism, and cultural relativism, including the ...

Joshua Greene: "Moral Tribes: Emotion, Reason, and the Gap Between Us and Them" | Talks at Google

Joshua Greene stops by the Googleplex for a conversation with Kent Walker. You can find Joshua's book on Google Play. Our brains were ...

2017 Personality 06: Jean Piaget & Constructivism

In this lecture, I talk about the great developmental psychologist Jean Piaget, who was interested, above all, in the way that knowledge is generated and ...

Mr. Robot | Making Vigilantes

Pattern Theory examines vigilantism in USA's Mr. Robot, why Elliot and Darlene started fsociety, and why revolution isn't for everyone. Twitter: @Apatterntheory ...

Should Journalism Be Objective? Serial: Part 2 | Idea Channel | PBS Digital Studios

Yascha Mounk: "The People vs. Democracy" | Talks at Google

Yascha Mounk is a writer, academic and public speaker known for his work on the rise of populism and the crisis of liberal democracy. He is a Lecturer on ...

Rory Smead - "Indirect Reciprocity and the Evolution of 'Moral Signals'"

Rory Smead, Logic and Philosophy of Science, UCI, "Indirect Reciprocity and the Evolution of 'Moral Signals'"