AI News, How to Build a Moral Robot


Whether it’s in our cars, our hospitals, or our homes, we’ll soon depend on robots to make judgment calls in which human lives are at stake.

To pull that off, researchers will need to answer some important questions: How can we quantify the fuzzy, conflicting norms that guide human choices?

Imagine, for example, a rescue robot that detects two injured people in the rubble of an earthquake but knows it doesn’t have time to save both.

If autonomous robots are going to hang with us, we’re going to have to teach them how to behave—which means finding a way to make them aware of the values that are most important to us.

Matthias Scheutz is a computer scientist at Tufts University who studies human-robot interaction, and he’s trying to figure out how to model moral reasoning in a machine.

Even as humans, we don’t really have any concrete rules about what’s right and wrong—at least, not ones we’ve managed to agree upon.

MATTHIAS SCHEUTZ: Right now, the major challenge in even thinking about how robots might be able to understand moral norms is that we don’t understand, on the human side, how humans represent and reason with moral norms.

Scheutz and his collaborators have started by compiling a list of words, ideas and rules that people use to talk about morality: a basic moral vocabulary.

One theory is that the human moral landscape might look a lot like a semantic network, with clusters of closely related concepts that we become more or less aware of depending on the situation.

BERTRAM MALLE: Our hypothesis is that in any particular context, a subset of norms is activated: a particular set of rules related to that situation.
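The semantic-network idea above can be sketched in code. In this toy model, norms are nodes, related norms are linked, and a context directly evokes a few seed norms whose activation then spreads to their neighbors. All of the norms, links, weights, and parameters here are invented for illustration; they are not from Malle's actual data.

```python
# Toy semantic network of norms. Edge weights (invented) encode how
# closely two norms are related.
NORM_LINKS = {
    "don't litter": {"keep shared spaces clean": 0.9, "respect others": 0.5},
    "keep shared spaces clean": {"don't litter": 0.9},
    "watch your children": {"keep others safe": 0.8},
    "keep others safe": {"watch your children": 0.8, "respect others": 0.4},
    "respect others": {"don't litter": 0.5, "keep others safe": 0.4},
}

def activate(seed_norms, links, decay=0.5, threshold=0.2):
    """Spread activation outward from the norms a context directly evokes.

    Each hop multiplies activation by the edge weight and a decay factor,
    so distant norms end up only weakly activated (or not at all).
    """
    activation = {n: 1.0 for n in seed_norms}
    frontier = list(seed_norms)
    while frontier:
        node = frontier.pop()
        for neighbour, weight in links.get(node, {}).items():
            spread = activation[node] * weight * decay
            if spread > activation.get(neighbour, 0.0) and spread > threshold:
                activation[neighbour] = spread
                frontier.append(neighbour)
    return activation

# A "day at the beach" context might directly evoke two norms; related
# norms light up more weakly.
beach = activate({"don't litter", "watch your children"}, NORM_LINKS)
```

The key property this sketch illustrates is context dependence: a different set of seed norms (say, for a hospital visit) would activate a different neighborhood of the same network.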

Malle starts off by picking a scenario—say, a day at the beach—and asking a whole bunch of people how they think they’re supposed to behave.

The order in which the participants mention certain rules, the number of times they mention them, and the time it takes between mentioning one idea and another—those are all concrete values.

By collecting data from enough different situations, Malle thinks he’ll be able to build a rough map of a human norm network.
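The measurements described above (mention order, mention frequency, and latency between mentions) can be combined into rough salience scores. The following is a hypothetical sketch of that step; the sample responses, the weighting formula, and the norm names are all invented, not Malle's actual method or data.

```python
from collections import defaultdict

# Each simulated response: an ordered list of (norm, seconds before it
# was mentioned). Earlier, quicker, more frequent mentions should score
# higher.
responses = [
    [("don't litter", 2.0), ("watch your children", 5.5), ("no loud music", 9.0)],
    [("watch your children", 1.5), ("don't litter", 4.0)],
    [("don't litter", 3.0), ("no loud music", 6.0)],
]

def salience(responses):
    """Aggregate a crude salience score per norm across participants."""
    scores = defaultdict(float)
    for response in responses:
        for rank, (norm, latency) in enumerate(response):
            # Earlier rank and shorter latency contribute more; summing
            # across responses rewards frequent mentions.
            scores[norm] += 1.0 / (rank + 1) + 1.0 / latency
    return dict(scores)

# Norms ranked from most to least salient for this context.
ranked = sorted(salience(responses).items(), key=lambda kv: -kv[1])
```

Repeating this across many scenarios would give one node weighting per context, which is the raw material for the kind of norm-network map the passage describes.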

In the scenario, a bystander can pull a lever that would switch the trolley onto a second track, saving the trolley’s passengers but killing a repairman working there.

NARRATOR: Malle presents this scenario a few different ways: some of the participants watch a human make the decision, some see a humanoid robot, and some see a machine-like robot.

