AI News
- On 5 June 2018
Self-driving cars may soon be able to make moral and ethical decisions as humans do
The research, "Virtual reality experiments investigating human behavior and moral assessments," from the Institute of Cognitive Science at the University of Osnabrück and published in Frontiers in Behavioral Neuroscience, used immersive virtual reality to study human behavior in simulated road-traffic scenarios.
For example, a leading new initiative from the German Federal Ministry of Transport and Digital Infrastructure (BMVI) has defined 20 ethical principles for self-driving vehicles, covering behavior in the case of unavoidable accidents, on the critical assumption that human moral behavior cannot be modeled.
Gordon Pipa, a senior author of the study, says that since it now seems possible to program machines to make human-like moral decisions, it is crucial that society engages in an urgent and serious debate: 'We need to ask whether autonomous systems should adopt moral judgements. If yes, should they imitate moral behavior by imitating human decisions, or should they behave according to ethical theories, and if so, which ones? And critically, if things go wrong, who or what is at fault?'
- On 19 June 2018
Can we trust robots to make moral decisions?
Last week, Microsoft inadvertently revealed the difficulty of creating moral robots.
In hospitals, for example, APACHE medical systems help determine the best treatments for patients in intensive care units, often those who are at the edge of death.
The first approach is to decide on a specific ethical law (maximize happiness, for example), write code implementing that law, and create a robot that strictly follows it.
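As a toy illustration of this top-down approach (not from the article, and with made-up utility numbers), a strictly rule-following agent under a "maximize happiness" law might look like:

```python
# Toy sketch of a top-down coded ethical law: "maximize happiness."
# The utility estimates below are invented for illustration only.

def predicted_happiness(action):
    """Hypothetical happiness outcomes for each candidate action."""
    utilities = {
        "swerve": -10,    # minor harm to passenger
        "brake": -2,      # small delay, no harm
        "continue": -50,  # serious harm to pedestrian
    }
    return utilities[action]

def choose_action(actions):
    # The coded law applied strictly: pick the action with the
    # highest total predicted happiness, no exceptions.
    return max(actions, key=predicted_happiness)

print(choose_action(["swerve", "brake", "continue"]))  # -> brake
```

The point of the sketch is the structure, not the numbers: once the law is fixed in code, the robot's behavior is entirely determined by whatever utility estimates it is given.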
“We still argue as human beings about the correct moral framework we should use, whether it’s a consequentialist utilitarian means-justify-the-ends approach, or a Kantian deontological rights-based approach.”
The second is to have a machine learn moral behavior from examples of human decisions. This is similar to how humans learn morality, though it raises the question of whether humans are, in fact, the best moral teachers.
His work relies a great deal on top-down coding and less so on machine learning—after all, you wouldn’t want to send someone into a military situation and leave them to figure out how to respond.
Meanwhile, Susan Anderson, a philosophy professor at the University of Connecticut, is working with her husband Michael Anderson, a computer science professor at the University of Hartford, to develop robots that can provide ethical care for the elderly.
In one case, they created an intelligent system to decide on the ethical course of action when a patient had refused the advised treatment from a healthcare worker.
This involves many complicated ethical duties, including respect for the autonomy of the patient, possible harm to the patient, and possible benefit to the patient.
Anderson found that once the robot had been taught a moral response to four specific scenarios, it was then able to generalize and make an appropriate ethical decision in the remaining 14 cases.
From this, she was able to derive the ethical principle: "You should attempt to convince the patient if either the patient is likely to be harmed by not taking the advised treatment or the patient would lose considerable benefit."
Although in that early work, the robot was first coded with simple moral duties—such as the importance of preventing harm—the Andersons have since done work where no ethical slant was assumed.
Improving morality through robots
But though we may not want to leave the most advanced ethical decisions to machines just yet, work on robotic ethics is advancing our own understanding of morality.
Anderson points out that the history of ethics shows a steadily building consensus—and work on robot ethics can contribute to refining moral reasoning.
“Because we were talking about whether robots could morally reason, it forced us to look at capabilities humans have that we take for granted, and what role they may have in making ethical decisions,” Anderson says.
It might simply be impossible to reduce human ethical decision-making into numerical values for robots to understand, says Lin. How do we codify compassion or mercy, for example?
- On 19 June 2021
Moral Machine - Human Perspectives on Machine Ethics
Website: A platform for public participation in and discussion of the human perspective on machine-made moral decisions. Offer your ...
Stanford Seminar: Building Machines That Understand and Shape Human Emotion
CS547: Human-Computer Interaction Seminar. AI's Final Frontier? Building Machines That Understand and Shape Human Emotion. Speaker: Jonathan Gratch, ...
3 Brain Systems That Control Your Behavior: Reptilian, Limbic, Neo Cortex | Robert Sapolsky
Steven Pinker: On Free Will
There's no such thing as free will in the sense of a ghost in the machine; our behavior is the product of physical processes in the brain rather ...
Moral Math of Robots: Can Life and Death Decisions Be Coded?
A self-driving car has a split second to decide whether to turn into oncoming traffic or hit a child who has lost control of her bicycle. An autonomous drone needs ...
The Science of Compulsive Online Behavior | Mary Aiken
The average person checks their phone 200 times a day. It borders on addiction for some, but according to cyberpsychologist Mary Aiken there are easy ways to ...
Good Citizens Should Understand Behavioral Economics | Bill Wood | TEDxDeerfield
There is burgeoning demand in America for greater understanding of the field of Economics. A near-universal lack of sufficient knowledge about topics such as ...
The paradox of choice | Barry Schwartz
Psychologist Barry Schwartz takes aim at a central tenet of western societies: freedom of choice. In Schwartz's estimation, choice has made ...
PHILOSOPHY - Ethics: Moral Status [HD]
Jeff Sebo (N.I.H.) discusses the nature of moral status. What does it take for someone to be a subject of moral concern? Do they have to be human? Rational?
Public Choice Theory: Why Government Often Fails
Governments don't work the way most people think they do. Public choice theory explores how voters, politicians, and bureaucrats actually make decisions. Prof.