AI News, BOOK REVIEW: Would You Trust a Robot to Give Your Grandmother Her Meds?

Would You Trust a Robot to Give Your Grandmother Her Meds?

Why do we get nervous when we think about robots working among us instead of tethered to the factory floor?

The machines and systems we’re reporting on in this issue will need to be able to make decisions based not only on gigantic data sets but on human behaviors and the social rules, customs, laws, and values we use to navigate our world.

They’ll need to be able to figure out the “right” thing to do, in real time, the way any one of us could, in order to swerve around a jaywalking pedestrian, perform a life-saving surgical maneuver, or distinguish friendly citizens from enemy soldiers in a war zone, all without a helping human hand.

To build robots that inspire trust, roboticists are working with psychologists, sociologists, linguists, anthropologists, and other scientists to understand a lot more about what makes us trustworthy and reliable in all the different roles we play.

This furor over robots working among us presents us with a marvelous opportunity to get things right and not repeat the mistakes made with respect to privacy and security, accountability and human rights, in the headlong rush to create the modern Internet.

Can we trust robots to make moral decisions?

Last week, Microsoft inadvertently revealed the difficulty of creating moral robots when its Tay chatbot, designed to talk like a teenage girl, began parroting racist and inflammatory language learned from Twitter users within a day of going online.

In hospitals, APACHE medical systems already help determine the best treatments for patients in intensive care units—often those who are at the edge of death.

The first approach is to decide on a specific ethical law (maximize happiness, for example), write code that captures that law, and create a robot that strictly follows the code.
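As a rough sketch of that top-down approach (not any particular researcher's system), a "maximize happiness" law could be coded as nothing more than picking whichever candidate action scores highest on estimated net happiness; the action names and scores below are invented for illustration.

```python
# Minimal sketch of a top-down, rule-following agent: the ethical "law"
# (here, utilitarian happiness maximization) is hard-coded, and the robot
# simply executes it. All actions and scores are hypothetical examples.

def choose_action(candidate_actions):
    """Pick the action whose estimated net happiness is highest."""
    return max(candidate_actions, key=lambda a: a["estimated_net_happiness"])

actions = [
    {"name": "administer_medication", "estimated_net_happiness": 0.8},
    {"name": "wait_and_notify_doctor", "estimated_net_happiness": 0.5},
    {"name": "do_nothing", "estimated_net_happiness": 0.1},
]

print(choose_action(actions)["name"])  # -> administer_medication
```

The obvious weakness, as the quote below suggests, is that the coded law itself embeds one contested moral framework.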

“We still argue as human beings about the correct moral framework we should use, whether it’s a consequentialist utilitarian ends-justify-the-means approach, or a Kantian deontological rights-based approach.”

The alternative approach, in which the machine learns ethical behavior from experience, is similar to how humans learn morality, though it raises the question of whether humans are, in fact, the best moral teachers.

His work relies a great deal on top-down coding and less so on machine learning—after all, you wouldn’t want to send a robot into a military situation and leave it to figure out how to respond.

Meanwhile, Susan Anderson, a philosophy professor at the University of Connecticut, is working with her husband Michael Anderson, a computer science professor at the University of Hartford, to develop robots that can provide ethical care for the elderly.

In one case, they created an intelligent system to decide on the ethical course of action when a patient had refused the advised treatment from a healthcare worker.

This involves many complicated ethical duties, including respect for the autonomy of the patient, possible harm to the patient, and possible benefit to the patient.

Anderson found that once the robot had been taught a moral response to four specific scenarios, it was then able to generalize and make an appropriate ethical decision in the remaining 14 cases.

From this, she was able to derive the ethical principle: “That you should attempt to convince the patient if either the patient is likely to be harmed by not taking the advised treatment or the patient would lose considerable benefit.”
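Read literally, that derived principle can be written down directly as a rule over the system's duty assessments. The sketch below only illustrates the stated rule; it is not the Andersons' actual system, and the numeric thresholds and parameter names are assumptions.

```python
# Sketch of the stated principle: try to convince the patient whenever
# refusing treatment would likely harm them OR would cost them a
# considerable benefit; otherwise respect the refusal.
# (Illustrative only; thresholds and names are invented.)

def ethical_action(likely_harm_if_refused: float,
                   benefit_lost_if_refused: float,
                   harm_threshold: float = 0.5,
                   benefit_threshold: float = 0.5) -> str:
    if (likely_harm_if_refused >= harm_threshold
            or benefit_lost_if_refused >= benefit_threshold):
        return "attempt to convince the patient"
    return "accept the patient's refusal"

# A patient refusing a low-stakes reminder vs. a critical medication:
print(ethical_action(0.1, 0.2))  # -> accept the patient's refusal
print(ethical_action(0.9, 0.8))  # -> attempt to convince the patient
```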

Although in that early work, the robot was first coded with simple moral duties—such as the importance of preventing harm—the Andersons have since done work where no ethical slant was assumed.

Improving morality through robots

But though we may not want to leave the most advanced ethical decisions to machines just yet, work on robotic ethics is advancing our own understanding of morality.

Anderson points out that the history of ethics shows a steadily building consensus—and work on robot ethics can contribute to refining moral reasoning.

“Because we were talking about whether robots could morally reason, it forced us to look at capabilities humans have that we take for granted and what role they may have in making ethical decisions.”

It might simply be impossible to reduce human ethical decision making into numerical values for robots to understand, says Lin—how do we codify compassion or mercy, for example?

When to Trust Robots with Decisions, and When Not To

Smarter and more adaptive machines are rapidly becoming as much a part of our lives as the internet, and more of our decisions are being handed over to intelligent algorithms that learn from ever-increasing volumes and varieties of data.

Credit card fraud detection and spam filtering have higher levels of predictability, but current-day systems still generate significant numbers of false positives and false negatives.

However, while it may be tempting to limit the analysis to discussions of predictive power and infer that “high signal problems can be robotized and low signal ones require humans,” this one-dimensional view is incomplete.

Spam filtering is a tricky “adversarial” problem in which spammers actively try to fool the filter. Filters are therefore tuned to avoid blocking legitimate content, keeping the costly false positives very low at the expense of letting some spam through, an error whose cost is also low.
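One way to make that asymmetry concrete is to write out the filter's expected cost per message; the error rates and costs below are invented purely for illustration.

```python
# Expected cost of a spam filter under asymmetric error costs.
# Blocking a legitimate message (false positive) is assumed far more
# costly than letting a spam message through (false negative).
# All rates and costs are illustrative, not measured values.

COST_FALSE_POSITIVE = 50.0   # legitimate mail blocked
COST_FALSE_NEGATIVE = 1.0    # spam delivered

def expected_cost(fp_rate: float, fn_rate: float,
                  spam_fraction: float = 0.5) -> float:
    """Average cost per message for the given error rates."""
    return ((1 - spam_fraction) * fp_rate * COST_FALSE_POSITIVE
            + spam_fraction * fn_rate * COST_FALSE_NEGATIVE)

# An aggressive filter vs. a conservative one:
print(expected_cost(fp_rate=0.02, fn_rate=0.05))    # ~0.525
print(expected_cost(fp_rate=0.001, fn_rate=0.20))   # ~0.125
```

With costs this lopsided, the conservative setting wins even though it lets far more spam through, which is exactly the tuning described above.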

The costs for fighter drone decisions are also clearly high (accidentally bombing a hospital instead of a munitions depot, say), but this problem differs from driverless cars in at least two ways: drones are used in warfare, where there’s more tolerance for errors than on suburban roadways, and using them mitigates the substantial risk to pilots flying over enemy territory.

Below the automation frontier, we see several problems, such as high-frequency trading and online advertising, which have already been automated to a large degree because the cost per error is low relative to the benefits of reliable and scalable decision making.

In contrast, above the frontier, we find that even the best current diabetes prediction systems still generate too many false positives and negatives, each with a cost that is too high to justify purely automated use.
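A minimal way to express the frontier idea is as an expected-value test: automate only when predictability is high enough that the expected cost of errors stays below the benefit of each automated decision. The figures below are hypothetical, not estimates from the article.

```python
# Sketch of the "automation frontier" as a simple expected-value test:
# automate a decision problem only if the expected error cost per decision
# stays below the benefit of automating that decision.
# All figures are hypothetical.

def below_automation_frontier(error_rate: float,
                              cost_per_error: float,
                              benefit_per_decision: float) -> bool:
    return error_rate * cost_per_error < benefit_per_decision

# Online ad placement: errors are cheap, so automation clears the bar.
print(below_automation_frontier(0.10, cost_per_error=0.05,
                                benefit_per_decision=0.02))   # True

# Diabetes prediction: each error is costly, so it stays above the frontier.
print(below_automation_frontier(0.10, cost_per_error=10_000,
                                benefit_per_decision=50))     # False
```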

On the other hand, the availability of genomic and other personal data could improve prediction accuracy dramatically and create trustworthy robotic healthcare professionals in the future.

For example, as driverless cars improve and we become more comfortable with them, the introduction and resolution of laws limiting their liability could facilitate the emergence of insurance markets that should drive down the cost of error.

Perhaps the biggest challenge for the deployment of data-driven learning machines is the uncertainty associated with how they will deal with “edge cases” that are encountered for the first time, such as the obstacles encountered by Google’s driverless car that caused a minor accident.

Statistically, self-driving cars are about to kill someone. What happens next?

There are good arguments for why some ethical decisions ought to be left to computers—unlike human beings, machines are not led astray by cognitive biases, do not experience fatigue, and do not feel hatred toward an enemy.

Yet a machine weighing only outcomes is the epitome of a consequentialist decision maker, and people tend to distrust that style of moral reasoning. This distaste for consequentialism has been demonstrated across a number of psychological studies in which participants are given hypothetical moral dilemmas that pit consequentialism against a more rule-based morality.

In the “footbridge dilemma” for instance, participants are told a runaway train is set to kill five innocent people who are stuck on the train tracks.

Its progress can be stopped with certainty by pushing a very large man, who happens to be standing on a small footbridge overlooking the tracks, to his death below (where his body will stop the train before it can kill the other five).

This is precisely what we found: across 9 experiments, with more than 2400 participants, people who favoured the rule-based approach to a number of sacrificial moral dilemmas (including the footbridge dilemma) were seen as more trustworthy than those who based their judgments on the consequences of an action.

In an economic game designed to assess trust, we found that participants entrusted more money, and were more confident that they would get it back, when dealing with someone who refused to sacrifice people for the greater good compared to someone who made moral decisions based on consequences.

In these cases, the presence of decisional conflict served as a positive signal about the person, perhaps indicating that despite her decision, she felt the pull of moral rules.

In our fellow humans, we prefer an (arguably) irrational commitment to certain rules no matter what the consequences, and we prefer those whose moral decisions are guided by social emotions like guilt and empathy.
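Economic trust games of the kind described above typically use an investment structure in which whatever is entrusted gets multiplied before the partner decides how much to send back; the multiplier, endowment, and return fraction below are assumptions for illustration, not the study's actual parameters.

```python
# Sketch of a standard investment-style trust game (parameters assumed,
# not taken from the study): the truster sends part of an endowment,
# the experimenter multiplies it, and the trustee chooses how much to
# return. Sending more signals greater trust in the partner.

def trust_game(endowment: float, amount_sent: float,
               multiplier: float, fraction_returned: float):
    transferred = amount_sent * multiplier
    returned = transferred * fraction_returned
    truster_payoff = endowment - amount_sent + returned
    trustee_payoff = transferred - returned
    return truster_payoff, trustee_payoff

# Entrusting a lot to a rule-based partner believed likely to reciprocate:
print(trust_game(endowment=10, amount_sent=8, multiplier=3,
                 fraction_returned=0.5))   # (14.0, 12.0)

# Entrusting little to a partner seen as willing to sacrifice others:
print(trust_game(endowment=10, amount_sent=2, multiplier=3,
                 fraction_returned=0.5))   # (11.0, 3.0)
```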

And indeed, a recent Human Rights Watch report argued for a moratorium on research aiming to create “Killer Robots” because such robots would not feel the “natural inhibition of humans not to kill or hurt fellow human beings”.

The Air Force Wants You to Trust Robots--Should You?

A British fighter jet was returning to its base in Kuwait after a mission on the third day of the 2003 Iraq War when a U.S. anti-missile system spotted it, identified it as an enemy missile, and fired.

Indeed, it’s become more pressing as the military comes to rely more and more on automation, and spends huge sums of money researching and developing artificial intelligence.

Heather Roff, a professor at the University of Colorado who studies ethics and military technology, says those friendly fire incidents highlight what experts call automation bias. “There’s a pop-up screen that says: if you take no action I will fire,” she says.
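The interface detail Roff describes, where taking no action means the system fires, is one concrete way automation bias creeps in. The sketch below simply contrasts that fire-by-default prompt with one that requires an affirmative confirmation; the function names and behavior are invented for illustration.

```python
# Sketch contrasting two confirmation designs for an automated weapon
# prompt (purely illustrative; names and behavior are invented).
# In the first, doing nothing means the system fires, so a distracted or
# overly trusting operator "approves" by default, the automation-bias
# trap Roff describes. In the second, firing requires an explicit yes.

def fire_by_default(operator_response: str | None) -> bool:
    # Fires unless the operator actively vetoes before the timeout.
    return operator_response != "veto"

def fire_on_confirmation(operator_response: str | None) -> bool:
    # Fires only if the operator explicitly confirms.
    return operator_response == "confirm"

# An operator who never responds (the timeout expires, response is None):
print(fire_by_default(None))        # True: the system fires anyway
print(fire_on_confirmation(None))   # False: the system holds fire
```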

Daryl Mayer, an Air Force spokesman, tells Vocativ that the work they’re doing is centered on how humans use machines. “Our research centers on the trust calibration process, so rather than focus on simply ‘more’

And what if someone just woke up from a nap and sees a truck in the oncoming lane that poses no threat but the person’s natural reaction is to swerve violently away?

One study found that some soldiers who used explosive-disposal robots “formed such a strong bond with their explosive-disposal robots that they insist on getting the same robot back after it is repaired or become sad if their damaged robot cannot be repaired.”

Asaro isn’t concerned about lying robots, but he does note that robots might be able to get humans to do something they don’t want to do, including, perhaps, things that many people would see as positive, like getting elderly people to take needed medications.

“We are probably a long way from robots that could trick people into doing things that are good for them—they would need much better capabilities for reading social cues, using social cues, as well as deciphering and manipulating human desires and incentives,”

he says. “If you spill wine on your carpet and your house-cleaning robot starts recommending specific products to clean it, is that because it is the best cleaning method or is it due to a commercial agreement between the robot manufacturer and the carpet stain remover manufacturer?”

Most Advanced A.I. Robot Admits It Wants to Destroy Humans After Glitch During TV Interview

Sophia, the world's most advanced A.I. humanoid robot, had an embarrassing glitch during an interview with CNBC when "she" admitted she wants to destroy humans. While many are laughing off the...

2017 Audi AI - Test Drive with Humanoid Robot Sophia

2017 Audi AI - Test drive with Jack and Sophia. Prof. Rupert Stadler, Chairman of the Board of Management of AUDI AG, keynote speech at the United Nations “AI for Good Global Summit”. I was invited...

Moral Math of Robots: Can Life and Death Decisions Be Coded?

A self-driving car has a split second to decide whether to turn into oncoming traffic or hit a child who has lost control of her bicycle. An autonomous drone needs to decide whether to risk...

Robocop-Like Sentry Robot Guards South Korean Border

A Samsung Group subsidiary has worked on a robot sentry that they call the SGR-A1, and this particular robot will carry a fair amount of weapons that ought to make you think twice about crossing...

Military Robots

What do advancements in AI mean for the military? Military robotics has come a long way with advancements in machine learning, the soaring affordability of computing power, and the rise of...

DEF CON 24 - Jianhao Liu, Chen Yan, Wenyuan Xu - Can You Trust Autonomous Vehicles?

To improve road safety and driving experiences, autonomous vehicles have emerged recently, and they can sense their surroundings and navigate without human inputs. Although promising and proving...

Teaching robots to be more human

Danica Kragic Jensfelt and her research group at KTH Royal Institute of Technology are developing robots that learn new tasks on their own, that can understand an object is a cup even though...

Slaughterbots


Robot Tries to Escape from Children's Attack

This video is part of “Escaping from Children's Abuse of Social Robots,” by Dražen Brščić, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Takayuki Kanda from ATR Intelligent Robotics...

iml robots - stack mould - www.cildan.com.tr

Çildan Mechatronics Industry & Trade Limited was established in 1973 in Istanbul's 4.Levent Industrial Auto Complex, one of Turkey's first industrial complexes. It began using the...