
Teaching Robots the Rules of War

In May, we posted about a group of researchers from Georgia Tech who have been working on an “ethical governor” for military robots.

Dr. Ronald Arkin, director of Georgia Tech’s Mobile Robot Laboratory, was interviewed by H+ magazine on the subject, and we’ve got some choice excerpts below.

In his recent book, Governing Lethal Behavior in Autonomous Robots, Dr. Arkin explores a number of complex real-world scenarios where robots with ethical governors would “do the right thing” in consultation with humans on the battlefield.

But I don’t believe there is any fundamental scientific limitation to achieving the goal of these machines being able to discriminate better than humans can in the fog of war, again in tightly specified situations.

This is pretty much exactly what we were saying back in February, when the media freak-out of the week was killer robots: in a nutshell, a robot soldier can be programmed to perform just as well as, and in some cases more effectively than, a human soldier in specific combat situations.

With some reluctance, we have engineered a human override capability into the system, but one which forces the operator to explicitly assume responsibility for any ethical infractions that might result as a consequence of such an override.

If you just take a step back and look at it logically, you realize that, just like humans, robots can be taught to follow rules, obey regulations, and make ethical decisions… And they can probably do it more strictly and reliably than humans can.

Should a robot decide when to kill?

Mark Gubrud, an accomplished academic, first proposed a ban on autonomous weapons back in 1988.

He’s typically polite, but talk of robotics brings out his combative side: he approached DARPA director Arati Prabhakar at one point during the DARPA Robotics Challenge and tried to get her to admit that the agency is developing autonomous weapons.

He may have been the lone voice of dissent among the hundreds of robot-watchers at DARPA’s event, but Gubrud has some muscle behind him: the International Committee for Robot Arms Control (ICRAC), an organization founded in 2009 by experts in robotics, ethics, international relations, and human rights law.

If robotics research continues unchecked, ICRAC warns, the future will be a dystopian one in which militaries arm robots with nuclear weapons, countries start unmanned wars in space, and dictators use killer robots to mercilessly control their own people.

One recent report from the US Air Force notes that 'by 2030 machine capabilities will have increased to the point that humans will have become the weakest component in a wide array of systems and processes.'

'If we can protect innocent civilian life, I do not want to shut the door on the use of this technology,' says Ron Arkin, PhD, a roboticist and ethicist at the Georgia Institute of Technology who has collaborated extensively with Pentagon agencies on various robotics systems.

Arkin proposes that an 'ethical governor,' a set of rules that approximates an artificial conscience, could be programmed into the machines in order to ensure compliance with international humanitarian law.
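Stated that briefly, the proposal can sound abstract. As a rough illustration only (the rule names, fields, and thresholds below are our own invention, not Arkin's actual rule set), such an artificial conscience could be encoded as a set of predicates that every proposed action must satisfy:

```python
# Illustrative toy rule set standing in for an "artificial conscience".
# The rules and fields are hypothetical, not taken from Arkin's design.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    target_type: str          # e.g. "combatant", "civilian", "protected_site"
    expected_collateral: int  # estimated civilian harm

# Each rule returns True when the proposed action is permissible under it.
RULES = {
    "distinction": lambda a: a.target_type == "combatant",
    "no_protected_sites": lambda a: a.target_type != "protected_site",
    "proportionality": lambda a: a.expected_collateral == 0,
}

def permitted(action: ProposedAction) -> bool:
    """Compliant only if every rule in the set permits the action."""
    return all(rule(action) for rule in RULES.values())

print(permitted(ProposedAction(target_type="civilian", expected_collateral=0)))  # False
```

In this toy form the rules are declarative checks the machine consults before acting, which is the sense in which they approximate a conscience rather than a targeting algorithm.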

But at least for now, that means being able to process the command 'take a step' versus 'lift the right foot 2 inches, move it forward 6 inches, and set it down.'
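The gap between those two command levels is easy to see in code. The trivial sketch below is purely illustrative; the function name and motion strings are made up and do not correspond to any real robot controller API:

```python
# Hypothetical illustration of high-level vs. low-level robot commands;
# none of these names correspond to a real controller interface.

def take_a_step() -> list[str]:
    """Expand the abstract command 'take a step' into the explicit
    motions a low-level controller would actually execute."""
    return [
        "lift right foot 2 inches",
        "move right foot forward 6 inches",
        "set right foot down",
        "shift weight onto right foot",
    ]

for motion in take_a_step():
    print(motion)
```

As the quoted passage notes, handling the first, abstract form of command is roughly where autonomy stands today.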

Teaching Robots the Rules of War

Cyclone Power Technologies announced that it had completed the first stage of development for a beta biomass engine system used to power RTI’s Energetically Autonomous Tactical Robot (EATR™), a most unfortunate choice of name for a grass eater.

The ethical governor is a software architecture that provides “ethical control and a reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Geneva Conventions, the Laws of War, and the Rules of Engagement.”
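That description casts the governor as a layer that constrains actions rather than initiates them. Below is a speculative architectural sketch; the class name, the contents of the constraint sets, and the situation fields are our assumptions, not Arkin's code or actual encodings of the cited law:

```python
# Speculative sketch of an "ethical governor" as a veto layer between the
# targeting system and the weapon. All names and constraints are illustrative.

from typing import Callable, Dict, List

Constraint = Callable[[dict], bool]  # True means the constraint is satisfied

CONSTRAINT_SETS: Dict[str, List[Constraint]] = {
    "geneva_conventions":  [lambda s: not s.get("target_is_civilian", True)],
    "laws_of_war":         [lambda s: s.get("military_necessity", False)],
    "rules_of_engagement": [lambda s: s.get("inside_engagement_zone", False)],
}

class EthicalGovernor:
    def request_engagement(self, situation: dict) -> bool:
        """Permit the engagement only if every constraint in every set holds;
        otherwise suppress it before it reaches the weapon."""
        violated = [name
                    for name, constraints in CONSTRAINT_SETS.items()
                    if not all(check(situation) for check in constraints)]
        if violated:
            print("engagement suppressed; constraint sets violated:", violated)
            return False
        return True
```

The asymmetry is the point: in this reading the governor can only withhold a lethal action, never generate one, which is what “constraining lethal actions … within the bounds” suggests.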

Rather than guiding a missile to its intended target, Arkin’s robotic guidance system is being designed to reduce the need for humans in harm’s way: “… appropriately designed military robots will be better able to avoid civilian casualties than existing human war fighters and might therefore make future wars more ethical.”

In his recent book, Governing Lethal Behavior in Autonomous Robots, Dr. Arkin explores a number of complex real-world scenarios where robots with ethical governors would “do the right thing” in consultation with humans on the battlefield.

These are for the so-called war after next, and the DoD would need to conduct extensive additional research in order to develop the accompanying technology to support the proof-of-concept work I have developed.

But I don’t believe there is any fundamental scientific limitation to achieving the goal of these machines being able to discriminate better than humans can in the fog of war, again in tightly specified situations.

h+: How does the process of introducing moral robots onto the battlefield get bootstrapped and field tested to avoid serious and potentially lethal "glitches"?

It likely would involve the military’s battle labs, field experiments, and force-on-force exercises to evaluate the effectiveness of the ethical constraints on these systems prior to their deployment, which is fairly standard practice.

This can be minimized, I believe, by the use of bounded morality: limiting their deployment to narrow, tightly prescribed situations rather than the full spectrum of combat.

With some reluctance, we have engineered a human override capability into the system, but one which forces the operator to explicitly assume responsibility for any ethical infractions that might result as a consequence of such an override.
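Two design points in that answer, the restriction to tightly prescribed situations and an override that shifts responsibility to the human, can be illustrated together. The sketch below is a speculative rendering under our own assumptions; the envelope fields, log format, and function names are invented and not taken from Arkin's system:

```python
# Speculative sketch of "bounded morality" plus an operator override that
# explicitly records the transfer of responsibility. All fields are invented.

from datetime import datetime, timezone

# Deployment envelope: lethal autonomy is considered at all only inside one
# narrowly prescribed situation (mission type and zone are illustrative values).
ENVELOPE = {"mission": "building_clearing", "zone": "sector_7"}

def within_bounds(situation: dict) -> bool:
    """True only when the situation matches the prescribed deployment envelope."""
    return all(situation.get(key) == value for key, value in ENVELOPE.items())

override_log = []

def operator_override(operator_id: str, situation: dict, justification: str) -> None:
    """Overriding the governor requires the operator to explicitly accept
    responsibility, and that acceptance is recorded for later review."""
    override_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "operator": operator_id,
        "situation": situation,
        "justification": justification,
        "responsibility_assumed": True,
    })
```

The illustrative mission type echoes the specialized operations (building clearing, counter-sniper work) Arkin mentions below.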

So from my point of view, the dangers posed are not unlike those for any new battlefield advantage, whether it be gunpowder, the crossbow, or other similar inventions over the centuries.

Nonetheless, I think it is essential that international discussions be held at an early stage of their development regarding what is acceptable and unacceptable in the use of armed unmanned systems (that is…

They will conduct specialized operations (for example, building clearing, counter sniper operations, and so forth) that will provide an asymmetric advantage to our war fighters.

I have often said these are just baby steps toward the goal of unmanned systems being able to outperform human soldiers from an ethical standpoint, and not simply by the metric of body count.

Are autonomous robots the future of warfare? Experts warn of the dangers of using 'smart' weapons on the battlefield

At a recent meeting, researchers said they were concerned these war machines could engage in unethical behavior and become a playground for hackers. Unlike today’s drones, which are entirely controlled by humans, autonomous weapons in the future could potentially select and engage targets on their own.

'Most of us believe that we don't have the ability to build ethical robots.' 'What is especially worrying is that the various militaries around the world will be fielding robots in just a few years, and we don't think anyone will be building ethical robots.' In order to build an 'ethical' robot, researchers say that programming it with specific rules would be one way to keep it from going rogue.

'Most systems are still fire and forget, and even the advanced systems are designed not to choose a target, but to correct to hit the target.' Paul Scharre, who spoke at a press briefing at the World Economic Forum, also noted that even though autonomous weapons are not forbidden in war, it will be a challenge to create ones that comply with accepted rules of engagement.