AI News

Why Should We Ban Autonomous Weapons? To Survive

The major powers are developing autonomous missiles and drones that will hunt ships, subs, and tanks, and are piecing together highly automated battle networks that will confront each other and be capable of operating without human control.

Although human error appears to have played the deciding role in that incident, part of the problem was excessive reliance on complex automated systems under time pressure and uncertain warnings of imminent danger—the classic paradigm for “accidental war.” At the time, as an intern at the Federation of American Scientists in Washington, D.C., I was looking at nanotechnology and the rush of new capabilities that would come as we learned to build ever more complex systems with ever smaller parts.

I also knew that unless we resolved not to cross that line, we would soon enter an era in which, once the fighting had started, the complexity and speed of automated combat, and the delegation of lethal autonomy as a military necessity, would put the war machines effectively beyond human control.

Systems like the CAPTOR mine, designed to autonomously launch a homing torpedo at a passing submarine, and the LOCAAS mini-cruise missile, designed to loiter above a battlefield and search for tanks or people to kill, were canceled or phased out.

As late as 2013, a poll conducted by Charli Carpenter, a political science professor at the University of Massachusetts Amherst, found Americans opposed to the use of autonomous weapons by a 2-to-1 margin; tellingly, military personnel were among those most opposed to killer robots.

In a 2004 article, Juergen Altmann and I declared that “Autonomous ‘killer robots’ should be prohibited” and added that “a human should be the decision maker when a target is to be attacked.” In 2009, Altmann, a professor of physics at Technische Universität Dortmund, co-founded the International Committee for Robot Arms Control, and at its first conference a year later, I suggested human control as a fundamental principle.

Many statements at the CCW have endorsed human control as a guiding principle, and Altmann and I have suggested cryptographic proof of accountable human control as a way to verify compliance with a ban on autonomous weapons.
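
To make that idea concrete: one minimal form of cryptographic proof of human control would have an accountable operator digitally sign each engagement order, so that the weapon, or an after-the-fact auditor, can verify that a specific human authorized a specific attack. The sketch below is only an illustration of that general approach, not the specific scheme we proposed; the message format, function names, and the choice of Ed25519 signatures via Python's `cryptography` package are all assumptions made for the example.

```python
# Illustrative sketch only: "cryptographic proof of accountable human control"
# as a signature scheme. Names and message format are invented, not any real
# or proposed system. Requires the `cryptography` package.

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Each accountable operator holds a private signing key; the matching
# public key would be registered with an auditing authority.
operator_key = Ed25519PrivateKey.generate()
operator_pub = operator_key.public_key()

def sign_order(order: bytes) -> bytes:
    """The human operator signs a specific engagement order."""
    return operator_key.sign(order)

def verify_order(order: bytes, signature: bytes) -> bool:
    """Anyone holding the registered public key can confirm that this
    operator authorized this exact order."""
    try:
        operator_pub.verify(signature, order)
        return True
    except InvalidSignature:
        return False

# Hypothetical usage: a signed order verifies; a tampered one does not.
order = b"engage target id=42 at 2016-10-07T12:00:00Z"
sig = sign_order(order)
assert verify_order(order, sig)
assert not verify_order(b"tampered order", sig)
```

The essential design point is that a valid signature binds a named human to a particular order, producing an audit trail that a decision delegated entirely to a machine could not.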

In 2012, the Obama administration, via then-deputy secretary of defense Ashton Carter, directed the Pentagon to begin developing, acquiring, and using “autonomous and semi-autonomous weapon systems.” Directive 3000.09 has been widely misperceived as a policy of caution.

In fact, the policy has not stood in the way of programs such as the Long Range Anti-Ship Missile, slated for deployment in 2018, which will hunt its targets over a wide expanse, relying on its own computers to discriminate enemy ships from civilian vessels.

Weapons like this are classified as merely “semi-autonomous” and get a green light without certification, even though they will be operating fully autonomously when they decide which pixels and signals correspond to valid targets, and attack them with lethal force.

Every technology needed to acquire, track, identify, home in on, and control firing on targets can be developed and used in “semi-autonomous weapon systems,” which can even be sent on hunt-and-kill missions as long as the quarry has been “selected by a human operator.” (In case you’re wondering, “target selection” is defined as “The determination that an individual target or a specific group of targets is to be engaged.”) It’s unclear that the policy stands in the way of anything.

Yet Deputy Defense Secretary Robert Work worries that adversaries may field fully autonomous weapon systems, and says the U.S. may need to “delegate authority to machines” because “humans simply cannot operate at the same speed.” He admits that the United States has no monopoly on the basic enabler, information technology, which today is driven more by commercial markets than by military needs.

Our experience with the unpredictable failures and unintended interactions of complex software systems, particularly competitive autonomous agents designed in secrecy by hostile teams, serves as a warning that networks of autonomous weapons could accidentally ignite a war and, once it has started, rapidly escalate it out of control.

Paul Scharre, one of the architects of Directive 3000.09, has suggested that the risk of autonomous systems acting on their own could be mitigated by negotiating “rules of the road” and including humans in battle networks as “fail-safes.” But it’s asking a lot of humans to remain calm when machines indicate that an attack is underway.

Banning autonomous weapons and asserting the primacy of human control isn’t a complete solution, but it is probably an essential step to ending the arms race and building true peace and security.

Ban on killer robots urgently needed, say scientists

The short, disturbing film “Slaughterbots” is the latest attempt by campaigners and concerned scientists to highlight the dangers of developing autonomous weapons that can find, track and fire on targets without human supervision.

The manufacture and use of autonomous weapons, such as drones, tanks and automated machine guns, would be devastating for human security and freedom, and the window to halt their development is closing fast, warned Stuart Russell, a professor of computer science at the University of California, Berkeley.

While military drones have long been flown remotely for surveillance and attacks, autonomous weapons armed with explosives and target recognition systems are now within reach and could locate and strike without deferring to a human controller.

Because AI-powered machines are relatively cheap to manufacture, critics fear that autonomous weapons could be mass produced and fall into the hands of rogue nations or terrorists who could use them to suppress populations and wreak havoc, as the movie portrays.

The open letter, signed by Tesla’s chief executive, Elon Musk, and Mustafa Suleyman, a co-founder of Alphabet’s DeepMind AI unit, warned that an urgent ban was needed to prevent a “third revolution in warfare”, after gunpowder and nuclear arms.

“There is an emerging arms race among the hi-tech nations to develop autonomous submarines, fighter jets, battleships and tanks that can find their own targets and apply violent force without the involvement of meaningful human decisions.

It will only take one major war to unleash these new weapons with tragic humanitarian consequences and destabilisation of global security.”

Criminals and activists have long relied on masks and disguises to hide their identities, but new computer vision techniques can essentially see through them.

“The UK is not developing lethal autonomous weapons systems, and the operation of weapons systems by the UK armed forces will always be under human oversight and control,” a Foreign Office spokesperson said at the time.

Autonomous Weapons: An Open Letter from AI & Robotics Researchers

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions.

Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits.

Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

'Killer robots': autonomous weapons pose moral dilemma

The United Nations began talks on Monday on lethal autonomous weapons systems amid calls for an international ban on these 'killer robots' that could change the nature of warfare.

'The deadly consequence of this is that machines — not people — will determine who lives and dies.'

While the rapid development of artificial intelligence and robotics in the past decade has led to improvements for consumers, the transport sector and human health, the military application of greater autonomy in weapons systems has evoked images of Terminator-type sci-fi war machines entering the battlefield to hunt down adversaries without any human behind the controls.

In some cases, such as with cruise missiles, their sensors and terrain-based guidance systems can lead to more targeted strikes and fewer unintended casualties compared to traditional bombing.

But the experts who signed the August letter expressed moral concern over the development of fully autonomous weapons systems out of today's semi-autonomous and human-supervised autonomous systems.

'A smart armed system can become a dumb armed system quickly.'

A fully autonomous system gone awry could take unwanted action in complex battlefield situations, target civilians or engage in friendly fire. The possibility of mistakes with little or no human role also raises questions around the laws of war and military policy, such as who bears responsibility. 'To what extent can we hold a military commander who deploys such a system responsible, if there is no meaningful way for him or her to predict how it will behave?'

Still, he cautioned that, given the pace of advances in science and technology, the future capabilities of autonomous weapons are difficult to predict, which may require legal and policy changes.

Several countries, including the United States, Russia, China and Israel, are researching or developing lethal autonomous weapons systems out of concern that adversaries may not be bound by humanitarian, moral and legal constraints, raising the prospect of a 'killer robot' arms race in the years to come as the technology improves. As Russian President Vladimir Putin said in September, whoever is the leader in artificial intelligence 'will become the ruler of the world.'

Arms Control Today

October 2016
By Frank Sauer

Autonomous weapons systems have drawn widespread media attention, particularly since last year’s open letter signed by more than 3,000 artificial intelligence (AI) and robotics researchers warning against an impending “military AI arms race.”1 Since 2013, discussion of such weapons has been climbing the arms control agenda of the United Nations.

They are a topic at the Human Rights Council and the General Assembly First Committee on disarmament and international security, but the main venue of the debate is the Convention on Certain Conventional Weapons (CCW) in Geneva.2 So far, CCW countries have convened for three informal meetings of experts on the topic and in December will decide whether to continue and deepen their deliberations by establishing a group of governmental experts next year.

There arguably is a tacit understanding in the expert community and among diplomats in Geneva that the debate’s main focus is on future, mobile weapons platforms equipped with onboard sensors, computers, and decision-making algorithms with the capability to seek, identify, track, and attack targets autonomously.

They are supposed to allow for greater restraint and also better discrimination between civilians and combatants, resulting in an application of force in strict or stricter accordance with international humanitarian law.5

Problems With Autonomy

In light of these anticipated benefits, one might expect militaries to unequivocally welcome the introduction of autonomous weapons systems.

The potential for high-tempo fratricide, much greater than at human intervention speeds, incentivizes militaries to retain humans in the chain of decision-making as a fail-safe mechanism.6

Above and beyond such tactical concerns, these systems threaten to introduce a destabilizing factor at the strategic level.

An uncontrolled escalation from crisis to war is entirely within the realm of possibility.7

Human decision-making in armed conflict requires complex assessments to ensure a discriminate and proportionate application of military force in accordance with international humanitarian law.

A closely related aspect is that it remains unclear who would be legally accountable if civilians were unlawfully injured or killed by autonomous weapons systems, especially because targeting processes in modern militaries are such an immensely complex, strategic, multilevel endeavor.

An artificially intelligent system tasked with autonomous targeting would thus not only need to replace various human specialists, creating what has become known as the “accountability gap” because a machine cannot be court-martialed; it would essentially require a human abdication of political decision-making.9

Leaving military, legal, and political considerations aside moves a more fundamental problem into focus.

From an ethical point of view, it is argued that autonomous weapons systems violate fundamental human values.10 Delegating the decision to kill a human to an algorithm in a machine, which is not responsible for its actions in any meaningful ethical sense, can arguably be understood to be an infringement on basic human dignity, representing what in moral philosophy is known as a malum in se, a wrong in itself.

This consideration is reflected in the public’s deep concerns, in the United States and internationally, regarding autonomy in weapons systems.11 In sum, there are many reasons—military, legal, political, ethical—for engaging in preventive arms control measures regarding autonomous weapons systems.

CCW deliberations on cluster munitions failed in 2011 to produce an outcome, leaving the 2008 Convention on Cluster Munitions, created outside CCW and UN auspices, as the sole international instrument to specifically regulate these weapons.

Yet, so far autonomous weapons systems have been the subject of exceptionally dynamic talks and have climbed the CCW agenda with unprecedented speed.

On the other hand, the CCW has a fearsome reputation as a place where good ideas go to die a slow death.

The civil society movement pushing for a legally binding prohibition on autonomous weapons systems within the CCW framework is organized and spearheaded by the Campaign to Stop Killer Robots, a coalition of more than 61 groups in 26 countries coordinated by Human Rights Watch.

Thus the argument is that human control over life-and-death decisions must always be in place in a significant or meaningful fashion, as more than just a mindless pushing of a button by a human in response to a machine-processed stream of information.

According to current practice, a human weapons operator must have sufficient information about the target and sufficient control over the weapon, and must be able to assess its effects, in order to make decisions in accordance with international law.

The first would be a legally binding and preventive multilateral arms control agreement reached by consensus in the CCW and thus involving the major stakeholders, an outcome referred to here as “a ban.” Considering the growing number of states-parties calling for a ban and the large number of governments calling for meaningful human control and expressing considerable unease with the idea of autonomous weapons systems, combined with the fact that no government is openly promoting their development, this seems possible.

Implementing autonomy, which mainly comes down to software, in systems drawn from a vibrant global ecosystem of unmanned vehicles in various shapes and sizes is a technical challenge, but doable for state and nonstate actors, particularly because so much of the hardware and software is dual use.

An unchecked autonomous weapons arms race and the diffusion of autonomous killing capabilities to extremist groups would clearly be detrimental to international peace, stability, and security.

This underlines the importance of the current opportunity for putting a comprehensive, verifiable ban in place.

Although this process holds important lessons, for instance regarding the valuable input that epistemic communities and civil society can provide, it also raises vexing questions, particularly whether and how arms control can find better ways of tackling issues from a qualitative rather than quantitative angle.

The autonomous weapons systems example points to a future in which dual use reigns supreme and numbers matter less than capabilities, with the weapons systems to be regulated potentially being disposable, 3D-printed units with their intelligence distributed in swarms.

States can use the upcoming CCW review conference in December to go above and beyond the recommendation from the 2016 meeting on lethal autonomous weapons systems and agree to establish an open-ended group of governmental experts with a strong mandate to prepare the basis for new international law, preferably via a ban.

The currently nascent social taboo against machines autonomously making kill decisions meets all the requirements for spawning a “humanitarian security regime.”15

Autonomous weapons systems would not be the first instance in which an issue has taken an indirect path, through comparatively softer social international norms and stigmatization, to a codified arms control agreement.

The so-called first offset was the use of nuclear deterrence against the large conventional forces of the Soviet Union starting in the 1950s, and the second offset in the 1970s and 1980s was the fielding of precision munitions and stealth technologies to counter the air and ground forces of adversaries.

MICRO DRONES KILLER ARMS ROBOTS - AUTONOMOUS ARTIFICIAL INTELLIGENCE - WARNING !!

Killer drone arms and artificial intelligence: an increasingly real fiction. Social and smartphone facial recognition, smart swarms. It's hard to believe how far...

The Dawn of Killer Robots (Full Length)

In INHUMAN KIND, Motherboard gains exclusive access to a small fleet of US Army bomb disposal robots—the same platforms the..

"Slaughterbot" Autonomous Killer Drones | Technology

Perhaps the most nightmarish, dystopian film of 2017 didn't come from Hollywood. Autonomous weapons critics, led by a college professor, put together a horror show. It's a seven-minute video,...

KILLER ROBOTS ARE COMING: Google & Tesla Beg Awareness

Elon Musk and Mustafa Suleyman have written an open letter urging the UN to block the use of lethal autonomous weapons to prevent the third age of war! The..

Fictional 'Slaughterbots' film warns of autonom...

In 'Slaughterbots,' autonomous drones use artificial intelligence to decide who to kill. The short film was commissioned by college professors and researchers to warn the public of the dangers...

Air Force Bugbot Nano Drone Technology

The Air Force Bugbots nano-drone video gives a peek inside the nano-drone technology the federal government is currently implementing within the United States, more than a scary thought or sci-fi...

AI arms race fuels fears of killer robots

The first-ever UN meeting on lethal autonomous weapons systems focused on the implications of military use of artificial intelligence. Human rights groups and tech leaders warn that AI...

Killer Robots - Scientists Call For Ban

A disturbing and horrific short film profiles the imminent danger of killer robots. The film is a tool scientists are using to call for a ban...

A licence to kill for autonomous weapons?

Autonomous weapons are an emotive subject, with the potential to change the whole nature of warfare. Could machines one day carry out killing without human control, and what should...

Meet the dazzling flying machines of the future | Raffaello D'Andrea

When you hear the word "drone," you probably think of something either very useful or very scary. But could drones have aesthetic value? Autonomous systems expert Raffaello D'Andrea develops...