AI News


An artificial intelligence expert has called for countries to ban so-called “killer robots” before activists’ warnings against them become a reality.

In response to criticism about the slow pace of progress, he said: “I think we have to be careful in not emotionalising or dramatising this issue.”

“[Artificial intelligence’s] potential to benefit humanity is enormous, even in defense,” said Stuart Russell, a professor of computer science at the University of California, Berkeley, who featured in the film released by the Campaign to Stop Killer Robots.

“We have an opportunity to prevent the future you just saw, but the window to act is closing fast.”

Earlier this month, hundreds of AI experts urged the Canadian and Australian governments to treat autonomous weapons in the same way as chemical, biological and nuclear weapons, arguing that delegating life-or-death decisions to machines crosses a moral line and must not be allowed to happen.

Hundreds of A.I. experts echo Elon Musk, Stephen Hawking in call for a ban on killer robots

The development of AI technology has the potential to help society in any number of areas, including transportation, education, health, the arts, the military and medicine, the letter points out.

'It is for these reasons that Canada's AI research community is calling on you and your government to make Canada the 20th country in the world to take a firm global stand against weaponizing AI.

Lethal autonomous weapons systems that remove meaningful human control from determining the legitimacy of targets and deploying lethal force sit on the wrong side of a clear moral line.'

'As many AI and robotics corporations — including Australian companies — have recently urged, autonomous weapon systems threaten to become the third revolution in warfare.

'Robots are not taking over,' says head of UN body on autonomous weapons

“Robots are not taking over the world,” the diplomat leading the first official talks on autonomous weapons assured on Friday, seeking to head off criticism over slow progress towards restricting the use of so-called “killer robots”.

The United Nations was wrapping up an initial five days of discussions on weapons systems that can identify and destroy targets without human control, which experts say will soon be battle ready.

Twenty-two countries, mostly those with smaller military budgets and less technical know-how, have called for an outright ban, arguing that automated weapons are by definition illegal, as every individual decision to launch a strike must be made by a human.

Most nations now agree on the need for a new “legally binding instrument” controlling the use of killer robots and most “states now accept that some form of human control must be maintained over weapons systems”, a campaign statement said.

“I am actually quite confident that we will ban these weapons … My only concern is whether [countries] have the courage of conviction to do it now, or whether we will have to wait for people to die first.”

Sorry, Banning ‘Killer Robots’ Just Isn’t Practical

Late Sunday, 116 entrepreneurs, including Elon Musk, released a letter to the United Nations warning of the dangerous “Pandora’s Box” presented by weapons that make their own decisions about when to kill.

Moreover, technologies such as robotic aircraft and ground vehicles have proved so useful that armed forces may find giving them more independence—including to kill—irresistible.

A recent report on artificial intelligence and war commissioned by the Office of the Director of National Intelligence concluded that the technology is set to massively magnify military power.

Greg Allen, co-author of the report and now an adjunct fellow at the nonpartisan think tank the Center for a New American Security, doesn’t expect the US and other countries to be able to stop themselves from building arsenals of weapons that can decide when to fire.

Pentagon spokesperson Roger Cabiness said that the US has declined to endorse a ban on autonomous weapons, noting that the department’s Law of War Manual specifies that autonomy can help forces meet their legal and ethical obligations.

Manufacturer Israel Aerospace Industries markets the Harpy as a “‘Fire and Forget’ autonomous weapon.”

Musk signed an earlier letter in 2015, alongside thousands of AI experts in academia and industry, that called for a ban on offensive use of autonomous weapons.

Other regulations short of a ban could try to clear up the murky question of who is held legally accountable when a piece of software makes a bad decision, for example by killing civilians.

Ban on killer robots urgently needed, say scientists

The short, disturbing film is the latest attempt by campaigners and concerned scientists to highlight the dangers of developing autonomous weapons that can find, track and fire on targets without human supervision.

The manufacture and use of autonomous weapons, such as drones, tanks and automated machine guns, would be devastating for human security and freedom, and the window to halt their development is closing fast, Russell warned.

While military drones have long been flown remotely for surveillance and attacks, autonomous weapons armed with explosives and target recognition systems are now within reach and could locate and strike without deferring to a human controller.

Because AI-powered machines are relatively cheap to manufacture, critics fear that autonomous weapons could be mass produced and fall into the hands of rogue nations or terrorists who could use them to suppress populations and wreak havoc, as the movie portrays.

The open letter, signed by Tesla’s chief executive, Elon Musk, and Mustafa Suleyman, a co-founder of Alphabet’s DeepMind AI unit, warned that an urgent ban was needed to prevent a “third revolution in warfare”, after gunpowder and nuclear arms.

“There is an emerging arms race among the hi-tech nations to develop autonomous submarines, fighter jets, battleships and tanks that can find their own targets and apply violent force without the involvement of meaningful human decisions.

It will only take one major war to unleash these new weapons, with tragic humanitarian consequences and destabilisation of global security.”

“The UK is not developing lethal autonomous weapons systems, and the operation of weapons systems by the UK armed forces will always be under human oversight and control,” a Foreign Office spokesperson said at the time.

UN panel to debate 'killer robots' and other AI weapons

A United Nations panel agreed Friday to consider guidelines and potential limitations for military uses of artificial intelligence, amid concerns from human rights groups and other leaders about so-called “killer robots”.

The talks fall under the Convention on Certain Conventional Weapons, a 37-year-old agreement that has set limits on the use of arms and explosives like mines, blinding laser weapons and booby traps over the years.

Officials say fully autonomous, computer-controlled weapons do not yet exist, but defining exactly what killer robots are, and how much human interaction is involved, was a key focus of the meeting.

The concept alone stirs the imagination and fears, as dramatised in futuristic Hollywood science-fiction films depicting uncontrolled robots deciding on their own to fire weapons and kill people.

The United States, in comments presented, said autonomous weapons could help improve guidance of missiles and bombs against military targets, thereby "reducing the likelihood of inadvertently striking civilians."

Some top academics, such as Stephen Hawking, along with technology experts and human rights groups, have warned about the threats posed by artificial intelligence, amid concerns that it might one day control such weapons systems.

Third UN meeting on killer robots, April 2016

We shot this 3:22 film at the third Convention on Certain Conventional Weapons, or "CCW", meeting on lethal autonomous weapons systems at the United Nations in ...

MIT AGI: Autonomous Weapons Systems Policy (Richard Moyes)

This is a talk by Richard Moyes for course 6.S099: Artificial General Intelligence. He is the Co-Founder and Managing Director of Article 36, which is a UK-based ...

Slaughterbots. UN panel meets to define ‘killer robot’ threat (VIDEO)

A UN panel met this week to discuss 'killer robots' amid concerns ...

Regulating Autonomous Weapons Systems

Denise Howell and Emory Roane talk to Rebecca Crootof about killer military robots, liability and accountability. For the full episode, visit

John Yoo on Killer Robots, Space Weapons and The Future of Warfare | Close Encounters Ep. 1

In the first episode of Close Encounters, John Yoo and Ben Weingarten discuss Yoo's new book 'Striking Power: How Cyber, Robots, and Space Weapons ...

Should We Ban Autonomous Weapons Systems?

Elizabeth Quintana, Senior Research Fellow, RUSI, assesses calls made by human rights groups to ban the use of autonomous weapons systems.

AI leaders Musk, Tegmark, and DeepMind call for autonomous weapons systems ban

Prominent artificial intelligence thought leaders, including SpaceX and ...

Ban Killer Robots

Ian Kerr, Canada Research Chair in Ethics, Law, and Technology at the University of Ottawa, was interviewed on Canada AM ...

UN to debate 'killer robot' ban next year

Experts warn time is running out to stop AI weapons. Countries around the world have now agreed to begin the formal ...

Killer Robots in War and Civil Society - Noel Sharkey #TOA15

Noel Sharkey, Professor Emeritus of the University of Sheffield, considers the technical, ethical, legal challenges for the use of Autonomous Weapons Systems ...