
Researchers question if banning of 'killer robots' actually will stop robots from killing

Killer robots are at the center of classic stories told in films such as 'The Terminator' and the original Star Trek television series' 'The Doomsday Machine,' yet the idea of fully autonomous weapons acting independently of any human agency is no longer the exclusive province of science fiction writers.

'We have to deconstruct the term 'killer robot' into smaller cultural techniques,' says Tero Karppi, assistant professor of media study, whose paper with Marc Böhlen, UB professor of media study, and Yvette Granta, a graduate student at the university, appears in the International Journal of Cultural Studies.

'We need to go back and look at the history of machine learning, pattern recognition and predictive modeling, and how these things are conceived,' says Karppi, an expert in critical platform and software studies whose interests include automation, artificial intelligence and how these systems fail.

In media theory, the cultural-techniques approach examines the various working parts and multiple evolutionary chains of thought, technology, imagination and knowledge production, and how these practices turn into actual systems, products and concepts.

The Pentagon allocated $18 billion of its latest budget to develop systems and technologies that could form the basis of fully autonomous weapons, instruments that independently seek, identify and attack enemy combatants or targets, according to The New York Times.

Killer Robots as cultural techniques

From reading the materials of the Campaign to Stop Killer Robots and focusing on current robotic research in the military context, we argue that, instead of demonizing killer robots as such, we need to understand the tools, processes and operating procedures that create, support and validate these objects.

The framework of cultural techniques helps us analyze how autonomous technologies draw distinctions between life and death, human and machine, culture and technology, and what it means to be in control of these systems in the 21st century.

Killer Robots Won’t Go to War If Global Movement Has Its Way

Human Rights Watch is one of more than 60 nongovernmental organizations (NGOs) that have coalesced around the campaign, which launched in April 2013 with the single-minded goal to "preemptively ban the development, production and use of fully autonomous weapons." The coalition includes experts in artificial intelligence, human rights groups, former diplomats and even a group of Nobel Peace Prize laureates led by Jody Williams (known for her work to ban land mines), from about two dozen countries, in what Human Rights Watch's Mary Wareham, who coordinates the campaign, calls a "truly global campaign" to stop what have been dubbed "lethal autonomous weapons systems," or LAWS.

The Pentagon is spending billions of dollars to develop what it calls autonomous and semi-autonomous weapons and to build an arsenal stocked with the kind of weaponry that until now has existed only in Hollywood movies and science fiction, raising alarm among scientists and activists concerned by the implications of a robot arms race. The concern among groups in the Campaign to Stop Killer Robots isn't fear of a dystopian Terminator or Matrix world, but the removal of human control, judgment and conscience from the theater of war.

In part, the Pentagon's directive on autonomous weapons says, "Autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force." The issue has, in the relative terms of government bureaucracy, quickly moved to the forefront of international discourse, according to Wareham.

“While killer robots operate on a global scale, on the public scale we have AI that works in the context of finance, healthcare and social institutions, and on the private scale we have social bots operating on websites, virtual assistants at home and in our smartphones, algorithms recommending stuff on social media platforms.

In the paper we try to deconstruct what killer robots are into smaller elements and techniques.” For Wareham, whatever ambiguity might surround terms like “killer robot” or “meaningful human control,” which the NGOs have pushed as a measure of where to draw the line in the computer code, is secondary to global action on this issue sooner rather than later.

University at Buffalo researchers say autonomous weapons 'have and will become possible'

Researchers have warned it is already too late to stop killer robots - and say banning them would be little more than a temporary solution.

University at Buffalo researchers claim 'society is entering into a situation where systems like these have and will become possible.' Elon Musk and Professor Stephen Hawking have both warned that artificial intelligence could develop a will of its own that is in conflict with that of humanity, and could herald dangers like powerful autonomous weapons.


'What are the principles and ideologies of building an automated system?' asks Karppi.

'Previously humans have had the agency on the battlefield to pull the trigger, but what happens when this agency is given to a robot, and because of its complexity we can't even trace why particular decisions are made in particular situations?'

The team says the ethics programmed into the machines are key. 'Consider how both software and ethical systems operate on certain rules,' says Karppi.

'The distinctions between combatant and non-combatant, human and machine, life and death are not drawn by a robot,' write the authors. 'While it may be the robot that pulls the trigger, the actual operation of pulling is a consequence of a vast chain of operations, processes and calculations.'

Karppi says it is necessary to unpack two different elements in the case of killer robots.
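The point that both software and ethical systems "operate on certain rules" can be made concrete with a deliberately toy sketch. Everything below is hypothetical for illustration only, not drawn from the paper or any real system: the function, field names and thresholds are invented, and each one is a human design decision embedded in the chain of operations the authors describe.

```python
# Hypothetical sketch: a hand-written rule set illustrating how the
# distinction between 'combatant' and 'non-combatant' ends up encoded
# as programmed rules. The machine only applies rules; humans chose them.

def classify(contact: dict) -> str:
    """Apply invented, illustrative rules to a detected contact.

    Each condition below is an arbitrary human choice; tracing *why*
    a given output was produced means tracing these authored rules,
    not interrogating the machine's own 'judgment'.
    """
    if contact.get("carrying_weapon") and contact.get("in_restricted_zone"):
        return "combatant"
    return "non-combatant"

print(classify({"carrying_weapon": True, "in_restricted_zone": True}))
print(classify({"carrying_weapon": True, "in_restricted_zone": False}))
```

Even in this trivial case, the output is a consequence of rules authored upstream of the machine, which is the sense in which the distinction is "not drawn by a robot."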
