AI News

Do We Want Robot Warriors to Decide Who Lives or Dies?

Lately, fears of fiction turning to fact have been stoked by a confluence of developments, including important advances in artificial intelligence and robotics, along with the widespread use of combat drones and ground robots in Iraq and Afghanistan.

But it’s likely, and some say inevitable, that future AI-powered weapons will eventually be able to operate with complete autonomy, leading to a watershed moment in the history of warfare: For the first time, a collection of microchips and software will decide whether a human being lives or dies.

The poles of the debate are represented by those who fear that robotic weapons could start a world war and destroy civilization and others who argue that these weapons are essentially a new class of precision-guided munitions that will reduce, not increase, casualties.

Last year, the debate made news after a group of leading researchers in artificial intelligence called for a ban on “offensive autonomous weapons beyond meaningful human control.” In an open letter presented at a major AI conference, the group argued that these weapons would lead to a “global AI arms race” and be used for “assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.” The letter was signed by more than 20,000 people, including such luminaries as physicist Stephen Hawking and Tesla CEO Elon Musk, who last year donated US $10 million to a Boston-based institute whose mission is “safeguarding life” against the hypothesized emergence of malevolent AIs.

Three of the letter’s organizers, among them Toby Walsh of the University of New South Wales, Australia, expanded on their arguments in an online article for IEEE Spectrum, envisioning, in one scenario, the emergence “on the black market of mass quantities of low-cost, antipersonnel microrobots that can be deployed by one person to anonymously kill thousands or millions of people who meet the user’s targeting criteria.” The three added that “autonomous weapons are potentially weapons of mass destruction. While some nations might not choose to use them for such purposes, other nations and certainly terrorists might find them irresistible.”

It’s hard to argue that a new arms race culminating in the creation of intelligent, autonomous, and highly mobile killing machines would serve humanity’s best interests.

“It might be a high-intensity straight-on conflict when there’s no time for humans to be in the loop, because it’s going to play out in a matter of seconds.” The U.S. military has detailed some of its plans for this new kind of war in a road map for unmanned systems, but its intentions regarding the weaponization of such systems remain vague.

Asked about autonomous weapons, Work insisted that the U.S. military “will not delegate lethal authority to a machine to make a decision.” But when pressed on the issue, he added that if confronted by a “competitor that is more willing to delegate authority to machines than we are...we’ll have to make decisions on how we can best compete.”

(In video released after the demonstration, the robot is shown riding an ATV at a speed only slightly faster than a child on a tricycle.) China’s growing robotic arsenal, meanwhile, includes numerous attack and reconnaissance drones.

The three countries’ approaches to robotic weapons, introducing increasing automation while emphasizing a continuing role for humans, suggest a major challenge for any ban: a prohibition on fully autonomous weapons would not necessarily apply to weapons that are nearly autonomous.

“But that seems like probably the last way that militaries want to employ autonomous weapons.” Much more likely, he adds, will be robotic weapons that target not people but military objects like radars, tanks, ships, submarines, or aircraft.

“Because humans are better at being flexible and adaptable to new situations that maybe we didn’t program for, especially in war when there’s an adversary trying to defeat your systems and trick them and hack them.” It’s not surprising, then, that DoDAAM, the South Korean maker of sentry robots, imposed restrictions on its robots’ lethal autonomy.

Arkin argues that autonomous weapons, just like human soldiers, should have to follow the rules of engagement as well as the laws of war, including international humanitarian laws that seek to protect civilians and limit the amount of force and types of weapons that are allowed.

Eventually, Arkin arrived at a set of algorithms, and using computer simulations and very simplified combat scenarios—an unmanned aircraft engaging a group of people in an open field, for example—he was able to test his methodology.
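Arkin’s published work centers on an “ethical governor,” a software component that suppresses lethal action unless every encoded constraint is satisfied. The sketch below illustrates that gating idea in Python; the constraint names, thresholds, and scenario values are hypothetical illustrations of the architecture, not Arkin’s actual algorithms.

```python
# Minimal sketch of an "ethical governor" style gate: lethal action is
# permitted only if every encoded rule-of-engagement and law-of-war
# constraint passes. All fields and thresholds here are assumptions
# for illustration, not a fielded design.
from dataclasses import dataclass

@dataclass
class EngagementContext:
    target_is_combatant: bool       # output of a (simulated) classifier
    civilians_in_blast_radius: int  # estimated collateral exposure
    inside_kill_box: bool           # target lies within the authorized zone
    military_necessity: float       # 0..1 score supplied by mission planning

def governor_permits(ctx: EngagementContext) -> bool:
    """Return True only if all hard constraints are satisfied."""
    constraints = [
        ctx.target_is_combatant,             # principle of distinction
        ctx.civilians_in_blast_radius == 0,  # strict proportionality proxy
        ctx.inside_kill_box,                 # geographic rules of engagement
        ctx.military_necessity >= 0.8,       # necessity threshold
    ]
    return all(constraints)

# A simplified simulated scenario, loosely mirroring the open-field test
# setups described above: one contact evaluated by an unmanned aircraft.
contact = EngagementContext(
    target_is_combatant=True,
    civilians_in_blast_radius=2,
    inside_kill_box=True,
    military_necessity=0.9,
)
print("engage" if governor_permits(contact) else "hold fire")  # -> hold fire
```

The design choice worth noting is that the governor is purely restrictive: it can only veto an engagement the rest of the system proposes, never initiate one, which is how such architectures are usually described.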

Christof Heyns, the U.N. special rapporteur for human rights, wrote an influential report noting that the world’s nations had a rare opportunity to discuss the risks of autonomous weapons before such weapons were fully developed.

Reflecting on those meetings, Heyns says that “if I look back, to some extent I’m encouraged, but if I look forward, then I think we’re going to have a problem unless we start acting much faster.” This coming December, the U.N.’s Convention on Certain Conventional Weapons will hold a five-year review conference, and the topic of lethal autonomous robots will be on the agenda.

Arms Control Today

October 2016, by Frank Sauer

Autonomous weapons systems have drawn widespread media attention, particularly since last year’s open letter signed by more than 3,000 artificial intelligence (AI) and robotics researchers warning against an impending “military AI arms race.”1 Since 2013, discussion of such weapons has been climbing the arms control agenda of the United Nations.

They are a topic at the Human Rights Council and the General Assembly First Committee on disarmament and international security, but the main venue of the debate is the Convention on Certain Conventional Weapons (CCW) in Geneva.2 So far, CCW countries have convened for three informal meetings of experts on the topic and in December will decide whether to continue and deepen their deliberations by establishing a group of governmental experts next year.

There is arguably a tacit understanding in the expert community and among diplomats in Geneva that the debate’s main focus is on future, mobile weapons platforms equipped with onboard sensors, computers, and decision-making algorithms with the capability to seek, identify, track, and attack targets autonomously.

They are supposed to allow for greater restraint and also better discrimination between civilians and combatants, resulting in an application of force in strict or stricter accordance with international humanitarian law.5

Problems With Autonomy

In light of these anticipated benefits, one might expect militaries to unequivocally welcome the introduction of autonomous weapons systems.

The potential for high-tempo fratricide, unfolding far faster than humans could intervene, gives militaries an incentive to retain humans in the chain of decision-making as a fail-safe mechanism.6 Above and beyond such tactical concerns, these systems threaten to introduce a destabilizing factor at the strategic level.

An uncontrolled escalation from crisis to war is entirely within the realm of possibility.7

Human decision-making in armed conflict requires complex assessments to ensure a discriminate and proportionate application of military force in accordance with international humanitarian law.

A closely related aspect is that it remains unclear who would be held legally accountable if civilians were unlawfully injured or killed by autonomous weapons systems, especially because targeting in modern militaries is an immensely complex, strategic, multilevel endeavor.

An artificially intelligent system tasked with autonomous targeting would thus not only need to replace various human specialists (creating what has become known as the “accountability gap,” since a machine cannot be court-martialed) but would essentially require a human abdication of political decision-making.9

Leaving military, legal, and political considerations aside brings a more fundamental problem into focus.

From an ethical point of view, it is argued that autonomous weapons systems violate fundamental human values.10 Delegating the decision to kill a human to an algorithm in a machine, which is not responsible for its actions in any meaningful ethical sense, can arguably be understood to be an infringement on basic human dignity, representing what in moral philosophy is known as a malum in se, a wrong in itself.

This peculiar consideration is reflected in the public’s deep concerns in the United States and internationally regarding autonomy in weapons systems.11 In sum, there are many reasons—military, legal, political, ethical—for engaging in preventive arms control measures regarding autonomous weapons systems.

CCW deliberations on cluster munitions failed in 2011 to produce an outcome, leaving the 2008 Convention on Cluster Munitions, created outside CCW and UN auspices, as the sole international instrument to specifically regulate these weapons. Yet so far, autonomous weapons systems have been the subject of exceptionally dynamic talks and have climbed the CCW agenda with unprecedented speed.

On the other hand, the CCW has a fearsome reputation as a place where good ideas go to die a slow death.

The civil society movement pushing for a legally binding prohibition on autonomous weapons systems within the CCW framework is organized and spearheaded by the Campaign to Stop Killer Robots, a coalition of more than 60 groups in 26 countries coordinated by Human Rights Watch.

Hence the argument that human control over life-and-death decisions must always be exercised in a significant or meaningful fashion, as more than just the mindless pushing of a button by a human in response to a machine-processed stream of information. According to current practice, a human weapons operator must have sufficient information about the target and sufficient control over the weapon, and must be able to assess its effects, in order to make decisions in accordance with international law.

The first would be a legally binding and preventive multilateral arms control agreement reached by consensus in the CCW and thus involving the major stakeholders; this is the outcome referred to here as “a ban.” Considering the growing number of states-parties calling for a ban, the large number of governments calling for meaningful human control and expressing considerable unease with the idea of autonomous weapons systems, and the fact that no government is openly promoting their development, this outcome seems possible.

Implementing autonomy, which mainly comes down to software, in systems drawn from a vibrant global ecosystem of unmanned vehicles in various shapes and sizes is a technical challenge, but doable for state and nonstate actors, particularly because so much of the hardware and software is dual use.

An unchecked autonomous weapons arms race and the diffusion of autonomous killing capabilities to extremist groups would clearly be detrimental to international peace, stability, and security.  This underlines the importance of the current opportunity for putting a comprehensive, verifiable ban in place.

Although this process holds important lessons, for instance regarding the valuable input that epistemic communities and civil society can provide, it also raises vexing questions, particularly whether and how arms control can find better ways to tackle issues from a qualitative rather than quantitative angle. The autonomous weapons systems example points to a future in which dual use reigns supreme and numbers matter less than capabilities, with the weapons systems to be regulated potentially being disposable, 3D-printed units whose intelligence is distributed in swarms.

States can use the upcoming CCW review conference in December to go above and beyond the recommendation from the 2016 meeting on lethal autonomous weapons systems and agree to establish an open-ended group of governmental experts with a strong mandate to prepare the basis for new international law, preferably via a ban.

The currently nascent social taboo against machines autonomously making kill decisions meets all the requirements for spawning a “humanitarian security regime.”15  Autonomous weapons systems would not be the first instance when an issue takes an indirect path through comparably softer social international norms and stigmatization to a codified arms control agreement.

  The so-called first offset was the use of nuclear deterrence against the large conventional forces of the Soviet Union starting in the 1950s, and the second offset in the 1970s and 1980s was the fielding of precision munitions and stealth technologies to counter the air and ground forces of adversaries.

These weapons can find a target all by themselves — and researchers are terrified

On July 27, over a thousand artificial intelligence researchers, including Google director of research Peter Norvig and Microsoft managing director Eric Horvitz, co-signed an open letter urging the United Nations (UN) to ban the development and use of autonomous weapons.

The letter, presented at the 2015 International Joint Conference on Artificial Intelligence in Buenos Aires, Argentina, now has 16,000 signatures, according to The Guardian.

But Peter Asaro, the co-founder of the International Committee for Robot Arms Control, told NBC News that South Korea received 'a lot of bad press about having autonomous killer robots on their border.'

The long range anti-ship missile, or LRASM, is currently being developed by Lockheed Martin and recently aced its third flight test.

Israel Aerospace Industries' (IAI) Harpy is a 'fire-and-forget' autonomous drone system mounted on a vehicle that can detect, attack, and destroy radar emitters, according to the Times of Israel.

The Harpy can 'loiter' in the air for a long period of time as it searches for enemy radar emitters before it fires 'with a very high hit accuracy,' according to IAI's website.

Jared Adams, the director of media relations at DARPA, told Tech Insider in an email that the DOD 'explicitly precludes the use of lethal autonomous systems,' as stated by a 2012 directive.

Pros and Cons of Autonomous Weapons Systems

Jeffrey S. Thurnher, U.S. Army, adds, “[lethal autonomous robots] have the unique potential to operate at a tempo faster than humans can possibly achieve and to lethally strike even when communications links have been severed.”3 In addition, proponents have highlighted the long-term savings that could be achieved by fielding an army of military robots.

Other proponents envision autonomous systems able to take out an entire fleet of aircraft, presumably one with human pilots.8 In 2012, a report by the Defense Science Board, in support of the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, identified “six key areas in which advances in autonomy would have significant benefit to [an] unmanned system: perception, planning, learning, human-robot interaction, natural language understanding, and multiagent coordination.”9 Perception, or perceptual processing, refers to sensors and sensing.

Planning refers to computing a sequence or partial ordering of actions intended to “[achieve] a desired state.”11 The process relies on effective processes and “algorithms needed to make decisions about action (provide autonomy) in situations in which humans are not in the environment (e.g., space, the ocean).”12 Learning, in turn, refers to how machines can collect and process large amounts of data into knowledge.

The report asserts that research has shown machines process data into knowledge more effectively than people do.13 It gives the example of machine learning for autonomous navigation in land vehicles and robots.14 Human-robot interaction refers to “how people work or play with robots.”15 Robots are quite different from other computers or tools because they are “physically situated agents.”

Instead, he suggests that because it sparks so much moral outrage among the populations from whom the United States most needs support, robot warfare has serious strategic disadvantages, and it fuels the cycle of perpetual warfare.23 While some support autonomous weapons systems with moral arguments, others base their opposition on moral grounds.

The letter warns, “Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is—practically if not legally—feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”24 The letter also notes that AI has the potential to benefit humanity, but that if a military AI arms race ensues, AI’s reputation could be tarnished, and a public backlash might curtail future benefits of AI.

The report recommended that member states declare and implement moratoria on the testing, production, transfer, and deployment of lethal autonomous robotics (LARs) until an internationally agreed upon framework for LARs has been established.26 That same year, a group of engineers, AI and robotics experts, and other scientists and researchers from thirty-seven countries issued the “Scientists’ Call to Ban Autonomous Lethal Robots.”

Opponents argue that autonomous weapons systems should be banned because they violate the Principle of Distinction, considered one of the most important rules of armed conflict: autonomous weapons systems will find it very hard to determine who is a civilian and who is a combatant, which is difficult even for humans.29 Allowing AI to make decisions about targeting will most likely result in civilian casualties and unacceptable collateral damage.

Any weapon or other means of war that makes it impossible to identify responsibility for the casualties it causes does not meet the requirements of jus in bello, and, therefore, should not be employed in war.30 This issue arises because AI-equipped machines make decisions on their own, so it is difficult to determine whether a flawed decision is due to flaws in the program or in the autonomous deliberations of the AI-equipped (so-called smart) machines.

The nature of this problem was highlighted when a driverless car violated the speed limit by moving too slowly on a highway, and it was unclear to whom the ticket should be issued.31 In situations where a human being makes the decision to use force against a target, there is a clear chain of accountability, stretching from whoever actually “pulled the trigger” up through the chain of command that authorized the strike.

Legal scholars Kenneth Anderson and Matthew Waxman, who advocate this approach, argue that regulation will have to emerge along with the technology because they believe that morality will coevolve with technological development.32 Thus, arguments about the irreplaceability of human conscience and moral judgment may have to be revisited.33 In addition, they suggest that as humans become more accustomed to machines performing functions with life-or-death implications or consequences (such as driving cars or performing surgeries), humans will most likely become more comfortable with AI technology’s incorporation into weaponry.

George Lucas Jr. argues that such systems can act in accordance with the rules programmed into them, but not in the sense that they can change or abort their mission if they have “moral objections.”36 Lucas thus holds that the primary concern of engineers and designers developing autonomous weapons systems should not be ethics but rather safety and reliability, which means taking due care to address the possible risks of malfunctions, mistakes, or misuse that autonomous weapons systems will present.

Thus, Schmitt grants that some autonomous weapons systems might contravene international law, but “it is categorically not the case that all such systems will do so.”38 Even an autonomous system that is incapable of distinguishing between civilians and combatants is not necessarily unlawful per se, as autonomous weapons systems could be used in situations where no civilians are present, such as against tank formations in the desert or against warships.

While it is possible to determine what is a chemical weapon and what is not (despite some disagreements at the margins, for example, about law enforcement use of irritant chemicals), and to clearly define nuclear arms or land mines, autonomous weapons systems come with very different levels of autonomy.40 A ban on all autonomous weapons would require forgoing many modern weapons already mass-produced and deployed.

One definition, used by the Defense Science Board, views autonomy merely as high-end automation: “a capability (or a set of capabilities) that enables a particular action of a system to be automatic or, within programmed boundaries, ‘self-governing.’”41 According to this definition, already existing capabilities, such as autopilot used in aircraft, could qualify as autonomous.

“Human-in-the-loop weapons [are] robots that can select targets and deliver force only with a human command.” For example, Israel’s Iron Dome system detects incoming rockets, predicts their trajectory, and then sends this information to a human soldier who decides whether to launch an interceptor rocket.46 “Human-on-the-loop weapons [are] robots that can select targets and deliver force under the oversight of a human operator who can override the robots’ actions.”

For example, the MK 15 Phalanx Close-In Weapons System has been used on Navy ships since the 1980s, and it is capable of detecting, evaluating, tracking, engaging, and using force against antiship missiles and high-speed aircraft threats without any human commands.49 The Center for a New American Security published a white paper estimating that, as of 2015, at least thirty countries had deployed or were developing human-supervised systems.50 “Human-out-of-the-loop weapons [are] robots capable of selecting targets and delivering force without any human input or interaction.”51 This kind of autonomous weapons system is the source of much of the concern about “killing machines.”
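The three categories above amount to three different gates on the same engagement decision. The following minimal sketch, in Python, makes that contrast concrete; the enum names and gating logic are illustrative assumptions drawn from the definitions quoted above, not any fielded design.

```python
# Sketch of the three human-control modes: in-the-loop (machine acts only on
# a human command), on-the-loop (machine acts unless a human overrides), and
# out-of-the-loop (machine acts with no human input). Illustrative only.
from enum import Enum, auto

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = auto()      # Iron Dome-style: a human decides
    HUMAN_ON_THE_LOOP = auto()      # Phalanx-style: a human supervises
    HUMAN_OUT_OF_THE_LOOP = auto()  # fully autonomous: no human gate

def may_engage(mode: ControlMode, human_approved: bool, human_vetoed: bool) -> bool:
    """Apply the human-control gate for a single detected threat."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        return human_approved   # positive human command required
    if mode is ControlMode.HUMAN_ON_THE_LOOP:
        return not human_vetoed  # engages by default unless overridden
    return True                  # out of the loop: no human input at all

# The same detected threat under each mode, before any human has responded:
for mode in ControlMode:
    print(mode.name, "->", may_engage(mode, human_approved=False, human_vetoed=False))
# HUMAN_IN_THE_LOOP -> False
# HUMAN_ON_THE_LOOP -> True
# HUMAN_OUT_OF_THE_LOOP -> True
```

Note that the on-the-loop gate defaults to engagement when no veto arrives in time, which is why critics argue that at machine speeds the distinction between supervised and fully autonomous operation can collapse in practice.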

Adams warned that, in the future, humans would be reduced to making only initial policy decisions about war, and they would have mere symbolic authority over automated systems.52 In the Human Rights Watch report, Docherty warns, “By eliminating human involvement in the decision to use lethal force in armed conflict, fully autonomous weapons would undermine other, nonlegal protections for civilians.”53 For example, a repressive dictator could deploy emotionless robots to kill and instill fear among a population without having to worry about soldiers who might empathize with their victims (who might be neighbors, acquaintances, or even family members) and then turn against the dictator.

(Fully autonomous weapons would offer a version of the commitment tactic discussed by Thomas Schelling in Arms and Influence, in which one party limits its own options by obligating itself to retaliate, thus making its deterrence more credible.)55 We suggest that nations might be willing to forgo this advantage of fully autonomous arms in order to gain the assurance that, once hostilities ceased, they could avoid becoming entangled in new rounds of fighting because some bombers were still running loose and attacking the other side, or because some bombers might malfunction and attack civilian centers.

MICRO DRONES KILLER ARMS ROBOTS - AUTONOMOUS ARTIFICIAL INTELLIGENCE - WARNING !!

Killer drone arms and artificial intelligence: an increasingly real fiction. Social and smartphone facial recognition, smart swarms.

Automated Machine Gun Targets People from 1.5 miles

Feb 14 - South Korea has developed an automated, turret-based weapon platform capable of locking onto a human target three kilometers away.

5 Robots that Will Take Over the World

Westworld's robots aren't the only ones learning to be more "alive." Just in time for the Westworld season finale, Dark5 examines 5 scary new skills being learned by real robots.

Phalanx CIWS Close-in Weapon System In Action - US Navy's Deadly Autocannon

Footage of the Phalanx Close-In Weapons System (CIWS) in various target-practice exercises. The Phalanx was developed as a last line of defense against antiship missiles.

Future weapon used by intelligence (Mini Drone)

An AI-based drone weapon that launches when it identifies a target by facial recognition.

Real-Life Sentry Gun / Aim Bot / Gun Turret / Turret Sentry (video 8 of 18)

Automated, target-tracking, hands-free sentry gun!

Prototype Quadrotor with Machine Gun!


Rules of war (in a nutshell)

Yes, even wars have laws.


Armies of the Future: AI Bots or “Enhanced” Humans?

In this week's edition of Mind Hack, Jeff DeRiso discusses an effort by the Army to train its autonomous combat robots to better recognize targets.