Killer Robots Using AI Could Transform Warfare. And China Might Hate That.
“The juxtaposition of these announcements illustrates that China’s apparent diplomatic commitment to limit the use of fully autonomous lethal weapons systems is unlikely to stop Beijing from building its own,” writes Kania, a fellow at the Center for a New American Security, in a post on the Lawfare blog.
“By such a standard, a weapons system that operates with a high degree of autonomy but involves even limited human involvement, with the capability for distinction between legitimate and illegitimate targets, would not technically be a LAWS, nor would a system with a failsafe to allow for shutdown in case of malfunction,” Kania writes.
Interestingly, this definition is much narrower than the Chinese military's own published definition of armed autonomous robots, suggesting “there may be daylight between China’s diplomatic efforts on autonomous weapons and the military’s approach,” writes Kania.
Kania suggests that China may have a cynical motive in developing killer robots while publicly indicating support for an international ban: “It is worth considering whether China’s objective may be to exert pressure on the U.S. and other militaries whose democratic societies are more sensitive to public opinion on these issues.” Yet the most interesting contradiction between China and killer robots is existential: Can a centralized political and military system accommodate robots that think for themselves?
China’s Strategic Ambiguity and Shifting Approach to Lethal Autonomous Weapons Systems
On April 13, China’s delegation to the United Nations Group of Governmental Experts on lethal autonomous weapons systems announced the “desire to negotiate and conclude” a new protocol for the Convention on Certain Conventional Weapons “to ban the use of fully autonomous lethal weapons systems.” According to the aptly named Campaign to Stop Killer Robots, the delegation “stressed that [the ban] is limited to use only.” The same day, the Chinese air force released details on an upcoming challenge intended to evaluate advances in fully autonomous swarms of drones, which will also explore new concepts for future intelligent-swarm combat.
China’s involvement is consistent with the country’s stated commitment under its 2017 artificial intelligence development plan, which calls for China to “strengthen the study of major international common problems” and “deepen international cooperation on AI laws and regulations.” In historical perspective, China’s integration into international security institutions shows at least mixed success, as post-Mao China has proven willing in some cases to undertake “self-constraining commitments to arms control and disarmament treaties,” as Iain Johnston’s research has demonstrated.
The military’s notion of legal warfare focuses on what it calls seizing “legal principle superiority” or delegitimizing an adversary with “restriction through law.” In line with this approach, China might be strategically ambiguous about the international legal considerations to allow itself greater flexibility to develop lethal autonomous weapons capabilities while maintaining rhetorical commitment to the position of those seeking a ban—as it does in its latest position paper.
(The paper does articulate concern for the capability of LAWS in “effectively distinguishing between soldiers and civilians,” calling on “all countries to exercise precaution, and to refrain, in particular, from any indiscriminate use against civilians.”) It is worth considering whether China’s objective may be to exert pressure on the U.S. and other militaries whose democratic societies are more sensitive to public opinion on these issues.
A chatbot in China was taken offline after its answer to the question “Do you love the Communist Party?” was simply “No.” China’s position paper highlights human-machine interaction as “conducive to the prevention of indiscriminate killing and maiming … caused by breakaway from human control.” The military appears to have fewer viscerally negative reactions to the notion of having a human “on” rather than “in” the loop (i.e., in a role that is not directly in control but rather supervisory), but assured controllability is likely to remain a priority.
Unsurprisingly, China’s position paper emphasizes the importance of artificial intelligence to development and argues that “there should not be any pre-set premises or prejudged outcome which may impede the development of AI technology.” At the same time, the boundaries between military and civilian applications of AI technology are blurred—especially by China’s national strategy of “civil-military fusion.” China’s emergence as an artificial intelligence powerhouse may enable its diplomatic leadership on these issues, for better and worse, while enhancing its future military power.
Artificial intelligence arms race
An artificial intelligence arms race is a competition between two or more states to have their military forces equipped with the best artificial intelligence (AI).
In May 2017, the CEO of Russia's Kronstadt Group, a defense contractor, stated that 'there already exist completely autonomous AI operation systems that provide the means for UAV clusters, when they fulfill missions autonomously, sharing tasks between them, and interact', and that it is inevitable that 'swarms of drones' will one day fly over combat zones.
Russia has been testing several autonomous and semi-autonomous combat systems, such as Kalashnikov's 'neural net' combat module, with a machine gun, a camera, and an AI that its makers claim can make its own targeting judgements without human intervention.
As such, the [Chinese military] intends to achieve an advantage through changing paradigms in warfare with military innovation, thus seizing the “commanding heights” … of future military competition.
China published a position paper in 2016 questioning the adequacy of existing international law to address the eventuality of fully autonomous weapons, becoming the first permanent member of the U.N. Security Council to broach the issue.
According to data science and analytics firm Govini, the U.S. Department of Defense increased investment in artificial intelligence, big data and cloud computing from $5.6 billion in 2011 to $7.4 billion in 2016.
The U.S. has many military AI combat programs, such as the Sea Hunter autonomous warship, which is designed to operate for extended periods at sea without a single crew member, and to even guide itself in and out of port.
In 2015, the UK government opposed a ban on lethal autonomous weapons, stating that 'international humanitarian law already provides sufficient regulation for this area', but that all weapons employed by UK armed forces would be 'under human oversight and control'.
Israel's Harpy anti-radar 'fire and forget' drone is designed to be launched by ground troops, and autonomously fly over an area to find and destroy radar that fits pre-determined criteria.
“Our technology therefore plugs the gaps in human capability,” and they want to “get to a place where our software can discern whether a target is friend, foe, civilian or military.”
As early as 2007, scholars such as AI professor Noel Sharkey have warned of 'an emerging arms race among the hi-tech nations to develop autonomous submarines, fighter jets, battleships and tanks that can find their own targets and apply violent force without the involvement of meaningful human decisions'.
The report further argues that 'Preventing expanded military use of AI is likely impossible' and that 'the more modest goal of safe and effective technology management must be pursued', such as banning the attaching of an AI dead man's switch to a nuclear arsenal.
A 2015 open letter calling for the ban of lethal automated weapons systems has been signed by tens of thousands of citizens, including scholars such as physicist Stephen Hawking, Tesla magnate Elon Musk, and Apple's Steve Wozniak.
These Chinese military innovations threaten U.S. superiority, experts say
BEIJING — The Chinese New Year began with the traditional lighting of firecrackers on Friday, but the country's military has been working on incendiaries on an entirely different scale.
Still, it is clear that significant milestones have been reached by a country that, alongside Russia, is categorized in Trump's national security strategy as a 'revisionist power' — a nation seeking to redefine the world along values contrary to America's.
Instead of explosives, railguns use powerful electromagnets to fire projectiles as far as 100 nautical miles (115 miles) at seven times the speed of sound.
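The figures quoted above can be sanity-checked with standard conversion factors. The sketch below assumes 1 nautical mile = 1.15078 statute miles and a sea-level speed of sound of roughly 343 m/s; the article does not specify the conditions behind its "seven times the speed of sound" figure, so these constants are assumptions for illustration only.

```python
# Rough check of the railgun figures quoted in the article.
# Conversion factors are standard values; the speed of sound is
# taken at sea level (~343 m/s), an assumption not stated in the text.

NMI_TO_MILES = 1.15078        # statute miles per nautical mile
SPEED_OF_SOUND_MS = 343.0     # m/s, approximate sea-level value

range_nmi = 100
range_miles = range_nmi * NMI_TO_MILES       # ~115 statute miles, as quoted
muzzle_speed_ms = 7 * SPEED_OF_SOUND_MS      # Mach 7, roughly 2,400 m/s

print(f"{range_miles:.0f} miles, {muzzle_speed_ms:.0f} m/s")
```

The 100-nautical-mile range does round to the 115 statute miles given in the article, and Mach 7 works out to roughly 2,400 meters per second under the sea-level assumption.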
China is now working on a third carrier, an 80,000-ton vessel dubbed Type 002, that will be able to host more than 40 aircraft and is expected to feature an advanced catapult that can launch heavier jets more quickly.
Some local experts predict that China's strategy of regional strength means it will eventually need four to five carrier battle groups, fewer than the 10 to 11 groups required by the U.S. global strategy.
Boosting the Chinese Air Force further was the recent successful flight of the world’s largest amphibious aircraft, the AG600 Kunlong, which was designed for maritime rescue but, with a range of 2,800 miles, can play a potentially important role in the South China Sea.
This medium-range weapon differs from a regular ballistic missile by gliding back to Earth on a slower, flatter trajectory that evades the gaze of radar-enabled U.S. missile defenses.
Last year, China also brought into service its latest generation of intercontinental ballistic missile, the DF-41, which can carry 10 maneuverable warheads and has a range of 7,500 to 9,300 miles.
In the same month, swarm intelligence — the coordinated deployment of autonomous machines — was demonstrated when a state-owned company successfully launched 119 drones that performed formations in the sky.
For the Chinese People's Liberation Army, said Kania of the Center for a New American Security, effective military applications of artificial intelligence will include cyber and electronic warfare as well as 'swarms of drones that might be used to target high-value U.S. weapons platforms, such as aircraft carriers.'
'Slaughterbots': U.S., Russia lead fight to block 'killer robots' ban
“The issues presented by autonomy in weapons systems are complex, and further substantive dialogue is required to develop greater shared understanding of the nature and applications of the technical features and functions that are incorporated into weapons systems, and the ways in which to ensure that appropriate human judgment is exercised over the use of these weapons systems.”
As the battle lines are drawn in Geneva, a wild card in the equation is China, which has said it supports a ban on the use of killer robots but reportedly continues to conduct research in the field.
Some scholars argue that China is taking a deliberately ambivalent position in order to cast itself as a supporter of human rights while behind the scenes is working to stay current in the robot arms race.
“China might be strategically ambiguous about the international legal considerations to allow itself greater flexibility to develop lethal autonomous weapons capabilities while maintaining rhetorical commitment to the position of those seeking a ban.”
Blocking the ban
What is becoming increasingly clear is that most advanced, technologically proficient nations want autonomous weapons research to move ahead and argue that an all-out ban would be a mistake.
talks aimed at finding “concrete options for recommendations on how to effectively address the challenges arising from lethal autonomous weapons systems, while neither hampering scientific progress nor the consideration of the beneficial aspects of emerging technologies.”
Russian officials, echoing their counterparts from a host of other nations, said recently that fully automated weapons that can act without human control simply don’t exist right now and that taking action to stop them is premature.
“Momentum is starting to build rapidly for states to start negotiating a new ban treaty and determine what is necessary to retain human control over weapons systems and the use of force.”
Activists also warn that if technological trends continue, “humans may start to fade out of the decision-making loop for certain military actions, perhaps retaining only a limited oversight role or simply setting broad mission parameters.”