AI News: The Case for Ethical Autonomy in Unmanned Systems
- On 23 April 2019
Although some early efforts have been undertaken in this direction, most notably, in attempting to prohibit the deployment of fully autonomous weapons systems, far more work is needed to gauge the impacts of these technologies and to forge new or revised control mechanisms as deemed appropriate.
“We are in the midst of an ever accelerating and expanding global revolution in [AI] and machine learning, with enormous implications for future economic and military competitiveness,” declared former U.S. Deputy Secretary of Defense Robert Work, a prominent advocate for Pentagon utilization of the new technologies.1 The Department of Defense is spending billions of dollars on AI, robotics, and other cutting-edge technologies, contending that the United States must maintain leadership in the development and utilization of those technologies lest its rivals use them to secure a future military advantage.
These include, for example, unmanned aerial vehicles (UAVs) and unmanned surface and subsurface naval vessels capable of being assembled in swarms, or “wolfpacks,” to locate enemy assets such as tanks, missile launchers, submarines and, if communications are lost with their human operators, decide to strike them on their own.
Even more worrisome, some of the weapons now in development, such as unmanned anti-submarine wolfpacks and the Tactical Boost Glide (TBG) system, could theoretically endanger the current equilibrium in nuclear relations among the major powers, which rests on the threat of assured retaliation by invulnerable second-strike forces, by opening or seeming to open various first-strike options.
In the future, AI-invested machines may be empowered to determine if a nuclear attack is underway and, if so, initiate a retaliatory strike.4 In this sense, AI is an “omni-use” technology, with multiple implications for war-fighting and arms control.5 Many analysts believe that AI will revolutionize warfare by allowing military commanders to bolster or, in some cases, replace their personnel with a wide variety of “smart” machines.
This could provide an advantage on the battlefield, where rapid and informed action could prove the key to success, but also raises numerous concerns, especially regarding nuclear “crisis stability.” Analysts worry that machines will accelerate the pace of fighting beyond human comprehension and possibly take actions that result in the unintended escalation of hostilities, even leading to use of nuclear weapons.
“Even if everything functioned properly, policymakers could nevertheless effectively lose the ability to control escalation as the speed of action on the battlefield begins to eclipse their speed of decision-making,” writes Paul Scharre, who is director of the technology and national security program at the Center for a New American Security.6 As AI-equipped machines assume an ever-growing number and range of military functions, policymakers will have to determine what safeguards are needed to prevent unintended, possibly catastrophic consequences of the sort suggested by Scharre and many others.
Many other such munitions are now in development, including undersea drones intended for anti-submarine warfare and entire fleets of UAVs designed for use in “swarms,” or flocks of armed drones that twist and turn above the battlefield in coordinated maneuvers that are difficult to follow.8 The deployment of fully autonomous weapons systems poses numerous challenges to international security and arms control, beginning with a potentially insuperable threat to the laws of war and international humanitarian law.
Opponents of lethal autonomous weapons systems argue that only humans possess the judgment needed to make such fine distinctions in the heat of battle, that machines will never be intelligent enough to do so, and that such systems should therefore be banned from deployment.9 At this point, some 25 countries have endorsed steps to enact such a ban in the form of a protocol to the Convention on Certain Conventional Weapons (CCW).
Several other nations, including the United States and Russia, oppose a ban on lethal autonomous weapons systems, saying they can be made compliant with international humanitarian law.10 Looking further into the future, autonomous weapons systems could pose a potential threat to nuclear stability by investing their owners with a capacity to detect, track, and destroy enemy submarines and mobile missile launchers.
Today’s stability, which can be seen as an uneasy nuclear balance of terror, rests on the belief that each major power possesses at least some devastating second-strike, or retaliatory, capability, whether mobile launchers for intercontinental ballistic missiles (ICBMs), submarine-launched ballistic missiles (SLBMs), or both, that are immune to real-time detection and safe from a first strike.
Such an environment would erode the underlying logic of today’s strategic nuclear arms control measures, that is, the preservation of deterrence and stability with ever-diminishing numbers of warheads and launchers, and would require new or revised approaches to war prevention and disarmament.11

Hypersonic Weapons

Proposed hypersonic weapons, which can travel at more than five times the speed of sound, or more than 5,000 kilometers per hour, generally fall into two categories: hypersonic glide vehicles and hypersonic cruise missiles, either of which could be armed with nuclear or conventional warheads.
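As a quick sanity check on the speed figure above, the Mach 5 threshold can be converted to kilometers per hour. This is a minimal sketch: the 343 m/s sea-level speed of sound is an assumed reference value, and the speed of sound falls with altitude, which is why hypersonic speeds are usually quoted as "more than 5,000 km/h" rather than a single number.

```python
# Convert the Mach 5 "hypersonic" threshold to km/h.
# Assumes a sea-level speed of sound of roughly 343 m/s;
# at cruise altitudes the speed of sound (and thus Mach 5) is lower.
SPEED_OF_SOUND_KMH = 343 * 3.6  # ~1,234.8 km/h at sea level

mach5_kmh = 5 * SPEED_OF_SOUND_KMH
print(round(mach5_kmh))  # → 6174, comfortably above the 5,000 km/h figure
```

The spread between roughly 5,000 and 6,200 km/h reflects nothing more than where in the atmosphere the Mach number is measured.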
The United States is pursuing full-scale development of a hypersonic air-launched cruise missile, tentatively called the Hypersonic Conventional Strike Weapon.12 Russia, for its part, is developing a hypersonic glide vehicle it calls the Avangard, which it claims will be ready for deployment by the end of 2019, and China in August announced a successful test of the Starry Sky-2 hypersonic glide vehicle, described as capable of carrying a nuclear weapon.13 Whether armed with conventional or nuclear warheads, hypersonic weapons pose a variety of challenges to international stability and arms control.
Anti-missile systems that may work against existing threats might not be able to track and engage hypersonic vehicles, potentially allowing an aggressor to contemplate first-strike disarming attacks on nuclear or conventional forces while impelling vulnerable defenders to adopt a launch-on-warning policy.14 Some analysts warn that the mere acquisition of such weapons could “increase the expectation of a disarming attack.” Such expectations “encourage the threatened nations to take such actions as devolution of command-and-control of strategic forces, wider dispersion of such forces, a launch-on-warning posture, or a policy of preemption during a crisis.” In short, “hypersonic threats encourage hair-trigger tactics that would increase crisis instability.”15 The development of hypersonic weaponry poses a significant threat to the core principle of assured retaliation, on which today’s nuclear strategies and arms control measures largely rest.
Moreover, in the event of a crisis or approaching hostilities, cyberattacks could be launched on an adversary’s early-warning, communications, and command and control systems, significantly impairing its response capabilities.17 For all these reasons, cybersecurity, or the protection of cyberspace from malicious attack, has become a major national security priority.18 Cybersecurity, as perceived by U.S. leaders, can take two forms: defensive action, aimed at protecting one’s own information infrastructure against attack, and offensive action, aimed at penetrating or disrupting an adversary’s information systems.
Although battles in this domain are said to fall below the threshold of armed combat (so long, of course, as no one is killed as a result), it is not difficult to conceive of skirmishes in cyberspace that erupt into violent conflict, for example if cyberattacks result in the collapse of critical infrastructure, such as the electric grid or the banking system.
A group of governmental experts was convened by the UN General Assembly to investigate the adoption of norms and rules for international behavior in cyberspace, but failed to reach agreement on measures that would satisfy all major powers.20 It is also essential to consider how combat in cyberspace might spill over into the physical world, triggering armed combat and possibly hastening the pace of escalation.
The United States, it affirmed, would only consider using nuclear weapons in “extreme circumstances,” which could include attacks “on U.S. or allied nuclear forces, their command and control, or warning and attack assessment capabilities.”21 The policy of other states in this regard is not so clearly stated, but similar protocols undoubtedly exist.
Some analysts have suggested that the Missile Technology Control Regime could be used as a model for a mechanism intended to prevent the proliferation of hypersonic weapons technology.23 Finally, as the above discussion suggests, it will be necessary to devise entirely new approaches to arms control that are designed to overcome dangers of an unprecedented sort.
- On 14 April 2021
2018 Isaac Asimov Memorial Debate: Artificial Intelligence
Isaac Asimov's famous Three Laws of Robotics might be seen as early safeguards for our reliance on artificial intelligence, but as Alexa guides our homes and ...
Amir Husain: "The Sentient Machine: The Coming Age of Artificial Intelligence" | Talks at Google
The Sentient Machine addresses broad existential questions surrounding the coming of AI: Why are we valuable? What can we create in this world? How are we ...
Allen School Distinguished Lecture: Manuela Veloso (J.P. Morgan/CMU)
Towards a Lasting Human-AI Interaction Abstract: Artificial intelligence, including extensive data processing, decision making and execution, and learning from ...
Humans Need Not Apply
MIT News at Noon with Missy Cummings
Missy Cummings, associate professor of aeronautics and astronautics and engineering systems at MIT, delivers her "News at Noon" talk at the MIT Museum.
Autonomous Intersection Management: Traffic Control for the Future
Autonomous Intersection Management (AIM) is a new intersection control protocol that exploits autonomous vehicles' extraordinary capabilities of control, ...
The Hugh Thompson Show: Artificial Intelligence
Hugh Thompson, RSA Conference Program Chair; Dr. Dawn Song, Professor of Computer Science, UC Berkeley, MacArthur Fellow, and serial entrepreneur; Dr. ...
Annie Jacobsen: "The Pentagon's Brain: An Uncensored History of DARPA" | Talks at Google
Journalist Annie Jacobsen visited Google's office in Cambridge, MA to discuss her book "The Pentagon's Brain: An Uncensored History of DARPA, America's ...