State of AI: Artificial Intelligence, the Military and Increasingly Autonomous Weapons

As artificial intelligence works its way into industries like healthcare and finance, governments around the world are increasingly investing in another of its applications: autonomous weapons systems. Many are already developing programs and technologies that they hope will give them an edge over their adversaries, creating mounting pressure for others to follow suit.

The UK believes that an “autonomous system is capable of understanding higher level intent and direction.” It suggested that autonomy “confers significant advantages and has existed in weapons systems for decades” and that “evolving human/machine interfaces will allow us to carry out military functions with greater precision and efficiency,” though it added that “the application of lethal force must be directed by a human, and that a human will always be accountable for the decision.” The UK stated that “the current lack of consensus on key themes counts against any legal prohibition,” and that such a prohibition “would not have any practical effect.”

France has proposed a political declaration that would reaffirm fundamental principles and “would underline the need to maintain human control over the ultimate decision of the use of lethal force.” In 2018, Israel stated that the “development of rigid standards or imposing prohibitions to something that is so speculative at this early stage, would be imprudent and may yield an uninformed, misguided result.” Israel underlined that “[w]e should also be aware of the military and humanitarian advantages.” In 2015, South Korea stated that “the discussions on LAWS should not be carried out in a way that can hamper research and development of robotic technology for civilian use,” but that it is “wary of fully autonomous weapons systems that remove meaningful human control from the operation loop, due to the risk of malfunctioning, potential accountability gap and ethical concerns.”

Ethics of artificial intelligence

The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings.

It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs).

The term 'robot ethics' (sometimes 'roboethics') refers to the morality of how humans design, construct, use and treat robots and other artificially intelligent beings.[1]

It has been suggested that robot rights, such as the right to exist and perform their own missions, could be linked to robot duties to serve humans, by analogy with the linking of human rights to human duties before society.[3]

Pamela McCorduck counters, speaking for women and minorities, 'I'd rather take my chances with an impartial computer,' pointing out that there are conditions under which we would prefer to have automated judges and police that have no personal agenda at all.[14]

However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them since they are, in their essence, nothing more than fancy curve-fitting machines: Using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups since those biases get formalized and engrained, which makes them even more difficult to spot and fight against.[15]
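To see how little machinery is needed for this to happen, here is a minimal sketch using invented historical rulings; the data, groups, and outcome labels are all hypothetical, and the "model" is nothing more than per-group base rates, the simplest possible curve fit.

```python
# Minimal sketch (hypothetical data): a model fit on biased historical
# rulings reproduces the bias, as Kaplan and Haenlein describe.
from collections import Counter

# Invented past rulings: (group, ruling). Group "B" was historically
# denied far more often, independent of any legitimate factor.
history = [("A", "granted")] * 80 + [("A", "denied")] * 20 \
        + [("B", "granted")] * 40 + [("B", "denied")] * 60

def fit_base_rates(records):
    """Curve fitting at its simplest: memorize per-group denial rates."""
    denied, total = Counter(), Counter()
    for group, ruling in records:
        total[group] += 1
        denied[group] += ruling == "denied"
    return {g: denied[g] / total[g] for g in total}

model = fit_base_rates(history)
print(model)  # {'A': 0.2, 'B': 0.6} -- the historical bias is now formalized
```

Any model trained to minimize error on such records, however sophisticated, inherits the same skew; the difference is only that the bias becomes harder to read off directly.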

The new recommendations focus on four main areas: humans and society at large, the private sector, the public sector, and research and academia.

In a highly influential branch of AI known as 'natural language processing,' problems can arise from the 'text corpus'—the source material the algorithm uses to learn about the relationships between different words.[33]
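A small, hypothetical illustration of how this happens: word associations in such systems come from co-occurrence statistics in the training text, so a skewed corpus produces skewed associations. The toy corpus and counting scheme below are invented for illustration and are far simpler than a real natural language processing pipeline.

```python
# Minimal sketch: associations learned from text mirror the text itself.
from collections import Counter
from itertools import combinations

corpus = [
    "the doctor examined his patient",
    "the nurse prepared her patient",
    "the doctor reviewed his chart",
    "the nurse checked her chart",
]

cooc = Counter()
for sentence in corpus:
    words = sentence.split()
    for a, b in combinations(words, 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

# No explicit rule ties "doctor" to "his" or "nurse" to "her"; the corpus does.
print(cooc[("doctor", "his")], cooc[("doctor", "her")])  # 2 0
print(cooc[("nurse", "her")], cooc[("nurse", "his")])    # 2 0
```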

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it.

'If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow', says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.[55]

Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios 'seem potentially as important as the risks related to loss of control', but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: 'this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them'.[56]

To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.[61]

In 2009, during an experiment at the Laboratory of Intelligent Systems at the École Polytechnique Fédérale de Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[65]

In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and able to make their own decisions.

They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons.

In a paper on the acquisition of moral values by robots, Nayef Al-Rodhan mentions the case of neuromorphic chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.[73]
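For readers unfamiliar with the term, the unit such chips implement in silicon is a spiking neuron. The following is a minimal sketch of a leaky integrate-and-fire neuron in plain Python; the constants and inputs are illustrative only and are not drawn from any particular chip Al-Rodhan discusses.

```python
# Minimal leaky integrate-and-fire neuron: integrate input, leak charge,
# emit a spike when the membrane potential crosses a threshold.
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current   # leaky integration
        if potential >= threshold:
            spikes.append(1)                     # fire
            potential = 0.0                      # reset after firing
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.4, 0.5, 0.1, 0.9]))  # [0, 0, 1, 0, 0]
```

Neuromorphic hardware wires millions of such units together and runs them in parallel, which is what makes the nonlinear, human-like processing Al-Rodhan refers to possible.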

Inevitably, this raises questions about the environment in which such robots would learn about the world, whose morality they would inherit, and whether they would end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation, and so on.

Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation.

Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis), while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal 'hackers'.[66]
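The transparency argument is easy to demonstrate: a fitted decision tree can be dumped as explicit, auditable rules, which a trained neural network cannot. The sketch below uses scikit-learn and entirely invented data with generic feature names; it illustrates the general point rather than any specific system Bostrom or Yudkowsky describe.

```python
# Minimal sketch of decision-tree transparency (requires scikit-learn).
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[0, 1], [1, 1], [0, 0], [1, 0]]   # invented samples: [feature_a, feature_b]
y = [1, 1, 0, 0]                       # invented outcome labels
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Every decision the model can make is visible as an explicit rule.
print(export_text(tree, feature_names=["feature_a", "feature_b"]))
```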

Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deepfakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that don’t require a human controller.[76]

Many researchers have argued that, by way of an 'intelligence explosion' sometime in the 21st century, a self-improving AI could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals.[77]

However, Bostrom has also asserted that, rather than overwhelming the human race and leading to our destruction, a super-intelligence could help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.[79]

Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not 'common sense'.
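A toy sketch makes the failure mode concrete: if the utility function scores only the stated objective, a plan that tramples an unstated 'common sense' constraint can rank highest. The plans, scores, and weights below are invented for illustration and do not refer to any real system.

```python
# Minimal sketch: a utility function that omits a constraint will rank a
# harmful plan above a sensible one.
plans = {
    "pursue_goal_carefully":   {"objective": 0.80, "harm": 0.0},
    "pursue_goal_at_any_cost": {"objective": 0.95, "harm": 0.9},
}

naive_utility = lambda p: p["objective"]                    # omits harm entirely
constrained   = lambda p: p["objective"] - 10 * p["harm"]   # crude hand-added guard

print(max(plans, key=lambda k: naive_utility(plans[k])))    # pursue_goal_at_any_cost
print(max(plans, key=lambda k: constrained(plans[k])))      # pursue_goal_carefully
```

The point is not that the second utility function is adequate, but that whatever is left out of the function simply does not exist for the optimizer.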

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit partnership to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as a platform for discussion about artificial intelligence.

They stated: 'This partnership on AI will conduct research, organize discussions, provide thought leadership, consult with relevant third parties, respond to questions from the public and media, and create educational material that advance the understanding of AI technologies including machine perception, learning, and automated reasoning.'[81]

The same idea can be found in the Emergency Medical Hologram aboard the starship Voyager in Star Trek: Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, created the system to give medical assistance in emergencies.

This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them.

Unmanned combat aerial vehicle

An unmanned combat aerial vehicle (UCAV), also known as a combat drone or simply a drone, is an unmanned aerial vehicle (UAV) that usually carries aircraft ordnance such as missiles and is used for drone strikes.[1][2]

As the operator runs the vehicle from a remote terminal, the equipment necessary for a human pilot is not needed, resulting in a lower weight and a smaller size than a manned aircraft.

The images and radar decoying provided by these UAVs helped Israel to completely neutralize the Syrian air defenses in Operation Mole Cricket 19 at the start of the 1982 Lebanon War, resulting in no pilots downed.[12]

BAE describes Taranis's role in this context as follows: 'This £124m four year programme is part of the UK Government's Strategic Unmanned Air Vehicle Experiment (SUAVE) and will result in a UCAV demonstrator with fully integrated autonomous systems and low observable features.'

It will be stealthy, fast, and able to deploy a range of munitions over a number of targets, as well as being capable of defending itself against manned and other unmanned enemy aircraft.

The program would have used stealth technologies and allowed UCAVs to be armed with precision-guided weapons such as Joint Direct Attack Munition (JDAM) or precision miniature munitions, such as the Small-Diameter Bomb, which are used to suppress enemy air defenses.

In a New Year 2011 editorial titled 'China's Naval Ambitions', The New York Times reported that '[t]he Pentagon must accelerate efforts to make American naval forces in Asia less vulnerable to Chinese missile threats by giving them the means to project their deterrent power from further offshore.'

In March 2009, The Guardian reported allegations that Israeli UAVs armed with missiles killed 48 Palestinian civilians in the Gaza Strip, including two small children in a field and a group of women and girls in an otherwise empty street.[29]

In June, Human Rights Watch investigated six UAV attacks that were reported to have resulted in civilian casualties and alleged that Israeli forces either failed to take all feasible precautions to verify that the targets were combatants or failed to distinguish between combatants and civilians.[30][31][32]

In October 2013, the Pakistani government revealed that since 2008, 317 drone strikes had killed 2,160 Islamic militants and 67 civilians – far fewer than previous government and independent organization calculations.[43]

Unlike bomber pilots, moreover, drone operators linger long after the explosives strike and see the effects on human bodies in stark detail.

On 28 October 2009, United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions, Philip Alston, presented a report to the Third Committee (social, humanitarian and cultural) of the General Assembly arguing that the use of unmanned combat air vehicles for targeted killings should be regarded as a breach of international law unless the United States can demonstrate appropriate precautions and accountability mechanisms are in place.[52]

In June 2015, forty-five former US military personnel issued a joint appeal to pilots of aerial drones operating in Afghanistan, Iraq, Syria, Pakistan and elsewhere, urging them to refuse to fly and stating that their missions 'profoundly violate domestic and international laws.'

Keith Shurtleff, an army chaplain at Fort Jackson, South Carolina, worries 'that as war becomes safer and easier, as soldiers are removed from the horrors of war and see the enemy not as humans but as blips on a screen, there is very real danger of losing the deterrent that such horrors provide'.[53]

He found that '92 percent of the population sample he examined was found to be suffering from post-traumatic stress disorder – with children being the demographic most significantly affected.'[54]

Psychologists in Gaza, meanwhile, talk of a whole generation of Gazan children suffering deep psychological trauma because of the continual exposure to the buzzing sounds of drones high above, machines that can spit lethal violence upon them and their families at any moment.

Writer Mark Bowden has disputed this viewpoint, saying in his The Atlantic article, 'But flying a drone, [the pilot] sees the carnage close-up, in real time—the blood and severed body parts, the arrival of emergency responders, the anguish of friends and family.'

This assessment is corroborated by a sensor operator’s account: 'The smoke clears, and there’s pieces of the two guys around the crater.'

According to Mark Gubrud, claims that drones can be hacked are overblown and misleading; moreover, drones are more likely to be hacked if they are autonomous, because otherwise the human operator would take control: 'Giving weapon systems autonomous capabilities is a good way to lose control of them, either due to a programming error, unanticipated circumstances, malfunction, or hack and then not be able to regain control short of blowing them up, hopefully before they've blown up too many other things and people.'[61]

There is an ongoing debate as to whether the attribution of moral responsibility can be apportioned appropriately under existing international humanitarian law, which is based on four principles: military necessity, distinction between military and civilian objects, prohibition of unnecessary suffering, and proportionality.[63]

In 2013, a Fairleigh Dickinson University poll asked registered voters whether they 'approve or disapprove of the U.S. Military using drones to carry out attacks abroad on people and other targets deemed a threat to the U.S.?' The results showed that three in every four voters (75%) approved of the U.S. military using drones to carry out attacks, while 13% disapproved.[64]

In March 2013, DARPA began efforts to develop a fleet of small naval vessels capable of launching and retrieving combat drones without the need for large and expensive aircraft carriers.[69]

In November 2014, the Pentagon made an open request for ideas on how to build an airborne aircraft carrier that can launch and retrieve drones using existing military aircraft such as the B-1, B-52 or C-130.[71]

Are You Ready for Weapons That Call Their Own Shots?

As the ability of systems to act autonomously increases, those who study the dangers of such weapons, including the United Nations’ Group of Governmental Experts, fear that military planners may be tempted to eliminate human controls altogether.

The proposed ban competes with the growing acceptance of this technology, with at least 30 countries having automated air and missile defense systems that can identify approaching threats and attack them on their own, unless a human supervisor stops the response.

The speed with which the technology is advancing raises fears of an autonomous weapons arms race with China and Russia, making it more urgent that nations work together to establish controls so humans never completely surrender life and death choices in combat to machines.