The Ethics of Acquiring Disruptive Military Technologies

In his September 2019 United Nations General Assembly speech, British Prime Minister Boris Johnson warned of a dystopian future of digital authoritarianism, the practical elimination of privacy, and “terrifying limbless chickens,” among other possible horrors.2 Highlighting artificial intelligence, human enhancement, and cyber technologies, Johnson warned that “unintended consequences” of these technologies could have dire and global effects.

Despite artificial intelligence’s (AI) potential for improved targeting to reduce collateral harm, Google, the European Union, and the 2019 winner of the Nobel Peace Prize, among many others, have called for a ban on research on machines that can decide to take a human life.3 A number of researchers have also raised concerns regarding the medical and social side effects of human enhancement technologies.4 While cyber technologies have been around for a while, their dual-use nature raises concerns about the disruptive effect that an adversary’s cyber operations can have on civilian life, something that could escalate into a very real war.5 In fact, whereas the previous U.S. administration was criticized for being ineffective regarding cyber operations, the current one is frequently criticized for being too aggressive.6 The confusion that disruptive technologies create suggests that the problem is not so much with the new capabilities themselves as with the norms that should govern their use, and by extension, their acquisition.

Aristotle famously pointed out that if machines could operate autonomously there would be no need for human labor, thus disrupting the social relationships of the time.7 In fact, the trajectory of technology development can largely be described as an effort to reduce human labor requirements, and, especially in the military context, the need for humans to take risk.

Though funded by the Department of Defense, the inventors of the Internet, for example, simply sought a way for researchers to collaborate.8 They did not anticipate the impact this technology would have on industries such as print media, whose profitability has significantly declined since the Internet’s introduction.9 Nor did they fully anticipate the impact it would have on national security as increasing connectivity exposes military systems and information as well as critical civilian infrastructure to attack.10

Defining Technologies

For the purposes of this discussion, technology is broadly understood to include physical objects and activities and the practical knowledge about both, i.e., knowledge about the kinds of things one can do with those objects and activities.11 Some technologies embody all three aspects.

A mechanical autopilot can be designed to take passenger comfort into account by limiting how steeply it will climb, descend, or turn, and thus has more autonomy and ethical sensitivity.12 While this discussion is not intended as a comprehensive survey of disruptive technology, it relies heavily on examples from AI, human enhancements, and cyber technologies to illustrate key points.

For the purposes of this discussion, artificially intelligent systems will refer to military systems that include both lethal autonomous weapons that can select and engage targets without human intervention and decision-support systems that facilitate complex decision-making processes, such as operational and logistics planning.

Human enhancements are any interventions to the body intended to improve a capability above normal human functioning or provide one that did not otherwise exist.13 As such, enhancements will refer to anything from pharmaceuticals to neural implants intended to enable human actors to control systems from a distance.

It does not refer to treatments or other measures intended to restore normal functions, or to measures that improve or provide new capabilities but do not involve a medical intervention, such as an exoskeleton that a soldier would simply put on.14 “Cyber” is a broad term that generally refers to technology that allows for and relies on the networking of computers and other information technology systems.

This network creates an environment typically referred to as “cyberspace.” As the National Institute of Standards and Technology defines it, cyberspace refers to the “interdependent network of information technology infrastructures, and includes the Internet, telecommunications networks, computer systems, and embedded processors and controllers in critical industries.”15 What is extraordinary about cyberspace is how this connectivity evolved into a domain of war, on par with air, land, sea, and space.

He observes that “technological breakthroughs” such as those “in metallurgy, explosives, steam turbines, internal combustion engines, radio, radar, and weapons,” when applied, for example, to a mature platform like the battleship, certainly and significantly improved its capabilities.

On the other hand, when these breakthroughs were combined with an immature technology, like aircraft, which in the beginning were slow, lightly armed, and limited in range, the combination revolutionized air and naval warfare.18 The effects of convergence, it seems, are difficult to anticipate and, as a consequence, control.

As Ron Adner and Peter Zemsky observe, what makes a technology — or combination of technologies — disruptive and not merely new are the attributes it introduces into a given market, or, for the purposes of this discussion, a community of users within the larger national security enterprise.

Rather, what is common to disruptive technologies is the novelty of the attributes they introduce and how useful those attributes are to at least a subset of the user community.19 To the extent a new technology sufficiently meets user requirements and incorporates attributes a subset of those users find attractive, it could displace the older technology over time, even if it does not perform as well.

Thus, in the early market for hard drives, customers accepted reduced capacity in terms of memory and speed as well as higher costs per megabyte to get “lighter weight, greater ruggedness, and lower power consumption” than previous hard drive options provided.20 Changing how actors compete in effect changes the game, which, in turn, changes the rules.

In fact, Christensen managed a fund that relied on his theory to identify opportunities for investment — within a year, it was liquidated.22 Subsequent analysis has attributed that failure in part to Christensen’s selectiveness regarding cases, with some accusing him of ignoring those that did not fit his theory.

Others account for the predictive inadequacy of the theory by pointing out that other factors beyond those associated with the technology — including chance — can affect a technology’s disruptive effects.23 The claim here is not that the conditions for disruptive effects dictate any particular outcome.

Yet, development of the technologies discussed here empowers smaller actors to develop “small, smart, and cheap” means to challenge larger, state actors — and win.24 This dynamic simultaneously places a great deal of pressure on all actors to keep developing these and other technologies at a faster and faster rate, creating ever more disruption.

First, in September 2019, Houthi rebels in Yemen claimed to have employed unmanned aerial vehicles and cruise missiles to launch a devastating attack on Saudi oil facilities, leading to an immediate 20 percent increase in global oil prices and prompting the United States to move additional military forces to the Middle East.25 To make matters more complicated, there is evidence that the Houthis were not in fact responsible for the attacks, but that they were launched by Iranian proxies in Iraq.

This use of autonomous technologies enabled the Iranians to obscure responsibility, which in turn constrained the political options available to the United States and its allies to respond effectively.26 However, in the national security environment, when the conditions for disruption exist, the resulting potential for game-changing innovation forces state actors to grapple with how best to respond to that change.

Second, the Islamic State reportedly provided its fighters with performance enhancements in order to motivate them, in the words of one Islamic State member, to go to battle “not caring whether you lived or died.”27 It is this ability to enhance fighter capabilities that enabled the Islamic State to fight outnumbered and win against Iraqi, Syrian, and Kurdish forces, especially in 2014, when it rapidly expanded its presence in Iraq and Syria.

Third, in 2015, Iranian hackers introduced malware in a Turkish power station that created a massive power outage, leaving 40 million people without power, reportedly as payback for Turkey’s support for Saudi operations against the Houthis.28 While perhaps one of the more dramatic Iranian-sponsored attacks, there have been numerous others: Iran is suspected of conducting a number of distributed denial-of-service attacks as well as other attacks against Saudi Arabia, the United Arab Emirates, Jordan, and the United States.29 None of the technologies in these examples are terribly advanced and most are available commercially in some form.

As Rudi Volti observes, “technological change is often a subversive process that results in the modification or destruction of established social roles, relationships, and values.”30

Challenging the Norms of Warfighting

The question here is not whether such technologies as described above will challenge the norms of warfighting, but how they will.

However, the early submarine’s slow speed and light armor made it vulnerable to and ineffective against the large surface warfare ships of the day.31 As a result, it was initially used against unarmed merchant ships, which, even at the time, was in violation of international law.32 What is ironic about the introduction of the submarine is that its disruptive effects were foreseen.

Nonetheless, the attacks against British merchant vessels were so devastating that he was forced to resign.33 In fact, unrestricted submarine warfare traumatized Britain such that after the war it tried to build an international consensus to ban submarine warfare altogether.34 When that failed, Britain and the United States backed another effort to prohibit unrestricted submarine warfare and in 1936 signed a “Procès-Verbal” to the 1930 London Naval Treaty, which required naval vessels, whether surface or submarine, to ensure the safety of merchant ship crews and passengers before sinking them.35 Later, to encourage more states to sign onto the ban, that prohibition was modified to permit the sinking of a merchant ship if it was “in a convoy, defended itself, or was a troop transport.”36 Despite this agreement, both Germany and the United States engaged in unrestricted submarine warfare again in World War II.

Nimitz admitted to the court that the United States had largely done the same in the Pacific.37 So while certainly a case of mitigated victor’s justice, this muddled example also illustrates two things: the normative incoherency that arises with the introduction of new technologies as well as the pressure of necessity to override established norms.

Later, to accommodate at least some of the advantages the submarine provided, they modified the norm by assimilating noncombatant merchant seamen into the class of combatants by providing them with some kind of defense.38 In doing so, they accepted that the submarine placed an obligation on them to defend merchant vessels rather than maintain a prohibition against attacking them, at least under certain conditions.

Eventually, however, submarine technology improved to the point it could more effectively compete in more established naval roles and challenge surface warfare ships, which, along with naval aviation and missile technologies, not only helped make the battleship obsolete but also brought the submarine’s use more in line with established norms.

While these developments were welcome, over time they contributed to population increases that strain the economy and lead to overcrowding.41 This dynamic is especially true for disruptive technologies whose attributes often interact with their environment in ways their designers may not have anticipated, but which users find beneficial.

However, this dynamic invites a “give and take” of reasons and interests regarding both which technologies to develop and which rules should govern their use, a process not unlike John Rawls’ conception of reflective equilibrium, where, in the narrow version, one revises one’s moral beliefs until arriving at a level of coherency where not only are all beliefs compatible, but in some cases, they explain other beliefs.42 While likely a good descriptive account of what happens in the formation of moral beliefs, such a process will not necessarily give an account of what those beliefs should be.

For example, the introduction of long-range weaponry eventually displaced chivalric norms of warfighting, which were really more about personal honor than the kinds of humanitarian concerns that motivated the just war tradition.43 In the military context, norms for warfighting are more broadly captured in what Michael Walzer refers to as the “war convention,” which is “the set of articulated norms, customs, professional codes, legal precepts, religious and philosophical principles, and reciprocal arrangements that shape our judgments of military conduct,” which includes choices regarding how to fight wars and with what means.44 The war convention includes the just war tradition, which evolved to govern when states are permitted to go to war and how they can fight in them.

Walzer’s conception of jus ad bellum demands that wars only be fought by a legitimate authority for a just cause, and even then only if they can be fought proportionally, with a reasonable chance of success, and only as a last resort.45 When it comes to fighting wars, jus in bello further requires force be used discriminately to avoid harm to noncombatants and in proportion to the value of the military objective.46 These conditions suggest that any technology that makes war more likely, less discriminate, or less proportional is going to be problematic, if not prohibited.

As Rawls observed, people act autonomously when they choose the principles of their action as “the most adequate possible expression” of their nature as “free and equal” rational beings.48 Because people are free and equal in this way, they are entitled to equal treatment by others.

This requirement of fairness, which Rawls saw as synonymous with justice, is reflected in the universality of moral principles: They apply to all, regardless of contingencies such as desire and interest.49 Any discussion of fairness, of course, will require answering the question, “Fairness about what?” For Rawls, it is fairness over the distribution of a broad range of social goods.

As with the concept of reflective equilibrium, it is not necessary to ignore the critiques and limitations of Rawls’ broader political theories in order to accept a commitment to moral and legal universalism that upholds the equality and dignity of persons.50 At a minimum, that commitment means treating others in a manner to which they have consented.

As Kant also argued, the fact that people can exercise moral autonomy gives them an inherent dignity that entitles them to be treated as ends and not merely as means to some other end.51 As a result, all people have a duty “to promote according to one’s means the happiness of others in need, without hoping for something in return.”52 A consequence of that duty is to care not just for the lives of others but for the quality of that life as well.

Similarly, enhancements could impair cognitive functioning in ways that make it impossible to attribute praise or blame to individuals.53 Together, these developments could change the professional identity of the military, which in turn will change the way society views and values the service the profession provides.

However, their service is valued differently, as evidenced by the controversy over awarding unmanned aerial vehicle operators a medal superior to the Bronze Star, which is normally reserved for those serving in combat zones.54 While such revaluation may not affect the relationship between political and military leaders, it can change how military service is regarded, how it is rewarded, and perhaps most importantly, who joins.

For example, AI systems associated with hiring, security, and the criminal justice system have demonstrated biases that have led to unjust outcomes independent of any biases developers or users might have.59 It is not hard to imagine similar biases creeping into AI-driven targeting systems.

Despite Staff Sgt. Robert Bales being found guilty of murdering 16 Afghan civilians in 2012, no one in the chain of command was held accountable for those murders.61 This is, in part, because, even under the idea of command responsibility, there has to be some wrongful act or negligence for which to hold the commander responsible.62 It also arises because one can hold Bales responsible.

The U.S. military, for example, sought and received a consent waiver to provide pyridostigmine bromide to counteract the effects of nerve agent by arguing a combination of military necessity, benefit to soldiers, inability to obtain informed consent, and lack of effective alternatives that did not require informed consent.66 Faced with a choice of risking some negative side effects to soldiers or significant casualties and possible mission failure, the Department of Defense conformed to Rawls’ condition that goods — in this case the right to consent — should only be sacrificed to obtain more of the same good.

As Boyce points out, the department’s claim that obtaining consent was not feasible was really “code” for “non-consent is not acceptable.”68 This point simply underscores the importance of addressing one’s moral commitments regarding new technology early in the development and acquisition process.

Any technology that distances soldiers from the violence they do or decreases harm to civilians will lower the political risks associated with using that technology.70 The ethical concern here is to ensure that decreased risk does not result in increased willingness to use force.

Edward Snowden’s revelations that the U.S. government collected information on its citizens’ private communications elicited protests as well as legal challenges about the constitutionality of the data collection.71 While these revelations mostly raised civil rights concerns, the fact that other state and nonstate actors can conduct similar data collection also raises national security concerns.

Charles Dunlap observed back in 2014 that U.S. adversaries, both state and nonstate, could identify, target, and threaten family members of servicemembers in combat overseas, in a way that could violate international law.72 In what Dunlap refers to as the “hyper-personalization” of war, adversaries could use cyber technologies to threaten or facilitate acts of violence against combatants’ family members unless the combatant ceases to participate in hostilities.73 Adversaries could also disrupt family members’ access to banking, financial, government, or social services in ways that significantly disrupt their lives.

Pervitin, for example, caused circulatory and cognitive disorders.74 Pyridostigmine bromide use is also closely associated with a number of long-term side effects including “fatigue, headaches, cognitive dysfunction, musculoskeletal pain, and respiratory, gastrointestinal and dermatologic complaints.”75 As noted above, the likelihood of these side effects was not fully taken into account due to inadequate testing at the time.

For example, a 2017 study catalogued a number of mental traumas among drone operators, including moral disengagement as well as intensified feelings of guilt resulting from riskless killing.76 Making matters even more complex, a 2019 study of British drone operators suggested that environmental factors, such as work hours and shift patterns, contributed as much, if not more, to the experience of mental injury as visually traumatic events associated with the strikes themselves.77

Social Disruption

Social disruption in this context has two components.

As Singer observed a decade ago, multiple technologies are driving the military demographic toward being both older and smarter — the average age of the soldier in Vietnam was 22, whereas in Iraq it was 27.79 Further complicating the picture is the fact that younger soldiers may be better suited to using emerging military technologies than those who are older and in charge.80 Not only could this pressure the military to reconsider how it distributes command responsibilities, it could also pressure it to reconsider whom it recruits, as mentioned above.

Singer also notes, however, that contractors and civilians, who are not subject to physical or other requirements associated with active military service, may be better positioned to use these autonomous and semi-autonomous technologies.81 Doing so, especially in the case of contractors, could allow the military to engage in armed conflict while displacing health care and other costs to the private sector.

Consider: 80 percent of contractors who deployed to Iraq reported having health insurance for the time they were deployed, but that insurance was not available if they experienced symptoms after their return.82 If soldiers experience neither risk nor sacrifice, they are not really soldiers as currently conceived and are likely better thought of as technicians than warriors.  In addition, these trends could affect the professional status of the military.

Thus, if one left the justification for military measures to utilitarian calculations, then no technologies — including weapons of mass destruction — would be prohibited as long as one could make a reasonable case that harm to the enemy was maximized and risk to one’s own soldiers minimized.

However, as Walzer notes, while “the limits of utility and proportionality are very important, they do not exhaust the war convention.”88 That is because, even in war, people have rights, and those who do not pose a threat, whatever side of the conflict they are on, have a right not to be intentionally killed.89 As Arthur Isak Applbaum puts it, utility theory “fails to recognize that how you treat people, and not merely how people are treated, morally matters.”90 Thus, while aggression permits a response, it does not permit any response.

To be morally permissible, the effect of the means used has to not only conform to jus in bello norms associated with international humanitarian law and, more broadly, the just war tradition, but also to the obligations one owes members of one’s community — both soldiers and civilians alike.

International law prohibits the development and acquisition of weapons that intentionally cause unnecessary suffering, are indiscriminate in nature, or cause widespread, long-term and severe damage to the natural environment or entail a modification to the natural environment that results in consequences prohibited by the war convention.91 It goes without saying that these rules would apply to disruptive technologies.

In fact, Article 36 of the Geneva Conventions’ Additional Protocol I specifically states: “In the study, development, acquisition or adoption of a new weapon, means or method of warfare, a High Contracting Party is under an obligation to determine whether its employment would, in some or all circumstances, be prohibited by this Protocol or by any other rule of international law applicable to the High Contracting Party.”92 The protocol, like the rest of the war convention, only addresses obligations to adversaries.

Boyce, in his discussion regarding the use of pyridostigmine bromide in the Gulf War, acknowledges the government’s claim that soldiers may be subjected to some risk of harm if it “promotes protection of the overall force and the accomplishment of the mission.”93 As noted above, such a utilitarian limit is helpful, in that it does restrict what counts as permissible by aligning it with the needs of other soldiers and citizens.

Sven Ove Hansson argues that permissions to expose others to risk should be based on one of the following justifications: self-interest, desert, compensation, or reciprocity.94 Since the concern here is coercively assigning risk, self-interest and reciprocity do not really apply, though, as the discussion on autonomy showed, the conditions governing self and mutual interest can shape interests in a way that is essentially coercive.

As he notes, “If a general principle sometimes is to a person’s advantage and never is to that person’s disadvantage (at least relative to the alternatives available), then actors who are guided by that principle can be understood to act for the sake of that person.”97 To illustrate, Applbaum draws on Bernard Williams’ thought experiment where an evil army officer who has taken 20 prisoners offers a visitor named “Jim” the choice of killing one person in order to save the remaining 19 or killing no one, which will result in the evil actor killing all 20.98 From the perspective of individual rights, Jim should not kill anyone.

To the extent Jim presents each prisoner with an equal chance of being killed, the prisoners can understand that he is giving them a chance for survival and that he is doing so for their own sake, even though one person will be killed.99 Thus, acting on the principle that it is fair to override consent in cases where no one is worse off and at least someone is better off seems a plausible justification for coercively assigning risk.100 This rationale, in fact, was a factor in the Food and Drug Administration’s (FDA) decision to grant the Department of Defense the pyridostigmine bromide waiver.101 Given equal chances of exposure to a nerve agent, everyone was in a better position to survive and since the expected side effects were not lethal, no one was in a worse position.
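The arithmetic behind this “no one worse off, at least someone better off” test is simple enough to make explicit. The short sketch below assumes Williams’ 20 prisoners and a fair lottery over who is killed; the survival probabilities and the comparison logic are illustrative assumptions, not part of the original argument.

```python
# Illustrative sketch of the "no one worse off, at least someone better off"
# test applied to Williams' thought experiment (assumptions: 20 prisoners,
# Jim picks the victim by a fair lottery).

N_PRISONERS = 20

# Option A: Jim refuses, and the officer kills all 20 prisoners.
survival_if_refuse = [0.0] * N_PRISONERS

# Option B: Jim kills one prisoner chosen by fair lottery; each prisoner
# survives with probability 19/20.
survival_if_lottery = [(N_PRISONERS - 1) / N_PRISONERS] * N_PRISONERS

# The fairness principle in the text: the lottery is permissible because it
# leaves no prisoner worse off and makes at least one (here, every) prisoner
# better off in terms of survival chances.
no_one_worse_off = all(b >= a for a, b in zip(survival_if_refuse, survival_if_lottery))
someone_better_off = any(b > a for a, b in zip(survival_if_refuse, survival_if_lottery))

print(no_one_worse_off, someone_better_off)  # True True
```

The same structure applies to the pyridostigmine bromide case: given an equal chance of nerve-agent exposure, every soldier's survival prospects improve under the waiver and, on the stated assumption that the side effects were not lethal, no one's prospects worsen.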

At the conference, Prussia requested that the scope be broadened to deal with any scientific discoveries that had military applications, but Britain and France opposed and the request was not adopted.104

Thus, something can only be necessary in relation to the available alternatives, including the alternative to do nothing.106 It is not enough that something works — it must work at the lowest cost.107 Under this view, any technology could be necessary as long as it provided some military advantage and there was no less costly means to obtain that advantage.

For example, alertness-enhancing amphetamines would be considered necessary as long as other means to achieve that alertness, such as adequate rest, were not available.108 Conceived this way, necessity is more a reason for violating norms than a norm itself.109 Invoking it when it comes to a disruptive technology gives permission to set aside concerns regarding moral permissibility and proceed with the technology’s introduction and use.

Take, for example, two alternatives that both achieve the same legitimate military purpose, but one does so at less cost and risk while violating a norm, whereas the other entails slightly higher but bearable costs and risks without violating any norms.110 Returning to the amphetamine example above, if it were possible to achieve the same number of sorties by training more aircrews or placing bases closer to targets, then on what basis would drugging pilots be necessary?

For example, the United States deployed nuclear weapons in Europe as a way of compensating for the Soviet Union’s superiority in terms of personnel and equipment.114 In doing so, it threatened what would arguably be an immoral, if not also unlawful, means to counter an enemy advantage as a result of a lawful military capability.

Even then, the use of nuclear weapons, if they could be justified at all, could only be justified in terms of a supreme emergency, which requires that a threat be grave and imminent.115 Einstein was right: The development of the atomic bomb, and I would argue any disruptive technology, should be conditioned on whether it avoids a disadvantage for one’s side that an adversary would likely be able to exploit.

In general, proportionality is a utilitarian constraint that requires an actor to compare the goods and harms associated with an act and ensure that the harm done does not exceed the good.116 In this way, proportionality is closely connected with necessity: For something to be necessary, it must already represent the most effective choice for pursuing the good in question.

However, as Walzer notes, “proportionality turns out to be a hard criterion to apply, for there is no ready way to establish an independent or stable view of the values against which the destruction of war is to be measured.”119 Even in conventional situations, it is not clear how many noncombatant lives are worth any particular military objective.

The decision to conduct air attacks against civilian population centers like Dresden was typically justified by the belief that doing so would incite terror and break German morale, ending the war sooner and saving more lives than the attack cost.

As Patrick Tomlin points out, however, by taking probabilities into account one can end up with the result that intending a larger harm with a low probability of success could be just as proportionate as intending to inflict a much smaller harm but with an equally low probability of resulting in the larger harm.

Thus, intending to kill someone with a low probability of success is proportionally equal to intending a lesser harm even though it comes with the same low probability that it will result in death.125 It seems counter-intuitive, however, that killing could, under any circumstances, be as proportionate as breaking a finger, assuming both resulted in a successful defense.

As he writes, “It matters what the defensive agent is aiming for, and the significance of that cannot be fully accounted for in a calculation which discounts that significance according to the likelihood of occurring.”126 A cyber operation to shut down an air traffic control system to force fatal aircraft collisions, even if it is unlikely to be successful, would be less proportionate than a cyber operation that disrupts an adversary’s electric grid, as Iran did to Turkey, even if there was similar loss of life.

Thus, under conditions of uncertainty, proportionality calculations should give greater weight to the intended harm, independent of its likelihood, and in so doing amplify the weight given to unintended harms.127
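A small numerical sketch may make the contrast concrete. The harm weights and probability below are made-up assumptions, not figures from Tomlin or the text; the point is only to show how naive probability discounting makes the two intentions look comparable, while weighting the intended harm at full value does not.

```python
# Illustrative comparison of two ways of scoring proportionality under
# uncertainty. All numbers are made-up assumptions for illustration only.

P = 0.05            # shared low probability that the act results in a death
HARM_DEATH = 100.0  # illustrative weight for killing someone
HARM_FINGER = 1.0   # illustrative weight for breaking a finger

# Naive expected-harm discounting (the view criticized above): intending to
# kill with a low chance of success scores about the same as intending to
# break a finger with the same low chance of causing a death.
naive_intend_death = P * HARM_DEATH                    # 5.0
naive_intend_finger = HARM_FINGER + P * HARM_DEATH     # 6.0

# Alternative suggested above: count the intended harm at full weight,
# independent of its likelihood, and discount only the unintended harm.
adjusted_intend_death = HARM_DEATH                     # 100.0
adjusted_intend_finger = HARM_FINGER + P * HARM_DEATH  # 6.0

print(naive_intend_death, naive_intend_finger)         # roughly comparable
print(adjusted_intend_death, adjusted_intend_finger)   # killing now weighs far more
```

On this adjusted scoring, the air traffic control example reads as intended: an operation aiming at fatal collisions is disproportionate even when its chance of success is low.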

As disruptive as the 2007 Russian cyber operations directed at Estonia were, it is not clear they would justify developing a prohibited technology in response.131 However, to the extent possession of a technology enables a state to resist such attacks, and there is no other less morally risky alternative, then arguably the state should develop that technology.
