AI News: Artificial Intelligence in the US Army

Artificial Intelligence, climate change and the U.S. military

The field of artificial intelligence (AI) is creating a continuum that links climate change science with the U.S. military's preparedness for climate risks.

By 2100, the land where 200 million people live today could be submerged daily (Climate Central, "Report: Flooded Future: Global vulnerability to sea level rise worse than previously understood", October 29, 2019).

A quick internet search finds the report cited in a few articles and posted as a PDF by online outlets such as Vice and Popular Mechanics.

Nonetheless, this document establishes that adapting to the violent ecological, military, political, economic and social consequences of climate change is a dire and imperative necessity for the Army and for the entire U.S. military.

On the lowest and most densely populated coastlines, in Bangladesh, Vietnam, China, Indonesia, Thailand, the Netherlands, and Louisiana, among others, 237 to 300 million people will be threatened by annual flooding by 2050.

In a previous article, we saw how the U.S. Army research branch makes use of climate change research in order to define and propose a massive military adaptation effort (Jean-Michel Valantin, “The U.S Army versus a Warming Planet”, The Red (Team) Analysis Society, November 12, 2019).

So, in military terms, AI will support and optimize the deployment of mechanized ground forces in theatres of operations (Hélène Lavoix, "Sensor and actuator (4): Artificial Intelligence, the Long March towards Advanced Robots and Geopolitics", The Red (Team) Analysis Society, May 13, 2019).

It is difficult not to think that, in the parts about the use of artificial intelligence, the authors are alluding to the current massive militarization of AI by the Chinese military, both in training and at the operational and decision-making levels (Jean-Michel Valantin, "Militarizing Artificial Intelligence – China (1) and (2)", The Red (Team) Analysis Society, April 23, 2018).

What motivates these military recommendations is the rapid multiplication of multidimensional risks (Jean-Michel Valantin, "The Midwest, the Trade war and the Swine Flu pandemic: the Agricultural and Food Super-Storm is Here", The Red (Team) Analysis Society, June 3, 2019), such as the sea-level rise risks the Climate Central report defines.

Those cascades are becoming an "entity" that is besieging contemporary societies (Jean-Michel Valantin, "Hyper Siege: Climate Change and U.S. National Security", The Red (Team) Analysis Society, March 17, 2014, and "The U.S. Navy vs Climate and Ocean Change", The Red (Team) Analysis Society, June 11, 2018; and David Wallace-Wells, The Uninhabitable Earth: Life After Warming, 2019).

So, AI power unveils itself (Hélène Lavoix, “When Artificial Intelligence will Power Geopolitics-Presenting AI”, The Red (Team) Analysis Society, November 27, 2017), through scientific research and military preparedness, as a tool and a possible “ally” in the face of the rapidly coming “perfect climate and social super storm”.

This new AI power will be useful for adapting to the planetary crisis and its cascade of hyper violent consequences (Jean-Michel Valantin, “The Planetary Crisis Rules”, part 1, 2, 3, 4, 5, The Red (Team) Analysis Society).

Artificial Intelligence and the Future of Conflict

It is hard to predict the exact impact and trajectory of technologies enabled by artificial intelligence (AI).1 Yet these technologies might stimulate a civilizational transformation comparable with the invention of electricity.2 AI applications will change many aspects of the global economy, security, communications, and transportation by altering how humans work, communicate, think, and decide.

Because the development of AI, machine learning, and autonomous systems relies on factors such as data, workforces, computing power, and semiconductors, disparities in how well different countries harness these technologies may widen in the future.

From the use of autonomous systems to the transformation of command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) capabilities, and from intelligence processing to cognitive security, AI will change how wars are planned and fought.

As disruptive technologies provide new tools for totalitarian regimes and extremist groups, the transatlantic community needs to develop solutions to mitigate the malicious use of intelligent machines.

The first category is the physical domain, in which ballistic missiles, main battle tanks, aircraft, the weaponry of ground infantries, and other military hardware are used to degrade or destroy an adversary’s physical resources.

Here, each side tries to gain superiority by improving the way information is shared, connecting space-based intelligence to weapons systems, or calculating the trajectory of an incoming ballistic missile.
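To make the last example more concrete, here is a minimal, hypothetical sketch of extrapolating an incoming ballistic arc from two tracked position fixes. The radar fixes, the time step, and the drag-free, flat-Earth physics are illustrative assumptions only, not a description of any fielded system.

```python
# Hypothetical sketch: extrapolating a ballistic trajectory from two tracked
# (position, time) samples, assuming simple drag-free projectile motion.
# All values and simplifications are illustrative assumptions.
import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity acceleration (m/s^2)

def predict_impact(p1, p2, t1, t2, dt=0.01):
    """Step the estimated arc forward until it reaches ground level (z = 0)."""
    v = (p2 - p1) / (t2 - t1)   # velocity estimated from the two fixes
    p, t = p2.copy(), t2
    while p[2] > 0:
        v = v + G * dt
        p = p + v * dt
        t += dt
    return p, t

p1 = np.array([0.0, 0.0, 12000.0])      # first fix (m), illustrative
p2 = np.array([900.0, 100.0, 11900.0])  # second fix one second later (m)
impact_point, impact_time = predict_impact(p1, p2, t1=0.0, t2=1.0)
print("predicted impact (m):", impact_point.round(1), "at t ~", round(impact_time, 1), "s")
```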

This will be an unending task because new concepts will need to constantly change to keep up with countermoves such as adversarial algorithms and data-poisoning attempts, which involve feeding adversarial data to AI systems.
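To make the data-poisoning idea concrete, here is a minimal, hypothetical sketch: it flips a fraction of training labels on a synthetic dataset and measures how a simple classifier's test accuracy degrades. The dataset, model, and poisoning rates are illustrative assumptions, not any specific military or commercial system.

```python
# Hypothetical sketch: label-flipping data poisoning against a simple classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic two-class problem standing in for any data-driven model.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def flip_labels(y, rate, rng):
    """Simulate a crude poisoning attempt by flipping a fraction of training labels."""
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

for rate in (0.0, 0.1, 0.3):
    model = LogisticRegression(max_iter=1000).fit(X_train, flip_labels(y_train, rate, rng))
    print(f"poison rate {rate:.0%}: test accuracy {model.score(X_test, y_test):.3f}")
```

Real poisoning attempts are far subtler than random label flips, which is part of why the cycle of concepts and countermoves described above has no end point.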

Advances in neuroscience, behavioral biology, and other fields will enable new technological leaps such as human-machine teaming and increased autonomy in military systems.3 Robotic swarms—the "collective, cooperative dynamics of a large number of decentralized distributed robots," in the words of AI researcher Andrew Ilachinski—form another field in which computer science and robotics follow in biology's wake.4

Human-machine collaboration is likely to bring about faster and better decisionmaking by enabling enhanced management of massive data streams.
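As a loose illustration of the decentralized swarm dynamics Ilachinski describes, the sketch below has each simulated agent update its motion from purely local rules (cohesion, alignment, separation) computed over nearby neighbors. The agent count, radii, and gains are arbitrary assumptions chosen for demonstration.

```python
# Hypothetical sketch: decentralized swarm rules, in the spirit of the
# "collective, cooperative dynamics" quoted above. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
N, STEPS, DT = 30, 200, 0.1
pos = rng.uniform(0, 10, size=(N, 2))   # agent positions
vel = rng.normal(0, 1, size=(N, 2))     # agent velocities

def step(pos, vel, neighbor_radius=3.0, sep_radius=0.5):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d < neighbor_radius) & (d > 0)
        if nbrs.any():
            cohesion = pos[nbrs].mean(axis=0) - pos[i]     # move toward neighbors' centroid
            alignment = vel[nbrs].mean(axis=0) - vel[i]    # match neighbors' heading
            close = nbrs & (d < sep_radius)
            separation = (pos[i] - pos[close]).sum(axis=0) if close.any() else 0.0
            new_vel[i] += 0.05 * cohesion + 0.05 * alignment + 0.2 * separation
    return pos + DT * new_vel, new_vel

for _ in range(STEPS):
    pos, vel = step(pos, vel)
print("mean distance to swarm centroid:",
      np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean())
```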

Cognitive hacking, a form of attack that seeks to manipulate people’s perceptions and behavior, takes place on a diverse set of platforms, including social media and new forms of traditional news channels.

Cognitive security is a new multisectoral field in which actors engage in what Waltzman called “a continual arms race to influence—and protect from influence—large groups of people online.”6 AI could cause drastic changes in hybrid warfare, which is a major concern for NATO.

The Chinese government and Chinese companies have invested significantly in expanding their computing power and semiconductor capabilities to narrow the gap with actors in the West and develop an independent industrial base.8 At present, the United States is the leading AI power, while China is emerging as an aspirant challenger.

Reportedly, additional bills are being prepared to counter risks of disinformation and label AI-enabled fake content a threat to national security.9 Parliamentary groups in the UK and Australia have proposed legislative measures to prevent similar harmful use of digital platforms.

More than thirty countries and international organizations have strategies and initiatives for artificial intelligence.10 These have varying priorities, from taking advantage of military clout (United States) to proposing values-based AI (European Union) and from leveraging leadership in AI research (Canada, China) to driving military-civilian fusion (China).11 This diversity continues to evolve both inside and outside NATO.

European policymakers often emphasize protecting core values, regulating big tech, and preventing malign actors from using AI and accompanying technologies to target Western political institutions, public safety, and individuals.

In 2018, a consortium of U.S. and European experts from industry, civil society, and research institutions published a report that outlined three areas of concern.12 The first is the digital security domain, in which the report warned of potential AI vulnerabilities that would allow adversaries to stage large-scale, diversified attacks on physical, human, and software targets.

The transatlantic community will therefore have a full set of tasks on its plate, from observing how such dynamics develop in different regions to building international partnerships to ensure common interests and regulatory actions.13 NATO would benefit from initiatives to prepare for, govern, and regulate AI-related policy priorities.

The alliance has a long way to go in developing algorithmic warfare capabilities and adopting an AI-enabled C4ISR structure.14 Because most innovations in AI and robotics come from outside the military-industrial complex, some studies have encouraged the alliance to cooperate closely with big tech or develop ties with promising start-ups.15 The interdisciplinary conversation needs to go beyond tech companies.

Ideally, red teaming—in which a group adopts an adversarial point of view to challenge an organization to improve its effectiveness or detect a major weakness—and experimentation efforts should cover both allied exercises and more isolated, peacetime activities to test defenses in national security apparatuses.

Cleaner water through fluorescence spectroscopy and artificial intelligence

Miller makes use of conventional data, including the existing research and known chemistry, to perform calculations that water utilities are not going to carry out themselves because they lack the technical training, he said.

“The software also continues to support new regulatory challenges including the recently released (United States Environmental Protection Agency’s) proposed Lead and Copper Rule (LCR), which includes a suite of actions to reduce lead exposure in drinking water.

“If nobody is actually taking full scans of fluorescent samples, you can't build the model for that plant, because you don't have the experimental data to say, here's what's happening.”
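To illustrate the kind of plant-specific model the quote refers to, here is a hedged sketch that fits a regression from flattened fluorescence excitation-emission scans to a water-quality target. The synthetic scans, the partial least squares model, and the target variable are assumptions for illustration only, not the software discussed in the article.

```python
# Hypothetical sketch: regressing a water-quality target on full fluorescence scans
# (excitation-emission matrices). Data here are synthetic stand-ins.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

n_samples, n_ex, n_em = 200, 20, 30            # 20 excitation x 30 emission wavelengths
eems = rng.gamma(shape=2.0, scale=1.0, size=(n_samples, n_ex, n_em))
X = eems.reshape(n_samples, -1)                # flatten each scan into a feature vector

# Pretend target (e.g., an organic-matter proxy): a weighted sum of a few
# fluorescence regions plus noise, purely for demonstration.
weights = np.zeros(n_ex * n_em)
weights[:50] = 0.1
y = X @ weights + rng.normal(0, 0.5, n_samples)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = PLSRegression(n_components=5).fit(X_train, y_train)
print("held-out R^2:", round(float(model.score(X_test, y_test)), 3))
```

The point mirrors the quote: without real, full fluorescence scans from the plant itself, there is nothing meaningful to fit, no matter which model is chosen.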