AI News: Swarming attack drones could soon be real weapons for the military

ZCommunications: Trump Silicon Valley Supporters Win Secretive Military AI Contract

A startup founded by a young and outspoken supporter of President Donald Trump is among the latest tech companies to quietly win a contract with the Pentagon as part of Project Maven, the secretive initiative to rapidly leverage artificial intelligence technology from the private sector for military purposes.

The Google flap and the wider military drive to adopt commercial artificial intelligence technology unleashed a fierce debate among tech companies about their role in society and ethics around advanced computing.

Anduril Industries is developing virtual reality technology built on Lattice, a product the firm offers that combines ground-based sensors and autonomous helicopter drones to provide a three-dimensional view of terrain.

The technology is designed to provide a virtual view of the front lines to soldiers, including the ability to identify potential targets and direct unmanned military vehicles into combat.

“Then we take that data and run predictive analytics on it, and tag everything with metadata, find what’s relevant, then push it to people who are out in the field,” Luckey said. “Practically speaking, in the future, I think soldiers are going to be superheroes who have the power of perfect omniscience over their area of operations, where they know where every enemy is, every friend is, every asset is.”
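The pipeline Luckey describes — ingest sensor data, tag it with metadata, filter for relevance, push results to the field — can be sketched in a few lines. Everything below is an illustrative stand-in (the class, function, and field names are my own), not Anduril's actual software:

```python
from dataclasses import dataclass, field

@dataclass
class Detection:
    """A single sensor detection. All names here are illustrative."""
    sensor_id: str
    position: tuple   # (x, y, z) in local terrain coordinates
    label: str        # e.g. "vehicle", "person"
    confidence: float
    metadata: dict = field(default_factory=dict)

def tag_and_filter(detections, min_confidence=0.8):
    """Tag every detection with metadata, then keep only the relevant
    ones -- mirroring the 'tag everything, find what's relevant' step."""
    relevant = []
    for d in detections:
        d.metadata["tagged"] = True
        if d.confidence >= min_confidence:
            relevant.append(d)
    return relevant

def push_to_field(relevant):
    """Stand-in for delivering filtered detections to users in the field."""
    return [f"{d.label} at {d.position} ({d.confidence:.0%})" for d in relevant]

dets = [
    Detection("tower-1", (10, 4, 0), "vehicle", 0.93),
    Detection("drone-2", (55, 12, 30), "person", 0.41),
]
print(push_to_field(tag_and_filter(dets)))
```

The low-confidence detection is tagged but filtered out before anything reaches the field, which is the "find what's relevant" step of the quoted pipeline in miniature.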

In 2017, as part of an initiative that had begun the previous year, the Defense Department also unveiled the Algorithmic Warfare Cross-Functional Team, known as Project Maven, to harness the latest artificial intelligence research into battlefield technology, starting with a project to improve image recognition for drones operating in the Middle East.

The Defense Innovation Unit, said Luckey, proved “that people in Silicon Valley could actually get stuff into production, actually do work with the government.” He added, “I don’t think that I would have started this company if it wasn’t for the work of people like Raj Shah doing great work and proving that you actually could get into it.” Building out major government contracts is an inherently political endeavor — something that appears not to be lost on Luckey.

The lobbying effort focused on shaping the border security appropriations issued by Congress, as well as on educating lawmakers on “artificial intelligence and autonomous systems and their application to military force protection,” according to the filings.

Among his political largesse, Luckey donated to political action committees supporting Trump, the senior lawmakers on the defense and appropriations committees, and a number of controversial conservative lawmakers, including Rep. Steve King, R-Iowa, who has defended white supremacy and questioned the contributions of nonwhite people to society.

That approach has seen the Defense Department negotiate with contractors to provide a fixed price for expenses and profits, one that, in Luckey’s telling, has limited the military’s ability to encourage the kind of breakthrough technologies needed for the future of war.

China, an Anduril employee wrote in the paper, has provided a “multibillion-dollar national investment initiative to support ‘moonshot’ projects, start-ups and academic research in A.I.” Even as it seeks to shake up the model for contracts, though, Anduril is also embracing the traditional approach.

As the military worked to bring in leading Silicon Valley firms as contractors, the resulting relationships have sparked massive resistance from workers, many of whom have argued that they became engineers to make the world a better place, not a more violent one.

After The Intercept and other media outlets revealed that Google had been quietly tapped to work on Project Maven, applying its AI technology to help analysts identify drone targets on the battlefield, thousands of workers protested the contract.

“But ostracizing the U.S. military could have the opposite effect of what these protesters intend: If tech companies want to promote peace, they should stand with, not against, the United States’ defense community.” What was left out of the column, however, was that, as the piece went to print, Anduril was beginning its own work on Project Maven.

In interviews and public appearances, Luckey slammed engineers for protesting government work, arguing that those claiming conscientious opposition to military work are among a “vocal minority” that empowers American adversaries abroad.

Moreover, he said that the Defense Department has failed to connect with top tech talent because many engineers are “stuck in Silicon Valley at companies that don’t want to work on national security.” In Anduril, Luckey is presenting a company that is unapologetic about its work, whether capturing immigrants or killing people on the battlefield.

In contrast, Luckey told Defense and Aerospace Report, the U.S. can train its AI software “in industry, in enterprise, in national security.” The U.S., Luckey went on, could test AI by “using our current military advantage to train future AI developments.” He called for employing these technologies in ongoing “large-scale conflicts” around the world.

Why Military Blockchain is Critical in the Age of Cyber Warfare

Today, warfighters use connected devices to coordinate air strikes on the battlefield, drones are controlled from thousands of miles away, commanders watch real-time streaming video of the battle space, and logistics and the broader supply chain are regulated and managed by complex digital technologies.

More disturbing, the Royal Institute of International Affairs (also known as “Chatham House”), a not-for-profit and non-governmental organization based in London whose mission is to analyze and promote the understanding of major international issues and current affairs, warned in a January 2018 report that U.S., British and other nuclear weapons systems are increasingly vulnerable to cyber-attacks.

“As a result, current nuclear strategy often overlooks the widespread use of digital technology in nuclear systems…The likelihood of attempted cyber-attacks on nuclear weapons systems is relatively high and increasing from advanced persistent threats from states and non-state groups.” This is far from an abstract threat.

In 2010, the U.S. Air Force (USAF) lost contact with a field of 50 Minuteman III ICBMs at FE Warren Air Force Base in Wyoming for an hour, raising the terrifying prospect that an enemy actor might have taken control of the missiles and been feeding incorrect information into the nuclear command-and-control networks.

The board found that the military’s systems were vulnerable, and that the government was “not prepared to defend against this threat.” The report warned that in a successful cyber attack, military commanders could lose “trust in the information and ability to control U.S. systems and forces.” The report emphasized that “systems and forces” include nuclear weapons and related nuclear command, control, and communications systems.

For example, the Nuclear Threat Initiative (NTI) (a U.S. non-profit funded by CNN founder Ted Turner) published a report on cyber risks to nuclear weapons systems and offered recommendations developed by a group of high-level former and retired government officials, military leaders, and experts in nuclear systems, nuclear policy, and cyber threats.

Arms Control Today

“The U.S. military has talked about the strategic importance of replacing ‘king’ and ‘queen’ pieces on the maritime chessboard with lots of ‘pawns,’ and ACTUV is a first step toward doing exactly that.” The Navy is not alone in exploring future battle formations involving various combinations of crewed systems and swarms of autonomous and semiautonomous robotic weapons.

Although the rapid deployment of such systems appears highly desirable to Work and other proponents of robotic systems, their development has generated considerable alarm among diplomats, human rights campaigners, arms control advocates, and others who fear that deploying fully autonomous weapons in battle would severely reduce human oversight of combat operations, possibly resulting in violations of the laws of war, and could weaken barriers that restrain escalation from conventional to nuclear war.

Yet such systems cannot independently search for and strike enemy assets, and human operators are always present to assume control if needed.3 Many air-to-air and air-to-ground missiles are able to attack human-selected targets, such as planes or tanks, but cannot hover or loiter to identify potential threats.

As described by the U.S. Congressional Research Service, autonomy is “the level of independence that humans grant a system to execute a given task.” Autonomy “refers to a spectrum of automation in which independent decision-making can be tailored for a specific mission.” Put differently, autonomy refers to the degree to which humans are taken “out of the loop” of decision-making, with AI-empowered machines assuming ever-greater responsibility for critical combat decisions.

Under prevailing U.S. policy, as enshrined in a November 2012 Defense Department directive, “autonomous and semi-autonomous weapons systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” Yet, this country, like others, evidently is developing and testing weapons that would allow for ever-diminishing degrees of human control over their future use.

“The process to improve RAS autonomy,” the Army explained in 2017, “takes a progressive approach that begins with tethered systems, followed by wireless remote control, teleoperation, semi-autonomous functions, and then fully autonomous systems.”5 Toward this end, the Army is proceeding to acquire the SMET (Squad Multipurpose Equipment Transport), an unmanned vehicle designed to carry infantry combat supplies for up to 60 miles over a 72-hour period.
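The Army's progression from tethered systems to full autonomy is an ordered scale, which can be sketched as a simple enumeration. The type and function names below are my own illustration, not any official taxonomy:

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Ordered stages of the Army's 2017 RAS progression, lowest to
    highest autonomy (names are illustrative)."""
    TETHERED = 1
    WIRELESS_REMOTE_CONTROL = 2
    TELEOPERATION = 3
    SEMI_AUTONOMOUS = 4
    FULLY_AUTONOMOUS = 5

def human_in_loop(level: AutonomyLevel) -> bool:
    """At every stage short of full autonomy, a human operator makes,
    or can directly override, the system's decisions."""
    return level < AutonomyLevel.FULLY_AUTONOMOUS

assert human_in_loop(AutonomyLevel.TELEOPERATION)
assert not human_in_loop(AutonomyLevel.FULLY_AUTONOMOUS)
```

Using `IntEnum` makes the levels comparable, which matches the "spectrum" framing in the CRS definition quoted above: autonomy increases as the human role shrinks, with a hard break only at the final stage.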

A swarm of robot ships, by contrast, would be more difficult to target, and losing even a dozen of them would have a lesser effect on the outcome of combat.7 The Army appears to be thinking along similar lines, seeking to substitute robots for dismounted soldiers and crewed vehicles in highly exposed front-line engagements.

Military planners around the world are fully aware of the robotic ambitions of their competitors and are determined to prevail in what might be called an “autonomy race.” For example, the U.S. Army’s 2017 Robotic and Autonomous Systems Strategy states, “Because enemies will attempt to avoid our strengths, disrupt advanced capabilities, emulate technological advantages, and expand efforts beyond physical battlegrounds…the Army must continuously assess RAS efforts and adapt.” Likewise, senior Russian officials, including President Vladimir Putin, have emphasized the importance of achieving pre-eminence in AI and autonomous weapons systems.

“Despite [the Defense Department’s] insistence that a ‘man in the loop’ capability will always be part of RAS systems,” the CRS noted in 2018, “it is possible, if not likely, that the U.S. military could feel compelled to develop…fully autonomous weapon systems in response to comparable enemy ground systems or other advanced threat systems that make any sort of ‘man in the loop’ role impractical.”8

Assessing the Risks

Given the likelihood that China, Russia, the United States, and other nations will deploy increasingly autonomous robotic weapons in the years ahead, policymakers must identify and weigh the potential risks of such deployments.

“Unfortunately, the uncertainties surrounding the use and interaction of new military technologies are not subject to confident calculation or control,” he wrote in 2018.10 This danger is all the more acute because, on the current path, autonomous weapons systems will be accorded ever-greater authority to make decisions on the use of lethal force in battle.

This could occur as a deliberate decision, such as when a drone is set free to attack targets fitting a specified appearance (“adult male armed with gun”), or as a conditional matter, as when drones are commanded to fire at their discretion if they lose contact with human controllers.
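The two modes of delegation described above — a deliberate pre-set target profile versus a conditional fallback when the control link is lost — can be made concrete with a short decision sketch. This is purely illustrative logic for the distinction the text draws, not the behavior of any real system:

```python
def weapons_release_authorized(target_profile: str,
                               observed_profile: str,
                               link_alive: bool,
                               fire_on_lost_link: bool) -> bool:
    """Illustrative-only logic for the two delegation modes:
    deliberate (a pre-specified profile match authorizes engagement)
    and conditional (a standing lost-link authorization)."""
    # Deliberate delegation: the operator set a target profile in advance.
    if observed_profile == target_profile:
        return True
    # Conditional delegation: fire at discretion only if contact is lost
    # AND that fallback was explicitly granted beforehand.
    if not link_alive and fire_on_lost_link:
        return True
    return False

# Deliberate mode: a profile match alone authorizes engagement.
assert weapons_release_authorized("armed adult male", "armed adult male", True, False)
# Conditional mode: lost link plus standing authorization.
assert weapons_release_authorized("armed adult male", "unknown", False, True)
# Neither condition met: no engagement.
assert not weapons_release_authorized("armed adult male", "unknown", True, False)
```

Even in this toy form, the second branch shows why the conditional case worries critics: once the link drops, no human remains in a position to veto the decision.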

Proportionality requires militaries to apply no more force than needed to achieve the intended objective, while sparing civilian personnel and property from unnecessary collateral damage.11 These principles pose a particular challenge to fully autonomous weapons systems because they require a capacity to make fine distinctions in the heat of battle.

“Humans possess the unique capacity to identify with other human beings and are thus equipped to understand the nuances of unforeseen behavior in ways that machines, which must be programmed in advance, simply cannot,” analysts from Human Rights Watch (HRW) and the International Human Rights Clinic of Harvard Law School wrote in 2016.12 Another danger arises from the speed with which automated systems operate, along with plans for deploying autonomous weapons systems in coordinated groups, or swarms.

Strategies for Control

Since it first became evident that strides in AI would permit the deployment of increasingly autonomous weapons systems and that the major powers were seeking to exploit those breakthroughs for military advantage, analysts in the arms control and human rights communities, joined by sympathetic diplomats and others, have sought to devise strategies for regulating such systems or banning them entirely.

As part of that effort, parties to the Convention on Certain Conventional Weapons (CCW), a 1980 treaty that restricts or prohibits the use of particular types of weapons that are deemed to cause unnecessary suffering to combatants or to harm civilians indiscriminately, established a group of governmental experts to assess the dangers posed by fully autonomous weapons systems and to consider possible control mechanisms.

Such a ban could come in the form of a new CCW protocol, a tool used to address weapon types not envisioned in the original treaty, as has happened with a 1995 ban on blinding laser weapons and a 1996 measure restricting the use of mines, booby traps, and other such devices.13 Two dozen states, backed by civil society groups such as the Campaign to Stop Killer Robots, have called for negotiating an additional CCW protocol banning fully autonomous weapons systems altogether.

States could be required to subject proposed robotic systems to predeployment testing, in a thoroughly transparent fashion, to ensure they were compliant with these constraints.14 Those who favor a legally binding ban under the CCW claim this alternative would fail to halt the arms race in fully autonomous weapons systems and would allow some states to field weapons with dangerous and unpredictable capabilities.

Proponents of this approach point to the Martens clause of the Hague Convention of 1899, also inscribed in Additional Protocol I of the Geneva Conventions, stating that even when not covered by other laws and treaties, civilians and combatants “remain under the protection and authority of the principles of international law derived from established custom, from the principles of humanity and from the dictates of human conscience.” Opponents of fully autonomous weapons systems claim that such weapons, by removing humans from life-and-death decision-making, inherently contradict the principles of humanity and the dictates of human conscience and so should be banned.

Diplomats and policymakers must seize this moment, before fully autonomous weapons systems become widely deployed, to weigh the advantages of a total ban and consider other measures to ensure they will never be used to commit unlawful acts or trigger catastrophic escalation.

ENDNOTES

1. For a summary of such efforts, see Congressional Research Service (CRS), “U.S.

YouTube accused of making it too easy to build killer drones

It is “terrifyingly easy” to build a killer drone that can identify targets and make decisions to fire on its own, experts have warned.

Fears are growing that the snowballing production of deadly autonomous weapons could result in terrorist attacks and airports being held to ransom by individuals and extremist groups.

Scharre said the threat from drones is only going to “become more challenging as more people build autonomous drones and we need to prepare for that.”

The defense expert, who wrote “Army of None: Autonomous Weapons and the Future of War,” raised concerns that airports, concert venues, sports stadiums and government buildings in the UK are vulnerable to imminent attack from drone swarms and autonomous weapons.

“The government is further strengthening the law by extending the no-fly zone around airports and from November all drone users must be registered and tested – which will help hold illegal drone users to account.”

President Donald Trump signed an executive order earlier this month meant to spur the development and regulation of artificial intelligence.

The order aimed to improve access to the cloud computing services and data needed to build AI systems and promote cooperation with foreign powers.

One example is the Blowfish A2 drone, which China exports internationally and which Allen says is advertised as being capable of “full autonomy all the way up to targeted strikes.” The Blowfish A2 “autonomously performs complex combat missions, including fixed-point timing detection and fixed-range reconnaissance and targeted precision strikes.” Depending on customer preferences, Chinese military drone manufacturer Ziyan offers to equip Blowfish A2 with either missiles or machine guns.

Allen wrote: “Though many current generation drones are primarily remotely operated, Chinese officials generally expect drones and military robotics to feature ever more extensive AI and autonomous capabilities in the future.

“Chinese weapons manufacturers already are selling armed drones with significant amounts of combat autonomy.” Russia has also unveiled a new and deadly kamikaze drone.

“This is an extremely precise and very effective weapon, incredibly hard to fight by traditional air defense systems,” said Sergey Chemezov, head of Rostec, a Russian state giant that oversees strategic arms companies.



Rise of the Terminators - Military Artificial Intelligence (AI) | Weapons that think for Themselves

Weapons and warfare have become increasingly sophisticated; the latest battlefield technology is starting to look more like a computer game.

The Threat of AI Weapons

Will artificial intelligence weapons cause World War III?

Autonomous killer drones

The palm-sized quadcopters use real-time data mining and artificial intelligence to find and kill their targets.

Meet the dazzling flying machines of the future | Raffaello D'Andrea

When you hear the word "drone," you probably think of something either very useful or very scary. But could they have aesthetic value?

Fictional 'Slaughterbots' film warns of autonomous killer drones

In 'Slaughterbots,' autonomous drones use artificial intelligence to decide who to kill. The short film was commissioned by college professors and researchers.

Disturbing simulation shows power, terror of killer robots

What would you do if an army of drones opened fire on you? This terrifying simulation video is making the rounds on social media and prompting conversation.


F/A-18 Super Hornets Launch 103 Perdix Drone Swarm

Department of Defense Announces Successful Micro-Drone Demonstration - Press Operations Release No. NR-008-17, Jan. 9, 2017.

Pentagon's mini-drones swarm like bees

The Pentagon successfully showed off its Perdix miniature drones which can work together to attack enemies like a swarm of killer bees.