AI News: Artificial Intelligence and National Security

I Quit My Job to Protest My Company’s Work on Building Killer Robots

When I joined the artificial intelligence company Clarifai in early 2017, you could practically taste the promise in the air.

We founded Clarifai 4 Good, where we helped students and charities, and we donated our software to researchers around the world whose projects had a socially beneficial goal.

Some background: In 2014, Stephen Hawking and Elon Musk led an effort with thousands of AI researchers to collectively pledge never to contribute research to the development of lethal autonomous weapons systems — weapons that could seek out a target and end a life without a human in the decision-making loop.

The core issue is whether a robot should be able to select and acquire its own target from a list of potential ones and attack that target without a human approving each kill.

Those supporting the creation of autonomous weapons systems would prefer that human to be not "in the loop" but "on the loop," supervising the quick work of the robot in selecting and destroying targets, but not having to approve every last kill.
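The difference is easiest to see as control flow. Below is a minimal, purely illustrative Python sketch; every name in it is hypothetical, and no real weapons system or API is implied.

    # Illustrative sketch only: hypothetical names, no real system implied.

    class Operator:
        """Stand-in for a human supervisor."""
        def approves(self, target):   # "in the loop": consulted before every engagement
            return target == "confirmed hostile radar"
        def has_aborted(self):        # "on the loop": can only halt the running process
            return False

    def engage_in_the_loop(targets, operator):
        # A human must approve each individual engagement before it happens.
        return [t for t in targets if operator.approves(t)]

    def engage_on_the_loop(targets, operator):
        # The machine selects and engages on its own; the human supervises
        # and may abort the overall process, but approves nothing per target.
        engaged = []
        for t in targets:
            if operator.has_aborted():
                break
            engaged.append(t)
        return engaged

    targets = ["confirmed hostile radar", "unknown vehicle"]
    print(engage_in_the_loop(targets, Operator()))   # only the approved target
    print(engage_on_the_loop(targets, Operator()))   # everything, unless aborted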

When presented with the Harop, an Israeli loitering munition designed to home in on radar emissions, a lot of people say, "It's scary, but it's not genuinely freaking me out." But imagine a drone acquiring a target with a technology like face recognition.

Imagine this: You’re walking down the street when a drone pops into your field of vision, scans your face, and makes a decision about whether you get to live or die.

If you add machine learning to the mix, you’re looking at a system that can sift through exponentially increasing numbers of potential threats over a vast area.

Predictive technologies like face recognition and object localization inevitably have nonzero error rates, meaning a case of mistaken identity can be deadly.

Often these technologies fail disproportionately on people with darker skin tones or certain facial features, meaning their lives would be doubly subject to this threat.
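The scale of that risk is a matter of simple arithmetic. A back-of-the-envelope sketch in Python; all numbers are invented for illustration:

    # Back-of-the-envelope illustration; every number here is hypothetical.
    scanned = 1_000_000          # faces scanned across a wide area
    false_positive_rate = 0.01   # 1% mistaken identifications overall

    mistaken = scanned * false_positive_rate
    print(f"Expected mistaken identifications: {mistaken:,.0f}")   # 10,000

    # If the error rate is, say, 3x higher for one demographic group, its
    # members bear a proportionally higher share of the same deadly risk.
    group_rate = 3 * false_positive_rate
    print(f"Group error rate: {group_rate:.0%} vs. {false_positive_rate:.0%} baseline")

Even a system that is 99 percent accurate, pointed at a large enough crowd, produces mistaken identifications by the thousands.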

Suppose your algorithm did XYZ and everyone wants to know why. Because of the way machine learning works, even its programmers often can't know why an algorithm reached the outcome that it did.
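To make that opacity concrete: in the toy NumPy sketch below (random weights standing in for a trained model), the only "explanation" the system can offer for its decision is the weight matrices themselves.

    import numpy as np

    # A tiny two-layer network with random weights, standing in for a trained model.
    rng = np.random.default_rng(0)
    W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))

    x = rng.normal(size=4)              # some input, e.g. extracted image features
    scores = np.tanh(x @ W1) @ W2       # the model's raw output scores
    print(scores, "->", scores.argmax())

    # The "reason" for this decision is W1 and W2: dozens of numbers here,
    # millions or billions in real systems, with no human-readable rationale.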

Now, when you enter the realm of autonomous weapons and ask, "Why did you kill that person?", the complete lack of an answer simply will not do: morally, legally, or practically.

In such scenarios, a soldier can use human moral judgment in deciding how to react, and can be held accountable for those decisions.

If a machine is programmed to make quick decisions about how and when to fire a weapon, it’s going to do it in ways we humans can’t even anticipate.

Add 3D printing to the mix, and now it’s cheap and easy to create an army of millions of tiny (but lethal) robots, each one thousands of times faster than a human being.

With so many tech companies doing work that could help make killer robots a reality, it's important to remember that major powers won't be the only ones to have autonomous drones.

Current official Defense Department policy states that there must be a "human in the loop" for every kill decision, but that requirement is under debate right now, and a loophole in the policy would allow an autonomous weapon to be approved.

I truly hope that the industry changes course and agrees to take responsibility for its work to ensure that the things we build in the private sector won’t be used for killing people.

Intel’s Recommendations for the U.S. National Strategy on Artificial Intelligence (White Paper)

A comprehensive national AI strategy would earmark funding and resources for AI research and development, outline clear goals and accountability mechanisms, identify and remove barriers, drive public and private development and adoption of AI, and outline a program to mitigate negative or unintended consequences.

Comprehensive federal privacy legislation, along with policies requiring accountability for ethical design and implementation, is critical to ensuring that efficient data-sharing practices come with protections that mitigate potential individual and societal harm.
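One widely studied way to allow useful data sharing while protecting individuals is differential privacy, which adds calibrated noise to statistics before they are released. A minimal sketch of the standard Laplace mechanism; the parameter values are illustrative only:

    import numpy as np

    rng = np.random.default_rng()

    def laplace_release(true_count, sensitivity=1.0, epsilon=0.5):
        # Laplace mechanism: noise with scale sensitivity/epsilon makes the
        # released statistic roughly insensitive to any one person's record.
        return true_count + rng.laplace(scale=sensitivity / epsilon)

    # E.g., releasing how many records in a shared dataset match some query:
    print(laplace_release(true_count=1042))   # a noisy count, safer to share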

There is no doubt that AI, including machine learning and deep learning algorithms, as well as the hardware to accelerate them, is a transformative technology.[2] AI is already providing profound capabilities and benefits that were not achievable just a few years ago.[3] Looking to the future, AI has the potential to help us solve some of humanity's biggest challenges.

A 2017 report by PwC estimated that AI technologies could increase global GDP by $15.7 trillion by 2030, representing a potential 20% increase, with $3.7 trillion of that increase happening in North America.[4] Many countries, in an effort to be the first to reap the benefits of AI and lead the AI revolution, have published national AI strategies and committed significant resources to the development and deployment of AI.[5] In the U.S., the technology sector has long been a driver of global economic growth, bolstered by the country's unique approach to public-private partnerships.
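As a quick sanity check on those figures as quoted (simple arithmetic, not a reconstruction of PwC's model):

    # Arithmetic check on the figures quoted above; illustrative only.
    uplift = 15.7e12     # projected global GDP increase by 2030, in USD
    share = 0.20         # quoted as a potential 20% increase

    print(f"Implied baseline global GDP: ${uplift / share / 1e12:.1f} trillion")  # ~$78.5T
    print(f"North America's share of the uplift: {3.7e12 / uplift:.0%}")          # ~24%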

In 2017, China's State Council issued a comprehensive plan to build a domestic AI industry forecast to be worth roughly $150 billion by 2030 (plus related industries valued at nearly $1.5 trillion).[6] In 2017 alone, China's AI industry received nearly 180 billion yuan (about $26 billion USD) of investment and financing.[7] AI-related patent submissions in China almost tripled between 2010 and 2014 compared with the previous five years, and Chinese investments account for 48 percent of global AI startup funding.[8] South Korea, meanwhile, recently announced a roughly $900 million investment in AI over five years.[9] These commitments show that governments are in a unique position to advance the development and deployment of AI by leveraging existing in-house expertise and by committing funding.

In May 2018, the White House launched a task force dedicated to AI, as well as a pilot program looking into ways to use AI to accelerate federal auditing.[10] Within government, the National Institutes of Health is studying how recent AI advances such as deep learning could improve algorithms for cancer detection and treatment.[11] The U.S. government can and should build on the U.S. ethos of innovation and technology leadership by committing to a National AI Strategy that advances the country's industrial competitive advantage, improves quality of life for the population, and maintains the nation's AI technology leadership on the world stage.

In addition, given AI's reliance on vast amounts of data, any AI strategy requires responsible data liberation that allows for data use while protecting individual privacy, mitigating potential bias, and enhancing cybersecurity.[12] The following sections define the four key pillars of Intel's recommended National AI Strategy.

A 2016 McKinsey Global Institute report found that 45 percent of work activities could potentially be automated by currently demonstrated technology, and that machine learning, a subset of AI, might enable automation of up to 80 percent of those activities.[14] Workforce disruptions are not a new phenomenon, especially in the face of productivity-enhancing technology innovations.
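Taken together, the two quoted figures imply that machine learning alone might account for the automation of roughly a third of all work activities:

    # Combining the quoted McKinsey figures; illustrative arithmetic only.
    automatable = 0.45   # share of work activities automatable with current tech
    ml_share = 0.80      # share of those that machine learning might enable

    print(f"ML-enabled share of all work activities: {automatable * ml_share:.0%}")  # 36%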

President Trump’s Executive Order on Artificial Intelligence

The order declares: “It is the policy of the United States Government to sustain and enhance the scientific, technological, and economic leadership position of the United States in AI R&D and deployment through a coordinated Federal Government strategy.” If this vision is implemented fully (a big “if”), maintaining and enhancing U.S. leadership on AI would be a major policy achievement.

For example, one of the strategic objectives set forth in the order is the following: “Enhance access to high-quality and fully traceable Federal data, models, and computing resources to increase the value of such resources for AI R&D, while maintaining safety, security, privacy, and confidentiality protections consistent with applicable laws and policies.” This is important.
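In practice, "fully traceable" data implies something like a provenance record attached to every released dataset. The order does not specify a schema, so the fields in this Python sketch are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class ProvenanceRecord:
        # Hypothetical schema; the executive order defines no such format.
        dataset_id: str
        source_agency: str
        collection_method: str
        last_modified: str
        access_policy: str
        lineage: list = field(default_factory=list)   # upstream dataset IDs

    record = ProvenanceRecord(
        dataset_id="noaa-climate-2018-v3",
        source_agency="NOAA",
        collection_method="satellite telemetry",
        last_modified="2019-01-15",
        access_policy="public, attribution required",
        lineage=["noaa-climate-2018-v2"],
    )
    print(record)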

China has a disproportionate advantage over other countries in the volume of data about human behavior available to its AI developers. That advantage stems from its intensive (and repressive) collection of data about the activities of its very large population, and from its likely theft of data from countries around the world.

One of the five principles that guide the policy set forth in the executive order includes the goal of “protecting our critical AI technologies from acquisition by strategic competitors and adversarial nations.” Indeed, doing so is essential for the long-term national security and economic well-being of the United States.

A related objective the order describes is to “[e]nsure that technical standards [developed by the federal government pursuant to the order] minimize vulnerability to attacks from malicious actors.” Protecting the integrity of AI technology from a physical and cybersecurity perspective is also essential in order to make sure that our AI systems work as intended.
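One concrete class of attacks such standards would have to address is adversarial examples: small, targeted input perturbations that flip a model's decision. A minimal sketch on a toy linear classifier; the weights and inputs are invented for illustration:

    import numpy as np

    # Toy linear classifier: score > 0 means "threat". All values are invented.
    w, b = np.array([1.5, -2.0, 0.5]), 0.1
    x = np.array([0.2, 0.4, 0.2])

    print("original score:", w @ x + b)           # -0.3: classified "no threat"

    # FGSM-style step: nudge each feature in the direction that raises the
    # score, bounded by a small epsilon, until the classification flips.
    epsilon = 0.4
    x_adv = x + epsilon * np.sign(w)
    print("adversarial score:", w @ x_adv + b)    # +1.3: small change, flipped output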

In addition, the order requires that federal agencies implementing it develop and implement an action plan, in accordance with the National Security Presidential Memorandum of February 11, 2019 (Protecting the United States Advantage in Artificial Intelligence and Related Critical Technologies) (the NSPM), to protect the advantage of the United States in AI and technology critical to United States economic and national security interests against strategic competitors and foreign adversaries.

(a) As directed by the NSPM, the Assistant to the President for National Security Affairs, in coordination with the OSTP Director and the recipients of the NSPM, shall organize the development of an action plan to protect the United States advantage in AI and AI technology critical to United States economic and national security interests against strategic competitors and adversarial nations.

In addition to protecting technology, agencies must focus on protecting the people who have relevant AI expertise, and on making sure that our immigration policies allow us to attract, educate, and retain the best AI minds in the world.

AI development requires access to big data, but we don't want things to get out of hand. For example, we don't want agencies making decisions on important privacy questions based on narrow-minded and overly aggressive legal analysis that fails to take into consideration the broader legal, policy, and reputational interests potentially at stake.

The order's failure to offer a better definition of AI is a real miss, and I fear it will have collateral consequences for the government's ability to adequately fund research and development of AI systems, protect the rights of Americans as related to AI, and secure AI systems from hostile foreign actors.

Artificial Intelligence for Military Use and National Security

Courtney Bowman (Privacy and Civil Liberties Team Lead, Palantir), Avril Haines (Former White House Deputy National Security Advisor; Former Deputy ...

Artificial Intelligence and National Security: The Importance of the AI Ecosystem

Join the Defense Industrial-Initiatives Group and the International Security Program for a discussion on national security, artificial intelligence, and the nexus ...

The Future of Artificial Intelligence for National Security

Artificial intelligence helps us comb through mountains of data more efficiently, positioning leaders to make decisions that are more data-driven and informed.

President Barack Obama on What AI Means for National Security | WIRED

WIRED guest editor President Barack Obama, WIRED editor in chief Scott Dadich and MIT Media Lab director Joi Ito discuss the challenges of cyber security in ...

CSIS' Hunter on Artificial Intelligence and National Security

Andrew Hunter, the director of the Defense Industrial Initiatives Group at the Center for Strategic and International Studies, discusses the think tank's new report ...

Artificial Intelligence: a Silver Bullet in Cyber Security? CPX 360 Keynote

Artificial Intelligence is the Industrial Revolution of our time. Presented by Orli Gan, Head of Product Management and Product Marketing, Threat Prevention at ...

Machine Learning & Artificial Intelligence for the National Security Mission (full version)

Machine Learning and Artificial Intelligence hold the potential to revolutionize the national security mission. In this presentation from the 2017 SAP NS2 ...

CNAS 2018: Artificial Intelligence and National Security

An expert discussion on the potential consequences of advances in artificial intelligence for the national security community.

Artificial intelligence and national security (Military aspect)- Part I

Artificial intelligence's role in national security, especially from a conventional ...

The Future of Artificial Intelligence Documentary 2018
