AI News

Military readiness through AI: How technology advances help speed up our defense readiness

In July 1950, a small group of American soldiers called Task Force Smith were all that stood in the way of an advance of North Korean armor.

In fact, some senior military leaders think that AI will be more important to great power competition than military power itself.2 The military needs a strong plan now if it does not want to find itself shooting useless algorithms at its most challenging problems tomorrow.

In previous research, we have described how redefining readiness can help bring new tools and technologies to bear and provide greater insight than ever before.3 At its core, this redefinition breaks readiness assessment into three smaller tasks: understanding what capabilities are required, knowing the current status of those capabilities, and acting to improve those capabilities where needed.

Until future research breakthroughs create a general-purpose, context-aware AI, users must make informed choices about the trade-offs inherent in different AI tools.4 Perhaps the most basic trade-off is between depth of insight and model complexity, which is at the heart of any discussion of assessing military readiness.

The “as-needed” graph of capabilities can then be compared to an “as-is” graph (figure 5) created by compiling the current status of all equipment, personnel, and infrastructure from real-time data.
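As a rough illustration of that comparison (not the article's actual tooling), the sketch below checks a hypothetical "as-is" status dictionary against an "as-needed" requirement and flags shortfalls; the capability names and quantities are invented.

```python
# Minimal sketch: comparing a hypothetical "as-needed" capability picture
# against an "as-is" status picture to flag readiness gaps.
# All capability keys and quantities below are illustrative, not real data.

as_needed = {
    "airlift_sorties": 12,
    "ready_aircrews": 30,
    "runway_repair_teams": 4,
}

as_is = {
    "airlift_sorties": 9,
    "ready_aircrews": 31,
    "runway_repair_teams": 2,
}

def capability_gaps(needed: dict, current: dict) -> dict:
    """Return capabilities where current status falls short of the requirement."""
    return {
        capability: required - current.get(capability, 0)
        for capability, required in needed.items()
        if current.get(capability, 0) < required
    }

if __name__ == "__main__":
    for capability, shortfall in capability_gaps(as_needed, as_is).items():
        print(f"Shortfall in {capability}: {shortfall}")
```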

Because enemy capabilities are accounted for only through historical mission data, this method predicts future demands based on past performance and therefore cannot represent an agent-based or adaptive red force of the future.
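The historical approach can be pictured as simple trend extrapolation. The sketch below, using invented sortie counts, fits a least-squares line to past demand and projects the next year; real readiness forecasting would of course use far richer models and data.

```python
# Minimal sketch of demand forecasting from past performance.
# The annual sortie counts are hypothetical, for illustration only.

import numpy as np

years = np.array([2015, 2016, 2017, 2018, 2019])
sorties_flown = np.array([410, 455, 430, 480, 510])  # hypothetical annual demand

# Least-squares linear trend; degree 1 keeps the model deliberately simple.
slope, intercept = np.polyfit(years, sorties_flown, deg=1)
forecast_2020 = slope * 2020 + intercept

print(f"Projected 2020 demand: {forecast_2020:.0f} sorties")
```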

This second method creates a fuller, more complex picture by feeding the “as-is” picture of the current status of all assets into a scenario analysis tool that can model the full set of assigned missions.

This approach can answer questions like, “Can the C-5s reach the airfield in time?” or “Can the helicopters assigned to the mission fit the raid force’s M327 120mm mortars?” It also allows for multiple scenarios to run against the “as-is” picture of the force concurrently.

Even more importantly, this method allows for agent-based simulation to be combined with the breadth of data and variation that AI can provide, creating the most realistic depiction possible of adversary capabilities and courses of action (figure 6).
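To make the idea concrete, here is a deliberately tiny, hypothetical agent-based sketch: a red agent that adaptively strikes the least-ready target, run across many Monte Carlo variations of blue readiness. It is not the scenario-analysis tooling the article describes, only an illustration of the pattern.

```python
# Toy agent-based sketch (illustrative only): an adaptive red force attacks
# the least-defended target; blue readiness varies across Monte Carlo trials.
# Target names and probability ranges are invented.

import random

TARGETS = ["airfield", "port", "depot"]

def run_scenario(as_is_readiness: dict) -> bool:
    """Return True if blue holds the attacked target in one simulated engagement."""
    # Adaptive red agent: strike where blue readiness is lowest.
    target = min(TARGETS, key=lambda t: as_is_readiness[t])
    # Blue holds the target with probability equal to its readiness level.
    return random.random() < as_is_readiness[target]

def monte_carlo(trials: int = 10_000) -> float:
    successes = 0
    for _ in range(trials):
        # Vary the "as-is" picture to reflect uncertainty in readiness data.
        readiness = {t: random.uniform(0.5, 0.9) for t in TARGETS}
        successes += run_scenario(readiness)
    return successes / trials

if __name__ == "__main__":
    print(f"Estimated probability blue holds: {monte_carlo():.2f}")
```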

Scenario analysis tools form a large part of meeting the National Defense Strategy Commission’s recommendation that the Department of Defense “must use analytic tools that can measure readiness across this broad range of missions, from low-intensity, gray-zone conflicts to protracted, high-intensity fights.”5 In short, this more detailed approach can not only help the military be ready for the fight today but also set appropriate force posture to be ready for future fights.

A combination of general AI practices and custom considerations can help military leaders navigate this tangle of choices and chart a path to a fundamentally new readiness system.

Therefore, the first challenge is to find ways to translate a general readiness problem into specific questions suitable for AI without losing fidelity or applicability to the real-world problem at hand.

But unless that hard thinking is done up front, any solution generated by AI could be largely irrelevant to the mission problems faced in the real world.7 The best starting point when dealing with such significant volumes of data is often going to be the cloud, which allows for a single, extensible repository.

In fact, our estimates suggest that by 2020, 87 percent of AI users will get at least some of their AI capabilities from cloud-based enterprise software.8 The support available from cloud providers underlines the importance of getting all of the data in the first place.

Previous research has shown that even adoption of transformational technology can often be accomplished by focusing on the existing data that an organization has without the need for new capital investments.9 Another common problem for AI adoption is ensuring the accuracy of tools.

Even the most advanced AI tools are still tools constructed by humans and, as such, can often mirror the judgments and biases of humans.10 For example, an AI-based system for assessing a prisoner’s risk of recidivism to help aid in setting bail and sentencing turned out to have a significant built-in racial bias.

Prisoners wrongly labeled as high-risk were twice as likely to be black while those wrongly labeled as low-risk were more likely to be white.11 Since they often reflect quirks of human judgment or issues with training data, these types of biases can be hard to uncover and eliminate.
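One common way such disparities are surfaced is by auditing error rates separately for each group. The sketch below, using a few synthetic records, computes per-group false-positive rates (the "wrongly labeled as high-risk" measure cited above); it is illustrative only and not tied to any real system or dataset.

```python
# Minimal bias-audit sketch: compare false-positive rates across groups.
# The records below are synthetic and purely illustrative.

from collections import defaultdict

records = [
    # (group, predicted_high_risk, actually_reoffended)
    ("A", True, False), ("A", True, False), ("A", False, False),
    ("B", True, False), ("B", False, False), ("B", False, True),
]

def false_positive_rates(rows):
    """False-positive rate per group: P(predicted high-risk | did not reoffend)."""
    fp, negatives = defaultdict(int), defaultdict(int)
    for group, predicted_high, reoffended in rows:
        if not reoffended:
            negatives[group] += 1
            if predicted_high:
                fp[group] += 1
    return {g: fp[g] / negatives[g] for g in negatives}

if __name__ == "__main__":
    for group, rate in false_positive_rates(records).items():
        print(f"Group {group}: false-positive rate {rate:.2f}")
```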

One way to help ensure the desired accuracy of an AI system is to use participatory design, a process that includes a wide array of stakeholders, not just programmers and end-users, in the design process.12 This can help ensure a variety of perspectives are included in a simulation and that the right performance parameters are selected.

Since the enemy does not play by the same rules, it is crucial to include a “red team” dedicated to playing devil’s advocate in the design process, to avoid AI tools that are unintentionally biased toward our own strategies and therefore predict overly rosy outcomes.

If those lower-level models are wrong, they can result in serious inaccuracies in a simulation, with aircraft flying faster than possible or never running out of fuel, or ground units walking for hundreds of miles without getting tired.
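Catching such errors is often a matter of simple plausibility checks layered on top of the simulation. The sketch below shows one hypothetical form such a check could take for an aircraft model; the speed and fuel-burn thresholds are invented for illustration.

```python
# Minimal sketch of physics-plausibility checks that can catch broken
# lower-level models in a simulation (e.g., impossible speeds, unburned fuel).
# Thresholds and the sample state are illustrative, not real platform data.

from dataclasses import dataclass

@dataclass
class AircraftState:
    speed_knots: float
    fuel_kg: float
    elapsed_hours: float
    initial_fuel_kg: float

MAX_SPEED_KNOTS = 500.0        # assumed platform limit for illustration
MIN_BURN_KG_PER_HOUR = 1000.0  # assumed minimum plausible fuel burn

def validate(state: AircraftState) -> list[str]:
    """Return a list of plausibility violations for one simulated aircraft."""
    issues = []
    if state.speed_knots > MAX_SPEED_KNOTS:
        issues.append(f"speed {state.speed_knots:.0f} kt exceeds limit")
    expected_burn = MIN_BURN_KG_PER_HOUR * state.elapsed_hours
    if state.initial_fuel_kg - state.fuel_kg < expected_burn:
        issues.append("fuel consumption implausibly low")
    return issues

if __name__ == "__main__":
    print(validate(AircraftState(speed_knots=620, fuel_kg=30_000,
                                 elapsed_hours=4, initial_fuel_kg=30_500)))
```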

Readiness personnel should work with their acquisition counterparts to gather or gain access to that information for current systems and to ensure that future contracts provide access to that information for future systems.

The system will need procedures and tools for moving data from low-to-high and possibly for releasing appropriately classified data from high-to-low without revealing any important information or introducing vulnerabilities to the higher-classification networks.

This colorful printed patch makes you pretty much invisible to AI

In a paper shared last week on the preprint server arXiv, these students show how simple printed patterns can fool an AI system that’s designed to recognize people in images.

As the researchers write: “We believe that, if we combine this technique with a sophisticated clothing simulation, we can design a T-shirt print that can make a person virtually invisible for automatic surveillance cameras.” (They don’t mention it, but this is, famously, an important plot device in the sci-fi novel Zero History by William Gibson.) This may seem bizarre, but it’s actually a well-known phenomenon in the AI world.

They could be used to fool self-driving cars into reading a stop sign as a lamppost, for example, or they could trick medical AI vision systems that are designed to identify diseases.

It doesn’t even work against off-the-shelf computer vision systems developed by Google or other tech companies, and, of course, it doesn’t work if a person is looking at the image.
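For readers curious what an adversarial example looks like in code, the sketch below applies the well-known fast gradient sign method to a generic pretrained PyTorch classifier. It is not the printed-patch attack from the paper, which targets a person detector; it only illustrates how a small, crafted perturbation can change a model's prediction.

```python
# Minimal adversarial-example sketch (fast gradient sign method), illustrative only.
# Requires torch and torchvision; downloads pretrained ResNet-18 weights.

import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained classifier stands in for "an AI system designed to recognize" objects.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Random tensor stands in for a real photo (batch, channels, height, width).
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
label = logits.argmax(dim=1)             # attack the model's own current prediction
F.cross_entropy(logits, label).backward()

epsilon = 0.03                           # perturbation budget: small, hard to notice
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", label.item())
print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```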

Artificial Intelligence and Autonomous Systems Legal Update (1Q19)

On February 11, 2019, President Trump signed an executive order ("EO") titled "Maintaining American Leadership in Artificial Intelligence."[4]  The purpose of the EO was to spur the development and regulation of artificial intelligence, machine learning and deep learning and to fortify the United States' global position by directing federal agencies to prioritize investments in AI,[5] a move interpreted by many observers as a response to China's recent efforts to claim a leadership position in AI research and development.[6]  Observers particularly noted that many other countries preceded the United States in rolling out a national AI strategy.[7]  In an apparent response to these concerns, the Trump administration warned in rolling out the campaign that "as the pace of AI innovation increases around the world, we cannot sit idly by and presume that our leadership is guaranteed."[8]

To secure U.S. leadership, the EO prioritizes five key areas:

(1) Investing in AI Research and Development ("R&D"): encouraging federal agencies to prioritize AI investments in their "R&D missions" to encourage "sustained investment in AI R&D in collaboration with industry, academia, international partners and allies, and other non-federal entities to generate technological breakthroughs in AI and related technologies and to rapidly transition those breakthroughs into capabilities that contribute to our economic and national security."[9]

(2) Unleashing AI Resources: making federal data and models more accessible to the AI research community by "improv[ing] data and model inventory documentation to enable discovery and usability" and "prioritiz[ing] improvements to access and quality of AI data and models based on the AI research community's user feedback."[10]

(3) Setting AI Governance Standards: aiming to foster public trust in AI by using federal agencies to develop and maintain approaches for the safe and trustworthy creation and adoption of new AI technologies (for example, the EO calls on the National Institute of Standards and Technology ("NIST") to lead the development of appropriate technical standards).[11]

(4) Building the AI Workforce: asking federal agencies to prioritize fellowship and training programs to prepare for changes relating to AI technologies and promoting Science, Technology, Engineering and Mathematics education.[12]

(5) International Engagement and Protecting the United States' AI Advantage: calling on agencies to collaborate with other nations but also to protect the nation's economic security interest against competitors and adversaries.[13]

AI developers will need to pay close attention to the executive branch's response to standards setting.  The primary concern for standards sounds in safety, and the AI Initiative echoes this with a high-level directive to regulatory agencies to establish guidance for AI development and use across technologies and industrial sectors, highlighting the need to establish "appropriate technical standards and reduce barriers to the safe testing and deployment of AI technologies"[14] and to "foster public trust and confidence in AI technologies."[15]  However, the AI Initiative is otherwise vague about how the program plans to ensure that responsible development and use of AI remain central throughout the process, and about the extent to which AI policy researchers and stakeholders (such as academic institutions and nonprofits) will be invited to participate.  The EO announces that NIST will take the lead in standards setting.
The Director of NIST shall "issue a plan for Federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies" with participation from relevant agencies as the Secretary of Commerce shall determine.[16]  The plan is intended to include "Federal priority needs for standardization of AI systems development and deployment," the identification of "standards development entities in which Federal agencies should seek membership with the goal of establishing or supporting United States technical leadership roles," and "opportunities for and challenges to United States leadership in standardization related to AI technologies."[17]

Observers have criticized the EO for its lack of actual funding commitments, its precatory language, and its failure to address immigration issues for AI firms looking to retain foreign students and hire AI specialists.[18]  For example, unlike the Chinese government's commitment of $150 billion for AI prioritization, the EO adds no specific expenditures, merely encouraging certain offices to "budget" for AI research and development.[19]  To begin to close this gap, on April 11, 2019, Congressmen Dan Lipinski (IL-3) and Tom Reed (NY-23) introduced the Growing Artificial Intelligence Through Research (GrAITR) Act to establish a coordinated federal initiative aimed at accelerating AI research and development for U.S. economic and national security.

On March 19, 2019, the White House launched ai.gov as a platform to share AI initiatives from the Trump administration and federal agencies.[21]  These initiatives track along the key points of the AI EO, and ai.gov is intended to function as an ongoing press release.  Presently, the website includes five key domains for AI development: the Executive Order on AI, AI for American Innovation, AI for American Industry, AI for the American Worker, and AI with American Values.[22]  These initiatives highlight a number of federal government efforts under the Trump administration (and some launched during the Obama administration).  Highlights include the White House's chartering of a Select Committee on AI under the National Science and Technology Council, the Department of Energy's efforts to develop supercomputers, the Department of Transportation's efforts to integrate automated driving systems, and the Food and Drug Administration's efforts to assess AI implementation in medical research.[23]

On April 10, 2019, a number of Senate Democrats introduced the Algorithmic Accountability Act, which "requires companies to study and fix flawed computer algorithms that result in inaccurate, unfair, biased or discriminatory decisions impacting Americans."[24]  The bill stands to be the United States Congress's first serious foray into the regulation of AI and the first legislative attempt in the United States to impose regulation on AI systems in general, as opposed to regulating a specific activity, such as autonomous vehicles.  While observers have noted congressional reticence to regulate AI in past years, the bill hints at a dramatic shift in Washington's stance amid growing public awareness of AI's potential to create bias or harm certain groups.[25]

The bill casts a wide net, such that many technology companies would find common practices to fall within the purview of the Act.  The Act would regulate not only AI systems but also any "automated decision system," which is broadly defined as any "computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision or facilitates human decision making, that impacts consumers."[26]  This could conceivably include crude decision tree algorithms.  For processes within the definition, companies would be required to audit for bias and discrimination and take corrective action to resolve these issues when identified.  The bill would allow regulators to take a closer look at any "[h]igh-risk automated decision system": those that involve "privacy or security of personal information of consumers[,]" "sensitive aspects of [consumers'] lives, such as their work performance, economic situation, health, personal preferences, interests, behavior, location, or movements[,]" "a significant number of consumers regarding race [and several other sensitive topics]," or "systematically monitors a large, publicly accessible physical place[.]"[27]  For these "high-risk" topics, regulators would be permitted to conduct an "impact assessment" and examine a host of proprietary aspects relating to the system.[28]  Additional regulations will be needed to give these key terms meaning but, for now, the bill is a harbinger of AI regulation that identifies key areas of concern for lawmakers.

The AI Now Institute, which examines the social implications of artificial intelligence, recently published a report on the scope and scale of the gender and racial diversity crisis in the AI sector, which also discusses how the use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation.  The report includes recommendations for improving workplace diversity (such as publishing harassment and discrimination transparency reports, changing hiring practices to maximize diversity, and being transparent around hiring, compensation, and promotion practices) and recommendations for addressing bias and discrimination in AI systems (such as implementing rigorous testing across the lifecycle of AI systems).[38]

In companion bills SB-5527 and HB-1655, introduced on January 23, 2019, Washington State lawmakers drafted a comprehensive piece of legislation aimed at governing the use of automated decision systems by state agencies, including the use of automated decision-making in the triggering of automated weapon systems.[39]  In addition to addressing the fact that eliminating algorithmic bias requires consideration of fairness, accountability, and transparency, the bills also include a private right of action.[40]  According to the bills' sponsors, automated decision systems are rapidly being adopted to make or assist in core decisions in a variety of government and business functions, including criminal justice, health care, education, employment, public benefits, insurance, and commerce,[41] and are often unregulated and deployed without public knowledge.[42]  Under the new law, in using an automated decision system, an agency would be prohibited from discriminating against an individual, or treating an individual less favorably than another, on the basis of one or more of a list of factors such as race, national origin, sex, or age.[43]  Currently, the bills remain in committee.[44]

In the UK, the world's first Centre for Data Ethics and Innovation will partner with the UK Cabinet Office's Race Disparity Unit to explore the potential for bias in algorithms in crime and justice, financial services, recruitment, and local government.[45]  The UK government explained that this investigation was necessary because of the risk that human bias will be reflected in the recommendations made by the algorithms.[46]

Police departments often use predictive algorithms for various other functions, such as helping to identify suspects.  While such technologies can be useful, there is growing awareness of the risk of biases and inaccuracies.[47]  In a paper released on February 13, researchers at the AI Now Institute found that police across the United States may be training crime-predicting AIs on falsified "dirty" data,[48] calling into question the validity of predictive policing systems and other criminal risk-assessment tools that use training sets consisting of historical data.[49]  In some cases, police departments had a culture of purposely manipulating or falsifying data under intense political pressure to bring down official crime rates.  In New York, for example, in order to artificially deflate crime statistics, precinct commanders regularly asked victims at crime scenes not to file complaints.
In predictive policing systems that rely on machine learning to forecast crime, those corrupted data points become legitimate predictors, creating "a type of tech-washing where people who use these systems assume that they are somehow more neutral or objective, but in actual fact they have ingrained a form of unconstitutionality or illegality."[50]

The autonomous vehicle ("AV") industry continues to expand at a rapid pace, with incremental developments towards full autonomy.  At this juncture, most of the major automotive manufacturers are actively exploring AV programs and conducting extensive on-road testing.  As lawmakers across jurisdictions grapple with emerging risks and the challenge of building legal frameworks and rules within existing, disparate regulatory ecosystems, common challenges are beginning to emerge that have the potential to shape not only the global automotive industry over the coming years, but also broader strategies and policies relating to infrastructure, data management and safety.

One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority

The Chinese government has drawn wide international condemnation for its harsh crackdown on ethnic Muslims in its western region, including holding as ...

Eric Weinstein: Revolutionary Ideas in Science, Math, and Society | Artificial Intelligence Podcast

Eric Weinstein is a mathematician, economist, physicist, and managing director of Thiel Capital. He formed the "intellectual dark web" which is a loosely ...

The Rise of AI

There's an AI revolution sweeping across the world. Yet few people know the real story about where this technology came from and why it suddenly took off.

MD vs. Machine: Artificial intelligence in health care

Recent advances in artificial intelligence and machine learning are changing the way doctors practice medicine. Can medical data actually improve health care?

Artificial Intelligence: Friend or Foe? | David Lee | TEDxValenciaHighSchool

The ubiquity of technology in the 21st century poses a dilemma for the future of all of mankind, one that can and must be addressed now. In the Information Age, ...

2018 Isaac Asimov Memorial Debate: Artificial Intelligence

Isaac Asimov's famous Three Laws of Robotics might be seen as early safeguards for our reliance on artificial intelligence, but as Alexa guides our homes and ...

Computing for the People: Ethics and AI

Melissa Nobles, Kenan Sahin Dean of the MIT School of Humanities, Arts, and Social Sciences and a professor of Political Science, offers an introduction to a ...

The Public Policy Challenges of Artificial Intelligence

A Conversation with Dr. Jason Matheny, Director, Intelligence Advanced Research Projects Activity (IARPA); Eric Rosenbach (Moderator), Co-Director, Belfer ...

AI and Machine Learning in Medicine with Jonathan Chen

Medicine is ripe for applying AI, given the enormous volumes of real world data and ballooning healthcare costs. Professor Chen demystifies buzzwords, draws ...

Evgeny Morozov: The Geopolitics Of Artificial Intelligence

Artificial intelligence has rapidly emerged as a topic of immense interest not just for economists and entrepreneurs but also for observers and practitioners of ...