AI News, Regulatory framework on AI

European Standards support the EU ambitions on Artificial Intelligence

This focus on AI on the part of the European Commission brings forward not only valuable questions for the continued evolution and safe deployment of this technology, but also opportunities for ensuring Europe's digital sovereignty in the future.

Standards are relevant to the evolution of AI for a variety of reasons: together, they build trust and boost innovation for all stakeholders, from European businesses and SMEs to society, the environment and policy makers.

For this reason, CEN and CENELEC are ready to support the European Commission in its work to foster the safe and sustainable adoption of AI for the well-being of the whole EU. For more information, please contact Constant KOHLER.

Artificial Intelligence and Automated Systems Legal Update (2Q21)

EDPS Call for Ban on Use of AI for Facial Recognition in Publicly Accessible Spaces

On June 8, 2021, the U.S. Senate voted 68-32 to approve the U.S. Innovation and Competition Act (S. 1260), intended to boost the country's ability to compete with Chinese technology by investing more than $200 billion in U.S. scientific and technological innovation over the next five years, listing artificial intelligence, machine learning, and autonomy as "key technology focus areas."[4] $80 billion is earmarked for research into AI, robotics, and biotechnology. Among various other programs and activities, the bill establishes a Directorate for Technology and Innovation in the National Science Foundation ("NSF"), bolsters scientific research and development pipelines, creates grants, and aims to foster agreements between private companies and research universities to encourage technological breakthroughs.

and (4) continuous monitoring and assessment of the system to ensure reliability and relevance over time.[13] The key monitoring practices identified by the GAO are particularly relevant to organizations and companies seeking to implement governance and compliance programs for AI-based systems and to develop metrics for assessing the performance of those systems. The GAO report notes that monitoring is a critical tool for several reasons: first, it is necessary to continually analyze the performance of an AI model and document findings to determine whether the results are as expected; and second, monitoring is key where a system is either being scaled or expanded, or where applicable laws, programmatic objectives, and the operational environment change over time.[14]

On May 19, 2021, Senators Rob Portman (R-OH) and Martin Heinrich (D-NM) introduced the bipartisan Artificial Intelligence Capabilities and Transparency ("AICT") Act.[15] AICT would provide increased transparency for the government's AI systems, and is based primarily on recommendations promulgated by the National Security Commission on AI ("NSCAI") in April 2021.[16] It would establish a Chief Digital Recruiting Officer within the Department of Defense, the Department of Energy, and the Intelligence Community to identify digital talent needs and recruit personnel, and would recommend that the NSF establish focus areas in AI safety and AI ethics as part of establishing new, federally funded National Artificial Intelligence Institutes.
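To make the GAO's continuous-monitoring practice concrete, below is a minimal Python sketch of one common way to operationalize it: a population stability index ("PSI") check that compares the distribution of a model's scores in production against a validation-time baseline. The function, the 0.2 alert threshold, and the sample data are illustrative assumptions, not part of the GAO framework.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions; values above ~0.2 are
    conventionally treated as meaningful drift worth documenting."""
    cuts = np.unique(np.percentile(expected, np.linspace(0, 100, bins + 1)))
    actual = np.clip(actual, cuts[0], cuts[-1])  # fold outliers into edge bins
    e_frac = np.histogram(expected, cuts)[0] / len(expected)
    a_frac = np.histogram(actual, cuts)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)  # avoid log(0) on empty bins
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Hypothetical usage: scores logged at validation time vs. this month.
baseline = np.random.default_rng(0).normal(0.0, 1.0, 5000)
live = np.random.default_rng(1).normal(0.8, 1.3, 5000)
psi = population_stability_index(baseline, live)
if psi > 0.2:
    print(f"PSI = {psi:.3f}: distribution shift detected; review the model")
else:
    print(f"PSI = {psi:.3f}: distribution stable; document and continue")
```

In practice such a check would run on a schedule, with findings documented each time, which is the "analyze and document" loop the GAO report describes.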

The AICT bill was accompanied by the Artificial Intelligence for the Military (AIM) Act.[17] The AIM Act would establish a pilot AI development and prototyping fund within the Department of Defense aimed at developing AI-enabled technologies for the military's operational needs, and would develop a resourcing plan for the DOD to enable the development, testing, fielding, and updating of AI-powered applications.[18]

As we have noted previously, companies using algorithms, automated processes, and/or AI-enabled applications are now squarely on the radar of both federal and state regulators and lawmakers focused on addressing algorithmic accountability and transparency from a consumer protection perspective.[19] The past quarter again saw a wave of proposed privacy-related federal and state regulation and lawsuits, indicative of the trend toward stricter regulation and enforcement with respect to the use of AI applications that impact consumer rights and the privacy implications of AI. As a result, companies developing and using AI are certain to be focused on these issues in the coming months, and will be tackling how to balance these requirements with further development of their technologies. We recommend that companies developing or deploying automated decision-making adopt an "ethics by design" approach and review and strengthen their internal governance, diligence and compliance policies.

On June 15, 2021, Senators Edward Markey (D-Mass.), Jeff Merkley (D-Ore.), Bernie Sanders (I-Vt.), Elizabeth Warren (D-Mass.), and Ron Wyden (D-Ore.), and Representatives Pramila Jayapal (D-Wash.), Ayanna Pressley (D-Mass.), and Rashida Tlaib (D-Mich.) reintroduced the Facial Recognition and Biometric Technology Moratorium Act, which would prohibit federal entities from using facial recognition technology and other biometric technologies—including voice recognition, gait recognition, and recognition of other immutable physical characteristics—and would block federal funds for biometric surveillance systems.[20] As we previously reported, a similar bill was introduced in both houses in the previous Congress but did not progress out of committee.[21] The legislation, which is endorsed by the ACLU and numerous other civil rights organizations, also provides a private right of action for individuals whose biometric data is used in violation of the Act (enforced by state Attorneys General), and seeks to limit local entities' use of biometric technologies by tying receipt of federal grant funding to localized bans on biometric technology.

Last year, after clearing the House, the bill did not progress in the Senate after being referred to the Committee on Commerce, Science and Transportation.[24]

In June 2021, Senator Kirsten Gillibrand (D-NY) introduced the Data Protection Act of 2021, which would create an independent federal agency to protect consumer data and privacy.[25] The main focus of the agency would be to protect individuals' privacy related to the collection, use, and processing of personal data.[26] The bill defines an "automated decision system" as "a computational process, including one derived from machine learning, statistics, or other data processing or artificial intelligence techniques, that makes a decision, or facilitates human decision making."[27] Moreover, the use of "automated decision system processing" is a "high-risk data practice" requiring an impact evaluation after deployment and a risk assessment of the system's development and design, including a detailed description of the practice (its design, methodology, training data, and purpose) as well as any disparate impacts and privacy harms.[28]

The second quarter of 2021 saw new legislative proposals relating to the safe deployment of autonomous vehicles ("AVs"). As we previously reported, federal regulation of connected and autonomous vehicles ("CAVs") has so far faltered in Congress, leaving the U.S. without a federal regulatory framework while the development of autonomous vehicle technology advances. In June 2021, Representative Bob Latta (R-OH-5) again re-introduced the Safely Ensuring Lives Future Deployment and Research Act ("SELF DRIVE Act") (H.R. 3711), which would create a federal framework to assist agencies and industries in deploying AVs around the country and would establish a Highly Automated Vehicle Advisory Council within the National Highway Traffic Safety Administration ("NHTSA").

Further, NHTSA issued a Standing General Order on June 29, 2021 requiring manufacturers and operators of vehicles equipped with certain automated driving systems ("ADS")[31] to report certain crashes to NHTSA, enabling the agency to exercise oversight of potential safety defects in AVs operating on publicly accessible roads.[32]

Finally, NHTSA extended the period for public comments in response to its Advance Notice of Proposed Rulemaking ("ANPRM"), "Framework for Automated Driving System Safety," until April 9, 2021.[33] The ANPRM acknowledged that NHTSA's previous AV-related regulatory notices "have focused more on the design of the vehicles that may be equipped with an ADS—not necessarily on the performance of the ADS itself."[34] To that end, NHTSA sought input on how to approach a performance evaluation of ADS through a safety framework, and specifically whether any test procedure for any Federal Motor Vehicle Safety Standard ("FMVSS") should be replaced, repealed, or modified for reasons other than considerations relevant only to ADS. NHTSA noted that "[a]lthough the establishment of an FMVSS for ADS may be premature, it is appropriate to begin to consider how NHTSA may properly use its regulatory authority to encourage a focus on safety as ADS technology continues to develop," emphasizing that its approach will favor flexible "performance-oriented approaches and metrics" over rule-specific design characteristics or other technical requirements.[35]

On April 21, 2021, the European Commission ("EC") presented its much-anticipated comprehensive draft of an AI Regulation (also referred to as the "Artificial Intelligence Act").[36] It remains uncertain when and in which form the Artificial Intelligence Act will come into force, but recent developments underscore that the EC has set the tone for upcoming policy debates with this ambitious new proposal. We stand ready to assist clients with navigating the potential issues raised by the proposed EU regulations as we continue to closely monitor developments in that regard.

Ethical funding for trustworthy AI: proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice

Around the same time, other reports announced the withdrawal of child welfare algorithms by several councils [6] and the suspension of the Most Serious Violence predictive system, part of the £10 million Home Office-funded National Data Analytics Solution, by West Midlands Police on the advice of its Ethics Committee [7].

diagnostic AI [11], sentencing [12], recruitment [13], loan approval [14] or chatbots designed to address mental health, including suicide [15].

The risks to human rights, and indeed to life in the case of medical uses of AI [16], increase the urgency of finding meaningful mechanisms to change the way we invest in, develop and use AI solutions.

A well-known example is that of a health care risk-prediction algorithm used on more than 200 million US citizens to identify patients who would benefit from a "high-risk care management program". The algorithm used patients' past health care costs as a proxy for health need, which led it to systematically underestimate the needs of Black patients, who on average incurred lower costs at the same level of illness.
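To illustrate the mechanism, the following is a hypothetical sketch (the group labels, cost model and enrolment threshold are invented for illustration): two groups have identical distributions of health need, but one incurs lower recorded costs at equal need, so selecting patients by a cost proxy under-enrols that group.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
group = rng.integers(0, 2, n)    # two hypothetical patient groups
need = rng.gamma(2.0, 1.0, n)    # true health need, identical across groups
# Illustrative assumption: group 1 incurs lower costs at equal need
# (e.g. poorer access to care), making cost a biased proxy for need.
cost = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0.0, 0.1, n)

# Enrol the top 3% by (proxy) cost, as a cost-trained model would.
selected = cost >= np.quantile(cost, 0.97)
for g in (0, 1):
    print(f"group {g}: {selected[group == g].mean():.2%} enrolled")
# Despite identical need, group 1 is enrolled far less often.
```

No model training is needed to see the effect: the bias enters through the choice of target variable, not the learning algorithm.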

However, it has recently been reported [19] by a JCVI committee member that the algorithm was likely to underestimate the risk to vulnerable people suffering from rare diseases, particularly younger patients.

The committee member also pointed out that the datasets used to train the model may have other significant omissions due to some groups effectively shielding and not being exposed to the virus.

Although this bias has not been verified, it does reveal the importance of transparency and of understanding how such algorithms work if they are to be used to drive healthcare policy. Indeed, it is not the first algorithm used to determine vaccine policy to be questioned in this way.

In response to these issues we have seen a significant number of high-level AI principles (outlined later), frameworks [20] and standards being developed, for example IEEE P7001 Transparency of Autonomous Systems [21] from the IEEE P7000 series [22] and ISO/IEC JTC 1/SC 42 Artificial Intelligence [23].

Nevertheless, we still see repeated investment in and use of systems that negatively impact individuals and groups, despite several government ethics advisory boards and procurement guidelines [24].

The main training pipelines and education routes that an AI developer might take had, even as of 2019, no benchmark subject statements [28], which demonstrates a significant gap in how data science and AI are addressed in Higher Education.

This leaves the challenge of trying to educate developers and modellers after the fact, in a process that, without statutory legislation or professional requirements, is proceeding more slowly than the technology is developing.

The result of this myriad of issues is the obvious human cost we have outlined thus far, plus the significant financial loss to public funds and the reputational damage caused by the withdrawal of expensive and harmful AI solutions.

The report stressed the importance of using public funds to invest in major challenge areas (identified by Innovate UK and EPSRC), such as personalised and integrated health care.

There are many examples of funding calls with ethical requirements (these are outlined later) and additional monitoring; for example, value for money is usually taken into account in the funding process.

The statement would outline the actions planned by applicants to ensure that their project and/or product can be deemed trustworthy and benchmarked against rigorous standards.

However, small changes within the operational ecosystem of AI funding would provide the nudge needed to begin to address the problems outlined throughout this paper.