AI News: White House Proposes Binding AI Principles for Regulators

Consultation on the OPC’s Proposals for ensuring appropriate regulation of artificial intelligence

The June 2019 G20 Ministerial Statement on Trade and Digital Economy committed to a human-centered approach to AI, recognizing the need to continue to promote the protection of privacy and personal data consistent with applicable frameworks.Footnote 1 As well, a 2019 report by Deloitte cautions that “business and government may not have much time to act to address the perceived risks of AI before Canadians definitively turn against the new technology.”Footnote 2 Based on our own assessment, AI presents fundamental challenges to all foundational privacy principles as formulated in PIPEDA.

Some have pointed out that AI systems generally rely on large amounts of personal data to train and test algorithms, arguing that limiting that data could reduce the quality and utility of the output.Footnote 3 As another example, some have observed that organizations relying on AI for advanced data analytics or consequential decisions may not necessarily know ahead of time how the information processed by AI systems will be used or what insights they will discover.Footnote 4 This has led some to call into question the practicality of the purpose specification principle, which requires on the one hand “specifying purposes” to individuals at the time of collecting their information and, on the other, “limiting use and disclosure” of personal information to the purpose for which it was first collected.Footnote 5 To echo the words of the late Ian Kerr, former Canada Research Chair in Ethics, Law, and Technology, and former member of Canada’s Advisory Council on Artificial Intelligence, “we stand on the precipice of a society that increasingly interacts with machines, many of which will be more akin to agents than mere mechanical devices.”

In its 2017 guidance, Big data, artificial intelligence, machine learning and data protection, the UK Information Commissioner’s Office (ICO) distinguishes between the key terms of AI, machine learning and big data analytics, noting they are often used interchangeably but have subtle differences.Footnote 9 For example, the ICO refers to AI as a key to unlocking the value of big data, machine learning as one of the technical mechanisms that facilitates AI, and big data analytics as the sum of both AI and machine learning processes.

Every law must be tested against fundamental rights; against the principle of democracy, in particular whether it was adopted through a legitimate procedure; and against the rule of law, meaning that it does not contradict pre-existing law and is sufficiently clear and proportional to the purpose pursued.Footnote 10 The purpose of the law ought to be to protect privacy in the broadest sense, understood as a human right in and of itself, and as foundational to the exercise of other human rights.

This human rights-based approach is consistent with the recent 2019 Resolution of Canada’s Federal, Provincial and Territorial Information and Privacy Commissioners, which notes that AI and machine learning technologies must be “designed, developed and used in respect of fundamental human rights, by ensuring protection of privacy principles such as transparency, accountability, and fairness.”Footnote 11 Likewise, the 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC) resolution on AI (2018) affirms that “any creation, development and use of artificial intelligence systems shall fully respect human rights, particularly the rights to the protection of personal data and to privacy, as well as human dignity, non-discrimination and fundamental values.”Footnote 12 The ICDPPC’s recent Resolution on Privacy as a Fundamental Human Right and Precondition for Exercising Other Fundamental Rights (2019) reaffirms a strong commitment to privacy as a right and value in itself, and calls for appropriate legal protections to prevent privacy breaches and impacts on human rights given advancements of new technologies like AI.Footnote 13 In order to ensure the protection of rights, we are of the view that PIPEDA should be given a rights-based foundation that recognizes privacy in its proper breadth and scope, and provides direction on how the rest of the Act’s provisions should be interpreted.

Note that Article 21 of the GDPR gives individuals the right to object to any profiling or other processing that is carried out on the basis of legitimate interests or on the basis of a task carried out in the public interest or official authority.Footnote 14 If the right to object to such processing is exercised, the processing may continue only if the controller can demonstrate compelling grounds that override the individual’s interests, rights and freedoms, or that the processing is necessary for the establishment, exercise or defence of legal claims.

This should include the consequences of such reasoning.”Footnote 16 In Europe, there is debate about the interpretation of the GDPR with respect to whether the law requires explanation of system functionality or the rationale for the logic, significance and consequences of specific decisions.Footnote 17 France and Hungary are among the EU Member States that guarantee a right to legibility/explanation about algorithmic decisions in their national data protection legislation.Footnote 18 For instance, the law in France provides that data subjects have the right to obtain from the controller information about the logic involved in algorithm-based processing.Footnote 19 The Government of Canada has expressed its support for algorithmic transparency.Footnote 20 In its PIPEDA white paper, the federal department of Innovation, Science and Economic Development Canada (ISED) proposes amending the law to provide for more meaningful controls and increased transparency to individuals as it relates to AI.

They suggest that a reformed PIPEDA should include “informing individuals about the use of automated decision-making, the factors involved in the decision, and where the decision is impactful, information about the logic upon which the decision is based.”Footnote 21 We believe the openness principle of PIPEDA should include a right to explanation that would provide individuals interacting with AI systems with the reasoning underlying any automated processing of their data, and with the consequences of such reasoning for their rights and interests.
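To make the idea concrete, here is a minimal sketch of what such an explanation could look like in practice: a hypothetical linear scoring model returns, alongside its decision, the factors it used, how each one moved the score, and the consequence for the individual. The feature names, weights, and threshold below are invented for illustration and are not drawn from PIPEDA, ISED’s paper, or any real system.

```python
# Hypothetical example of surfacing the "logic involved" in an automated decision.
# All feature names, weights, and the threshold are illustrative assumptions.

FEATURE_WEIGHTS = {
    "years_at_current_address": 0.8,
    "late_payments_last_year": -2.5,
    "income_to_debt_ratio": 1.6,
}
BIAS = -1.0
APPROVAL_THRESHOLD = 0.0


def decide_and_explain(applicant: dict) -> dict:
    """Return the decision plus the per-factor reasoning behind it."""
    contributions = {
        name: weight * applicant[name] for name, weight in FEATURE_WEIGHTS.items()
    }
    score = BIAS + sum(contributions.values())
    decision = "approved" if score >= APPROVAL_THRESHOLD else "declined"
    return {
        "decision": decision,
        "score": round(score, 2),
        # Each factor, its value, and how it moved the score, plus the
        # consequence of the decision for the individual.
        "factors": {
            name: {"value": applicant[name], "effect_on_score": round(c, 2)}
            for name, c in contributions.items()
        },
        "consequence": "Application routed to standard processing"
        if decision == "approved"
        else "Application declined; manual review available on request",
    }


if __name__ == "__main__":
    print(decide_and_explain({
        "years_at_current_address": 2,
        "late_payments_last_year": 1,
        "income_to_debt_ratio": 1.2,
    }))
```

A production system would obviously use a richer model and plainer language, but the essential point stands: the explanation is generated from the same computation that produced the decision, not reconstructed after the fact.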

In its Guidelines on Artificial Intelligence and Data Protection, the Council of Europe’s Consultative Committee of the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data states that: “In all phases of the processing, including data collection, AI developers, manufacturers and service providers should adopt a human rights by-design approach and avoid any potential biases, including unintentional or hidden, and the risk of discrimination or other adverse impacts on the human rights and fundamental freedoms of data subjects.”Footnote 23 “Data Protection by Design and by Default” is the meaningful title of Article 25 of the GDPR, which applies more broadly than only to AI systems.
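By way of illustration only, the sketch below shows one simple pre-deployment check a developer might run to surface hidden disparities: it computes selection rates per group and the ratio between the lowest and highest rate. The groups, decisions, and any threshold for flagging the result are hypothetical and do not reflect a legal standard under the Guidelines or the GDPR.

```python
# Illustrative disparity check on a hypothetical batch of automated decisions.

from collections import defaultdict


def selection_rates(decisions):
    """decisions: iterable of (group, approved: bool) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}


def disparity_ratio(rates):
    """Ratio of the lowest to the highest selection rate across groups."""
    return min(rates.values()) / max(rates.values())


if __name__ == "__main__":
    batch = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(batch)
    print(rates)                   # approximately {'A': 0.67, 'B': 0.33}
    print(disparity_ratio(rates))  # 0.5: a large gap that would warrant review
```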

For example, in discussing data minimization techniques in AI systems, the UK Information Commissioner’s Office (ICO) notes that “the fact that some data might later in the process be found to be useful for making predictions is not enough to establish its necessity for the purpose in question, nor does it retroactively justify its collection, use or retention.”Footnote 28 The UK ICO further notes that data can also be minimized during the training phase based on the assumption that “not all features included in a dataset will necessarily be relevant to the task.”Footnote 29 The Norwegian Data Protection Authority suggests that proactively considering data minimization supports the desirable goal of proportionality, which requires consideration of how to achieve the objective of the AI processing in a way that is the least invasive for the individual.Footnote 30 Canadian Parliamentary Committee reporting validates the merits of the principle of data minimization in the context of ethics and AI.
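As a rough illustration of minimization during the training phase, the sketch below screens out features that show no meaningful statistical relationship to the outcome before any model is trained on them. The dataset, feature names, and the 0.1 correlation cut-off are hypothetical; a real system would use more robust relevance tests and document the rationale for each retained feature.

```python
# Illustrative feature screening as a data minimization step (hypothetical data).

import math


def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0


def minimize_features(rows, feature_names, labels, threshold=0.1):
    """Keep only features whose absolute correlation with the label exceeds the threshold."""
    kept = []
    for i, name in enumerate(feature_names):
        column = [row[i] for row in rows]
        if abs(pearson(column, labels)) >= threshold:
            kept.append(name)
    return kept


if __name__ == "__main__":
    features = ["account_tenure_months", "postal_code_digit", "payment_history_score"]
    rows = [[34, 7, 0.9], [22, 1, 0.4], [45, 3, 0.8], [29, 9, 0.2]]
    labels = [1, 0, 1, 0]
    # postal_code_digit shows no relationship to the label and is dropped.
    print(minimize_features(rows, features, labels))
```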

Specifically, in June 2019, the ETHI Committee recommended that the government modernize Canada’s privacy laws and commit “to uphold data minimization, de-identification of all personal information at source when collected for research or similar purpose and clarify the rules of consent regarding the exchange of personal information between government departments and agencies.”Footnote 31 Purpose specification and data minimization remain complex issues, and the potential challenges of adhering to these legal principles in an AI context merit discussion of whether there is reason to explore alternative grounds for processing.

In other laws, such as the GDPR, consent is only one legal ground for processing among many.Footnote 32 Alternative grounds for processing under the GDPR include when processing is necessary for the performance of a task carried out in the public interest, and when the processing is necessary for the purposes of the “legitimate interests” pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject (in particular where the data subject is a child).

For example, Australia’s Privacy Act 1988 will not apply to information that has undergone de-identification so long as there is no reasonable likelihood of re-identification occurring.Footnote 35 Similarly, Hong Kong’s privacy law does not treat anonymized data as personal data so long as the individuals concerned cannot be directly or indirectly identified.Footnote 36 Japan’s regime differs substantially in that its Act on the Protection of Personal Information applies to the category of “anonymously processed information,” and sets out obligations for organizations that anonymize data and/or use anonymized data (including notice).Footnote 37 Under this Act, consent is not required for use or disclosure of anonymously processed data.
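The following sketch illustrates, in simplified form, what de-identification with a re-identification check might look like: direct identifiers are stripped, quasi-identifiers are generalized, and the result is tested for k-anonymity. The field names, generalization rules, and the choice of k = 3 are assumptions for illustration, not the legal tests applied in Australia, Hong Kong, or Japan.

```python
# Illustrative de-identification step with a simple k-anonymity check.

from collections import Counter

DIRECT_IDENTIFIERS = {"name", "email"}
QUASI_IDENTIFIERS = ("age", "postal_code")


def de_identify(record):
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    decade = (record["age"] // 10) * 10
    out["age"] = f"{decade}-{decade + 9}"                    # generalize to a 10-year band
    out["postal_code"] = record["postal_code"][:3] + "***"   # truncate the postal code
    return out


def is_k_anonymous(records, k=3):
    """Every combination of quasi-identifiers must appear at least k times."""
    groups = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in records)
    return all(count >= k for count in groups.values())


if __name__ == "__main__":
    raw = [
        {"name": "A. Smith", "email": "a@example.com", "age": 34, "postal_code": "K1A0B1", "diagnosis": "x"},
        {"name": "B. Jones", "email": "b@example.com", "age": 37, "postal_code": "K1A9Z9", "diagnosis": "y"},
        {"name": "C. Lee", "email": "c@example.com", "age": 31, "postal_code": "K1A4C4", "diagnosis": "z"},
    ]
    cleaned = [de_identify(r) for r in raw]
    print(cleaned)
    print("k-anonymous (k=3):", is_k_anonymous(cleaned, k=3))
```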

For example, the OECD Principles on Artificial Intelligence state that “AI actors should ensure traceability, including in relation to datasets, processes and decisions made during the AI system lifecycle, to enable analysis of the AI system’s outcomes and responses to inquiry, appropriate to the context and consistent with the state of art.”Footnote 40 The Institute of Electrical and Electronics Engineers (IEEE) notes that: “Technologists and corporations must do their ethical due diligence before deploying A/IS [Artificial Intelligence Systems] technology… Similar to a flight data recorder in the field of aviation, algorithmic traceability can provide insights on what computations led to questionable or dangerous behaviors.”
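A minimal sketch of the flight-data-recorder analogy follows: each automated decision is appended to a hash-chained log recording the model version, inputs, and output, so the computations behind a contested outcome can be reconstructed later. The field names and the chaining scheme are illustrative assumptions rather than a format required by the OECD or the IEEE.

```python
# Illustrative append-only decision log for algorithmic traceability.

import hashlib
import json
import time


class DecisionLog:
    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64

    def record(self, model_version, inputs, output):
        entry = {
            "timestamp": time.time(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "prev_hash": self._last_hash,
        }
        # Chain each entry to the previous one so retroactive edits are detectable.
        entry_hash = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["entry_hash"] = entry_hash
        self._last_hash = entry_hash
        self.entries.append(entry)
        return entry


if __name__ == "__main__":
    log = DecisionLog()
    log.record("credit-model-1.3", {"income_to_debt_ratio": 1.2},
               {"decision": "declined", "score": -0.4})
    print(json.dumps(log.entries, indent=2))
```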

Data lineage can be represented visually to trace how the data moves from its source to its destination, how the data gets transformed along the way, where it interacts with other data, and how the representations change. The same guidance explains a “data provenance” record as allowing “an organisation to ascertain the quality of the data based on its origin and subsequent transformation, trace potential sources of errors, update data, and attribute data to their sources.” France’s data protection authority (the Commission nationale de l’informatique et des libertés, CNIL) has recommended the development of a “national platform” for algorithmic auditing.Footnote 43 This proposal is in line with the proposed US Algorithmic Accountability Act (AAA), which would give the US Federal Trade Commission (FTC) new powers to require companies to assess their machine learning systems for bias and discrimination.Footnote 44 Regulations to be adopted by the FTC within two years of the law coming into force would require organizations to conduct automated decision impact assessments and data protection impact assessments, “if reasonably possible,” in consultation with third parties, including independent auditors and independent technology experts.
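To illustrate the lineage and provenance concepts, the sketch below keeps a simple provenance record alongside a dataset: every transformation is logged with a description and the original source, so errors can be traced back and the data attributed to its origin. The dataset name and transformation steps are hypothetical.

```python
# Illustrative data provenance record tracking transformations from source onward.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class ProvenanceRecord:
    source: str
    steps: List[str] = field(default_factory=list)

    def note(self, description: str) -> None:
        self.steps.append(description)


def transform(data, fn: Callable, description: str, provenance: ProvenanceRecord):
    """Apply a transformation and record it in the provenance trail."""
    provenance.note(description)
    return fn(data)


if __name__ == "__main__":
    prov = ProvenanceRecord(source="survey_2019.csv")
    data = [{"age": 34, "score": 0.9}, {"age": None, "score": 0.4}]
    data = transform(data, lambda d: [r for r in d if r["age"] is not None],
                     "dropped rows with missing age", prov)
    data = transform(data, lambda d: [{**r, "age_band": (r["age"] // 10) * 10} for r in d],
                     "added 10-year age bands", prov)
    print(prov)  # the source plus the ordered list of transformations applied
```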

They could challenge mischaracterizations and erroneous inferences that led to their scores.”Footnote 46 As well, ISED’s PIPEDA reform paper recommends “Ensuring the accuracy and integrity of information about an individual throughout the chain of custody by requiring organizations to communicate changes or deletion of information to any other organization to whom it has been disclosed.”Footnote 47 Considering these expert views, and given that the ability to trace, analyze and validate AI system outcomes is essential if individuals are to exercise existing access and correction rights as well as the improved human rights protections of a reformed PIPEDA, we recommend including an algorithmic traceability requirement for AI systems.

The International Technology Law Association’s Responsible AI: a Global Policy Framework aptly captures why humans must remain responsible: “even if AI might force us to reconsider the accountability of certain actors, it should be done in a way that shifts liability to other human actors and not to the AI systems themselves (…) Holding AI systems directly liable runs the risk of shielding human actors from responsibility and reducing the incentives to develop and use AI responsibly.”Footnote 50 The significant risks posed to privacy and human rights by AI systems demand a proportionally strong regulatory regime.

Artificial Intelligence and Machine Learning in Software as a Medical Device

Artificial intelligence and machine learning technologies have the potential to transform health care by deriving new and important insights from the vast amount of data generated during the delivery of health care every day.

Real-world examples of artificial intelligence and machine learning technologies are already in use. Adaptive artificial intelligence and machine learning technologies differ from other software as a medical device (SaMD) in that they have the potential to adapt and optimize device performance in real time, continuously improving health care for patients.

The ideas described in the discussion paper leverage practices from our current premarket programs and rely on IMDRF’s risk categorization principles, the FDA’s benefit-risk framework, risk management principles described in the software modifications guidance, and the organization-based total product lifecycle approach (also envisioned in the Digital Health Software Precertification (Pre-Cert) Program).

This plan would include the types of anticipated modifications, referred to as the “Software as a Medical Device Pre-Specifications,” and the associated methodology for implementing those changes in a controlled manner that manages risks to patients, referred to as the “Algorithm Change Protocol.” In this approach, the FDA would expect a commitment from manufacturers on transparency and real-world performance monitoring for artificial intelligence and machine learning-based software as a medical device, as well as periodic updates to the FDA on what changes were implemented as part of the approved pre-specifications and the algorithm change protocol.
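Purely as an illustration of the two artifacts described above, the sketch below represents hypothetical pre-specifications and an algorithm change protocol as simple structured records, together with a check that a proposed change falls within the anticipated scope. The field names and values are assumptions, not an FDA-prescribed format.

```python
# Hypothetical representation of SaMD Pre-Specifications and an Algorithm Change
# Protocol; the structure and contents are illustrative assumptions only.

samd_pre_specifications = {
    "anticipated_modifications": [
        {"type": "performance", "description": "Retrain on new imaging data from the same intended population"},
        {"type": "inputs", "description": "Accept an additional compatible scanner model"},
    ],
}

algorithm_change_protocol = {
    "data_management": "New training data curated and labelled under the existing procedure",
    "retraining": "Same architecture and hyperparameter search space as the cleared version",
    "performance_evaluation": "Sensitivity and specificity on a locked hold-out set must not regress",
    "update_procedure": "Staged rollout with real-world performance monitoring and rollback criteria",
    "transparency": "Periodic report describing changes made under the pre-specifications",
}


def change_is_within_prespec(change_type: str) -> bool:
    """Only anticipated change types may follow the change protocol; others would need a new submission."""
    allowed = {m["type"] for m in samd_pre_specifications["anticipated_modifications"]}
    return change_type in allowed


if __name__ == "__main__":
    print(change_is_within_prespec("performance"))   # True
    print(change_is_within_prespec("intended_use"))  # False: outside the pre-specifications
```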

What the SFO Looks For in a Compliance Programme—New Guidance Published January 2020

Background

On 17 January 2020, the Serious Fraud Office (“SFO”) published the latest chapter from its internal Operational Handbook (the “Handbook”, directed at SFO staff), entitled “Evaluating a Compliance Programme” (the “Guidance”).

Underscoring the limits of the Guidance, the preamble notes that it is not published for the purpose of providing legal advice and “should not therefore be relied on as the basis for any legal advice or decision”.

In particular, the SFO considers the following:

Investigating a Compliance Programme

While the DOJ ECCP and the Guidance both focus on their respective assessments of a company’s compliance programme, the Guidance expands on the investigation process, including the sources of information investigators may request and the tools they may use at different stages of an investigation.

The Guidance also advises SFO investigators that compliance issues should be “considered as part of the overall investigation strategy”, highlighting the importance of compliance programme assessment as a necessary element of an SFO investigation.

The Guidance states that, as individual cases differ, it does not prescribe a particular approach and that it is important for SFO investigators to “maintain [an] open investigative mind-set, testing and corroborating evidence from a number of sources”.

Similar to the ECCP, the Guidance indicates that the principles are not prescriptive, but flexible and outcome-focused, and summarises the Six Principles as follows:

Overall, the new Guidance is a welcome addition to the limited selection of information available regarding the UK Bribery Act, and provides companies with some additional insight into the overall framework through which SFO investigators will evaluate company compliance programmes.

The Guidance is most useful in providing greater insight into specific time periods, tools, and methodology that the SFO will focus on when assessing a company’s procedures, and in explaining the different rationales behind evaluation of compliance programmes at different stages.

As Bribery Act enforcement in the UK continues to evolve, and the SFO gains more experience in reviewing and evaluating compliance programmes, it will have further opportunities to provide more complex and sophisticated analyses of compliance programmes and more insight into its expectations of “adequate procedures”.

Stanford HAI 2019 Fall Conference - AI Global Governance (CS+Social Good)

Phil Beaudoin, SVP Research Group and Co-Founder, Element AI; Peter Cihon, Research Affiliate, Center for the Governance of AI, Future of Humanity Institute, ...

Stanford HAI 2019 Fall Conference - Fairness of AI in the Provision of Legal Services

Nicolas Economou, Chairman and Chief Executive Officer, H5; Chair, Law Initiative, The Future Society; Chair, Law Committee, IEEE Global Initiative; Sharad ...

The Future of Work

November 26, 2019 | Today's world economies face a growing array of challenges that present both ..

Day 1: Uber Elevate Summit 2019 | Uber


Livestream Day 2: Stage 2 (Google I/O '18)

This livestream covers all of the Google I/O 2018 day 2 sessions that take place on Stage 2. Stay tuned for technical sessions and deep dives into Google's latest ...

2018 Building a Bridge to Credit Visibility Symposium — consumerfinance.gov


Thorium.

Thorium is an abundant material which can be transformed into massive quantities of energy. To do so efficiently requires a very ..

Live: Mark Zuckerberg Testimony to Senate Judiciary and Commerce Committees

Mark Zuckerberg is scheduled to appear before a joint hearing of the Senate Judiciary and Commerce Committees today. The hearing will focus on the use of ...

Seattle City Council: Select Budget Committee 10/2/19 Session II

Agenda: LEAD Community Panel; LEAD Department Panel; TNC "Fare Share" Plan; Office of Sustainability and Environment (OSE); Public Comment. Advance ...

Consumer Protection in an Age of Uncertainty - Day 2
