AI News: Regulation of Artificial Intelligence

Controlling Skynet: Regulatory Risks in Artificial Intelligence and Big Data

Financial regulators have focused, to date, on questions of process transparency, error correction, privacy, and internalized bias, even as they see promise in the ability of AI and big data to reduce lending risk and open credit markets to previously underserved populations.

At the same time, the GAO has issued two reports (in March 2018 and December 2018) promoting or recommending interagency coordination on flexible regulatory standards for nascent financial technology (“Fintech”) business models (including through “regulatory sandboxes”) and the use of alternative data in underwriting processes.

Various state Attorneys General, for example, have joined the discussion by opposing revisions to the CFPB’s policy on no-action letters due, in part, to concern over the role machine learning could play in replacing certain forms of human interaction in overseeing underwriting questions such as “what data is relevant to a creditworthiness evaluation and how each piece of data should be weighted.”

In addition, the New York Department of Financial Services (“NYDFS”) has moved perhaps as far as any regulator—albeit in the context of life insurance, rather than banking or consumer finance—by issuing two guiding principles on the use of alternative data in life insurance underwriting: (i) that insurers must independently confirm that the data sources do not collect or use prohibited criteria; and (ii) that insurers must be able to explain to consumers the specific reasons for any adverse underwriting decision, even where that decision relies on external data sources or algorithms.

Meanwhile, a study by the Federal Deposit Insurance Corporation (“FDIC”) noted that one in five financial institutions cited profitability as a major obstacle to serving underbanked consumers, but that new technologies may enable consumers whose traditional accounts are closed for profitability reasons to continue to have access to financial services.

As financial institutions increase their use of AI in marketing, underwriting, and account management activities, decision-making that is removed from—or at least less comprehensively controlled by—human interaction raises the risk of discrimination in fact patterns that courts and regulators have not previously addressed.

With respect to federal consumer financial laws, the Equal Credit Opportunity Act (“ECOA”) prohibits a person from discriminating against an applicant on a prohibited basis regarding any aspect of a credit transaction, and from making statements that would discourage, on a prohibited basis, a reasonable person from making or pursuing a credit application.

While such laws frequently protect classes similar to those covered by federal fair lending requirements, some states add protected classes such as military servicemembers, or expressly protect consumers on the basis of sexual orientation in a manner that may only be implied by federal fair lending requirements.

At a November 2018 Fintech conference on the benefits of AI, for example, Lael Brainard, a member of the Federal Reserve Board (“FRB”), noted that firms view artificial intelligence as having superior pattern recognition, potential cost efficiencies, greater accuracy in processing, better predictive power, and improved capacity to accommodate large and unstructured data sets. She cautioned, however, that AI presents fair lending and consumer protection risks because “algorithms and models reflect the goals and perspectives of those who develop them as well as the data that trains them and, as a result, artificial intelligence tools can reflect or ‘learn’ the biases of the society in which they were created.”

Brainard cited the example of an AI hiring tool trained with a data set of resumes of past successful hires that subsequently developed a bias against female applicants because the data set that was used predominantly consisted of resumes from male applicants.
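To illustrate the mechanism Brainard described, here is a minimal sketch, using entirely synthetic, hypothetical data, of how a model trained on skewed historical hiring outcomes can attach weight to a gender-correlated proxy feature even when that feature carries no job-relevant information:

```python
# Minimal sketch (hypothetical data): a classifier trained on skewed
# historical hires can learn a gender-correlated proxy as a signal,
# echoing the hiring-tool example Brainard described.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Feature 1: a genuinely job-relevant skill score.
skill = rng.normal(0, 1, n)
# Feature 2: a proxy correlated with gender (e.g., wording patterns
# in a resume), not with skill.
gender = rng.integers(0, 2, n)          # 0 = female, 1 = male
proxy = gender + rng.normal(0, 0.3, n)

# Historical labels: past hiring favored male applicants regardless
# of skill, so the "successful hire" label is itself biased.
hired = ((skill + 1.5 * gender + rng.normal(0, 1, n)) > 1.2).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)
print("coef on skill:", model.coef_[0][0])
print("coef on gender proxy:", model.coef_[0][1])  # large -> learned bias
```

The model is never shown gender directly; the bias enters entirely through the labels and the correlated proxy, which is why removing the protected attribute from the inputs is not, by itself, a cure.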

In a white paper, “Opportunities and Challenges in Online Marketplace Lending,” the Treasury Department recognized this same risk, noting that data-driven algorithms present potential risk of disparate impact in credit outcomes and fair lending violations, particularly as applicants do not have the opportunity to check and correct data points used in the credit assessment process.

Some of the lenders surveyed tested their credit models for accuracy, and all discussed testing to control for fair lending risk. Even in the absence of discriminatory intent or outcomes, AI may complicate compliance with technical aspects of federal and state fair lending requirements.
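What such fair lending testing looks like varies by institution; one common screen, sketched below with hypothetical data and group labels, is the adverse impact ratio, often compared against the “four-fifths” threshold borrowed from employment law:

```python
# Minimal sketch of one common fair lending screen: the adverse impact
# ratio (approval rate of a protected group divided by that of a
# reference group). Ratios below 0.8 are often flagged for review.
# All decisions and group labels here are hypothetical.
from collections import Counter

def adverse_impact_ratio(decisions, groups, protected, reference):
    """decisions: 1 (approved) / 0 (denied); groups: parallel labels."""
    approved = Counter(g for d, g in zip(decisions, groups) if d == 1)
    totals = Counter(groups)
    rate = lambda g: approved[g] / totals[g]
    return rate(protected) / rate(reference)

decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
groups    = ["A", "A", "B", "B", "A", "B", "A", "B", "B", "A"]
air = adverse_impact_ratio(decisions, groups, protected="A", reference="B")
print(f"adverse impact ratio: {air:.2f}")  # < 0.8 may warrant closer review
```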

Adverse action notices must contain either a statement of specific reasons for the action taken or a disclosure of the applicant’s right to receive a statement of specific reasons within 30 days, if the statement is requested within 60 days of the creditor’s notification.
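The timing rule reduces to simple date arithmetic; a hypothetical compliance check (the helper names are assumptions, not from any compliance library) might look like:

```python
# Sketch of the adverse-action timing logic described above. Dates
# and function names are hypothetical.
from datetime import date, timedelta

def request_window_open(notification: date, request: date) -> bool:
    # Applicant must request specific reasons within 60 days of notice.
    return request <= notification + timedelta(days=60)

def reasons_deadline(request: date) -> date:
    # Creditor then owes the statement of specific reasons within 30 days.
    return request + timedelta(days=30)

notice = date(2019, 7, 1)
request = date(2019, 8, 15)
if request_window_open(notice, request):
    print("statement of reasons due by", reasons_deadline(request))
```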

Financial institutions using less transparent AI systems may find it difficult to populate an appropriate list of reasons for adverse action, while those with more transparent AI systems may find themselves responding to consumer inquiries or complaints about credit decisions based on seemingly irrelevant data points in which an AI happened to find a correlation with default rates or other material considerations.
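One way institutions bridge this gap, sketched below with hypothetical feature names and reason codes rather than any regulator-mandated method, is to rank each applicant’s feature contributions to the model score and map the most negative ones to adverse action reasons:

```python
# Hypothetical sketch: derive adverse action reasons from a scoring
# model by ranking each feature's contribution to the applicant's
# score and reporting the most negative ones. Names are illustrative.
import numpy as np

REASON_CODES = {
    "utilization": "Proportion of balances to credit limits is too high",
    "delinquencies": "Number of delinquent accounts",
    "inquiries": "Too many recent credit inquiries",
    "history_len": "Length of credit history is too short",
}

def top_adverse_reasons(weights, applicant, baseline, names, k=2):
    # Contribution of each feature relative to a baseline applicant;
    # negative contributions pushed the score down.
    contrib = weights * (applicant - baseline)
    worst = np.argsort(contrib)[:k]
    return [REASON_CODES[names[i]] for i in worst]

names = ["utilization", "delinquencies", "inquiries", "history_len"]
weights = np.array([-2.0, -1.5, -0.8, 0.6])
applicant = np.array([0.9, 2.0, 5.0, 1.0])
baseline = np.array([0.3, 0.0, 1.0, 7.0])
print(top_adverse_reasons(weights, applicant, baseline, names))
```

This is straightforward for linear scorecards; for opaque models the contribution step itself requires an explanation technique, which is precisely the difficulty the paragraph above describes.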

(The Fair Credit Reporting Act (“FCRA”) also requires users of consumer reports to issue adverse action notices that include specific disclosures regarding numeric credit scores when such scores are used in deciding to take adverse action.)

FCRA: When Is “Big Data” a “Consumer Report”?

Big data also presents risks under FCRA, and such risks are amplified if AI-driven underwriting systems have access to alternative data sources without proper controls restricting the use of particular data elements.

Except as expressly exempted, a “consumer report” under FCRA is “the communication of any information by a consumer reporting agency bearing on a consumer’s creditworthiness, credit standing, credit capacity, character, general reputation, personal characteristics, or mode of living which is used or expected to be used or collected in whole or in part for determining a consumer’s eligibility for credit, employment purposes, or any other purposes enumerated in the statute.”

(The term “consumer reporting agency” somewhat circularly includes most parties who provide “consumer reports” on a for-profit or cooperative nonprofit basis, so the fact that a data source does not consider itself to be a “consumer reporting agency” is not necessarily relevant to a financial institution’s obligations when using alternative data.)

Entities that use AI algorithms for credit decisions may have difficulty providing information required in FCRA adverse action notices (such as the specific source of the consumer report and the factors affecting any credit scoring model used in underwriting credit) when it is unclear what data points make up the consumer report.

A consumer reporting agency is subject to specific legal obligations, such as obtaining certain certifications from users of consumer reports, ensuring the accuracy of consumer information, investigating consumer disputes of inaccurate information, and filtering out certain items that cannot be reported.
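The filtering obligation, for instance, includes excluding obsolete items under FCRA section 605 (most adverse items after seven years, bankruptcies after ten); a hypothetical filter, with an assumed record format rather than any real CRA schema, might look like:

```python
# Hypothetical sketch of the "filtering" obligation noted above:
# FCRA section 605 bars reporting most adverse items older than seven
# years (ten for bankruptcies). Record format and field names are
# assumptions, not a real consumer reporting agency schema.
from datetime import date

LIMITS_YEARS = {"bankruptcy": 10}   # everything else defaults to 7
DEFAULT_LIMIT = 7

def reportable(item: dict, today: date) -> bool:
    limit = LIMITS_YEARS.get(item["type"], DEFAULT_LIMIT)
    cutoff = date(today.year - limit, today.month, today.day)
    return item["date"] >= cutoff

items = [
    {"type": "late_payment", "date": date(2010, 5, 1)},
    {"type": "bankruptcy",   "date": date(2012, 3, 15)},
    {"type": "late_payment", "date": date(2018, 9, 30)},
]
today = date(2019, 7, 1)
print([i for i in items if reportable(i, today)])  # drops the 2010 item
```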

If the data used bears on FCRA-regulated characteristics (e.g., the consumer’s creditworthiness, credit standing, reputation, etc.) such that its use in credit underwriting renders the information a “consumer report,” a false representation to the data source may constitute a false certification to a consumer reporting agency for the purpose of obtaining a consumer report.

For example, the FTC and FDIC have pursued an enforcement action against a provider of credit cards to consumers with poor credit histories for alleged violations, including of the prohibition on unfair or deceptive acts or practices (“UDAP”), for failing to disclose to consumers that certain purchases that triggered the company’s risk algorithm could reduce the consumer’s credit limit.

As black-box AI systems become more prevalent, and as such systems may train themselves to use novel algorithms and approaches to underwriting and account management, financial institutions may want to consider broader disclaimers regarding the factors that may affect credit decisions and the processes that may develop new approaches to creditworthiness analysis altogether.

Finally, beyond direct concerns about violations of law and control of risk by financial institutions themselves, regulators have expressed interest in limiting the risks to which financial institutions expose themselves and consumers through partnerships with vendors who may rely on AI or big data processes.

More concretely, NYDFS has taken the position that an insurer “may not rely on the proprietary nature of a third-party vendor’s algorithmic process to justify the lack of specificity related to an adverse underwriting action,” and this expectation that institutions understand their vendors’ AI models could also apply in the context of credit underwriting.

For example, FDIC guidance discusses risks that may be associated with third-party lending arrangements, as well as its expectation that financial institutions implement a process for evaluating and monitoring vendor relationships that includes risk assessment, due diligence, contract structuring and review, and oversight.

‘Adverse impacts’ of Artificial Intelligence could pave way for regulation, EU report says

The EU should consider the need for new regulation to “ensure adequate protection from adverse impacts.” Areas of concern flagged in the report include biometric recognition, the use of lethal autonomous weapons systems (LAWS), AI systems built on children’s profiles, and the impact AI may have on fundamental rights.

However, the EU’s Digital Commissioner Mariya Gabriel downplayed talk of a hard regulatory environment for AI on Wednesday (26 June), telling EURACTIV that the Commission agrees with the report’s general recommendation to avoid “prescriptive regulation.”

EURACTIV heard from sociologist and Alliance member Mona Sloane, who said that the narrative being spun by the Commission, that regulation could effectively stifle innovation, was not being delivered entirely in good faith.

Along similar lines, Ursula Pachl, a member of the High-Level Group and Deputy Director General of BEUC, the European Consumer Organisation, said that the framing of the EU’s Artificial Intelligence debate has been skewed because “the discussion on ethics is often used as an overall cover to distract us from the core issue, which is to define the right legal framework.”

Feedback will be collected on the effectiveness of the implementations until early December, with a view to producing a revised set of recommendations in early 2020, which may then impact any prospective decision to introduce a regulatory environment for AI.

BankThink: Before they regulate AI, Congress needs to define it

The emerging debate on machine learning and artificial intelligence can sometimes sound like a science fiction externality, destined to rise in complexity until it fights humans for supremacy.

The Senate Banking Committee recently met to hear concerns about “data brokers and the impact on financial data privacy, credit, insurance, employment and housing,” where the impact of machine learning and AI was prominently featured.

While any constructive discussion to better understand these technologies is commendable, Congress and regulators need to first agree on definitions for AI and machine learning before having meaningful debates on the risks, benefits and impending future regulations.

Once research identifies a potential correlation, data scientists run rigorous regressions to observe the data attributes and build robust training models in search of viable explanations for the link.
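A minimal sketch of that workflow, with hypothetical data and variable names throughout, might look like:

```python
# Minimal sketch of the workflow described above: a candidate
# correlation is screened, then a regression model is fit and
# validated on held-out data. All data and names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 2_000
income = rng.normal(50, 15, n)
utilization = rng.uniform(0, 1, n)
default = (0.04 * (60 - income) + 3 * utilization + rng.normal(0, 1, n) > 2.5)

X = np.column_stack([income, utilization])

# Screen: is the candidate attribute even correlated with the outcome?
print("corr(utilization, default):", np.corrcoef(utilization, default)[0, 1])

# Fit and validate on held-out data before any committee/legal review.
X_tr, X_te, y_tr, y_te = train_test_split(X, default, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("held-out AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```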

The resulting model is run past a committee and legal counsel, and validated by the sponsor bank, to make sure it meets logical and ethical standards as well as the strict legal requirements for credit underwriting.

Regulators and lawmakers should encourage testing and learning in regulatory sandboxes — instead of provoking anecdotal fear and AI panic — especially as AI and machine learning continue to develop from their current state.

AI should be seen less as “intelligence” and more as “automated insights.” Considering these risks as industry guidelines for AI are created is critical, but those rules shouldn’t be written by distant officials who struggle to properly define the technology.

Artificial intelligence: Should AI be regulated by governments?

As artificial intelligence becomes more integrated with every aspect of society, concerns inspired by Hollywood sci-fi stoke fears over its rise. Experts have ...

Regulating Artificial Intelligence: How to Control the Unexplainable

The technologies we broadly call "AI" are changing industries, from finance to advertising, medicine and logistics. But the biggest hurdle to the adoption of ...

Elon Musk on Artificial Intelligence and Government Regulation

From the ISS R&D Conference on July 19, 2017, Elon Musk was asked by a government worker how they should go about tackling artificial intelligence.

Elon Musk calls for regulation of artificial intelligence

Mercatus Center senior fellow Adam Thierer on Elon Musk's warning about artificial intelligence.

Yuval Noah Harari on humanity’s divine potential and an AI arms race

Sapiens author Yuval Noah Harari discusses his new book '21 Lessons for the 21st Century', describing the threat of an artificial intelligence arms race and the ...

Are We Too Late To Regulate or Ban Artificial Intelligence?

Computer scientist Fernando Diaz highlights how regulation of A.I. could influence the everyday use of Artificial Intelligence and its impact on the whole of ...

Artificial Intelligence: GDPR and beyond - Dr. Sandra Wachter, University of Oxford

Dr. Sandra Wachter is a lawyer and Research Fellow in Data Ethics, AI, robotics and Internet Regulation/cyber-security at the Oxford Internet Institute and the ...

The Real Reason to be Afraid of Artificial Intelligence | Peter Haas | TEDxDirigo

A robotics researcher afraid of robots, Peter Haas, invites us into his world of understanding where the threats of robots and artificial intelligence lie. Before we get ...

Artificial intelligence: dream or nightmare? | Stefan Wess | TEDxZurich

This talk was given at a local TEDx event, produced independently of the TED Conferences. Artificial intelligence (AI) is a huge dream and vision for all mankind, ...

Elon Musk: We should regulate AI to keep public safe

At the International Astronautical Congress, SpaceX CEO Elon Musk talks about the rapid advancements in AI and what the government's role should be in ...