
Ethical funding for trustworthy AI: proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice

Around the same time, other reports announced the withdrawal of child welfare algorithms by several councils [6] and the suspension of the Most Serious Violence predictive system, part of the £10 million Home Office-funded National Data Analytics Solution, by West Midlands Police on the advice of its Ethics Committee [7].

Examples include diagnostic AI [11], sentencing [12], recruitment [13], loan approval [14] and chatbots designed to address mental health, including suicide [15].

The risks to human rights, and indeed to life in the case of medical uses of AI [16], increase the urgency of finding meaningful mechanisms to change the way we invest in, develop and use AI solutions.

A well-known example is that of a health care risk-prediction algorithm used on more than 200 million US citizens to identify patients who would benefit from a “high-risk care management program”.

However, it has recently been reported [19] by a JCVI committee member that the algorithm was likely to underestimate the risk to vulnerable people suffering from rare diseases, particularly younger patients.

The committee member also pointed out that the datasets used to train the model may have other significant omissions due to some groups effectively shielding and not being exposed to the virus.

Although this bias has not been verified, it does reveal the importance of transparency and of understanding how such algorithms work if they are to be used to drive healthcare policy. Indeed, it is not the first algorithm used to determine vaccine policy to be questioned in this way.

In response to these issues we have seen a significant number of high-level AI principles (outlined later), frameworks [20] and standards being developed, for example IEEE P7001 Transparency of Autonomous Systems [21] from the IEEE P7000 series [22] and ISO/IEC JTC 1/SC 42 Artificial Intelligence [23].

Yet we still see repeated investment in, and use of, systems that impact negatively on individuals and groups, despite several government ethics advisory boards and procurement guidelines [24].

The main training pipelines and education routes that an AI developer might take did not have benchmark subject statements even as of 2019 [28], which demonstrates a significant gap in how data science and AI are addressed in Higher Education.

This leaves the challenge of trying to educate developers and modellers after the fact, in a process that, without statutory legislation or professional requirements, is occurring more slowly than the rate of technological development.

The result of these myriad issues is the obvious human cost outlined thus far, plus the significant financial loss to public funds and the reputational damage caused by the withdrawal of expensive and harmful AI solutions.

The report stressed the importance of public funds being used to invest in major challenge areas (identified by Innovate UK and EPSRC) such as personalised and integrated health care.

There are many examples of funding calls having ethical requirements (outlined later) and additional monitoring; for example, value for money is usually taken into account in the funding process.

The statement would outline the actions planned by applicants to ensure their project and/or product can be deemed trustworthy and benchmarked against rigorous standards.

However, small changes within the operational funding ecosystem for AI would provide the nudge needed to begin to circumvent the problems outlined throughout this paper.
