
ARTIFICIAL INTELLIGENCE UPDATE, 22 Aug 2019

Making Artificial Intelligence Ethical

Artificial intelligence promises to help solve global challenges like climate change, make driving safer, transform wildlife conservation and give us access to quality medical care.

But as the technology expands its role in our lives, and as the private and public sectors experiment with AI, complex social, ethical, legal and political questions are emerging around AI bias, privacy, 'black box' AI and the use of lethal autonomous weapons.

In Canberra (Thursday, 29 August) and Sydney (Friday, 30 August), two world-renowned UK artificial intelligence and data ethics experts, Dr Mariarosaria Taddeo and Roger Taylor, will be sharing their knowledge of the UK's experience in developing regulation and governance around AI and data ethics.

Regulation of Artificial Intelligence and Big Data in the UK

As the seat of the first Industrial Revolution, the UK has a long history of designing regulatory solutions to the challenges posed by technological change.

In the first of these reports, entitled "The Big Data Dilemma", the House of Commons Science and Technology Committee proposed a body with the remit to address "the growing legal and ethical challenges associated with balancing privacy, anonymisation, security and public benefit"[4].

As set out in the Industrial Strategy, the overriding purpose of the Centre for Data Ethics and Innovation (CDEI) is to "review the existing governance landscape and advise the government on how we can enable and ensure ethical, safe and innovative uses of data, including AI".

Data protection regulation is important here both because its subject matter overlaps most closely with algorithmic decision-making by AI, and because the Information Commissioner's Office (ICO) is one of the few regulators whose remit extends to other branches of Government, and which therefore has the ability to regulate uses of AI in the public as well as the private sector.

This is also true of the UK Equality and Human Rights Commission, the Competition and Markets Authority, the Office of Communications and a range of other sector regulators, whose remits and existing regulatory tools give them the power to intervene when the use of AI affects citizens or consumers within the territory covered by their statutory powers.

The question is whether those regulators will have the institutional capacity and expertise to use those powers in respect of AI, or will sufficiently prioritise doing so against the competing demands on their limited resources.

Although their report will carry significant weight, even if its recommendations were to be immediately accepted by the Government (which is far from certain), it would be at least two or three more years before legislation to implement them could begin to find its way onto the statute book.

While none of these bodies has the power to legislate to fill regulatory gaps that emerge, they may be expected, over time, to identify issues that Government, or existing regulatory bodies, will then be under pressure to address.

The current landscape involves pressing existing regulators into service to use their powers, none of which were designed to address the specific issues raised by AI, as the need arises, while at the same time creating new institutional capacity, in the form of the CDEI, to keep the area under review, and subjecting specific important use cases (such as autonomous vehicles) to a more detailed process of policy consideration.

However, all things considered, it is hard to avoid the truth of Jacob Turner's judgment that, despite the many fine words expressed on the subject, when it comes to the UK's regulation of AI "specific policy developments remain elusive"[10].

AI, GDPR and the limits of automated processing

CogX 2019. Speakers: Kathryn Corrick, Data Privacy Specialist, Corrick, Wales and Partners LLP; Simon McDougall; ...

Control and Responsible Innovation of Artificial Intelligence

The following event is in partnership with The Hastings Center. Artificial Intelligence is beginning to transform nearly every sector and every facet of modern life.

An ecosystem of trust: Singapore's AI Governance and Ethics Approach

Singapore's groundbreaking work in Artificial Intelligence (AI) Governance and Ethics has won a top award at the prestigious World Summit on the Information ...

How Microsoft is advancing manufacturing innovation with AI

Discover Microsoft's vision to democratize artificial intelligence (AI), and how Microsoft is helping manufacturers use AI to drive innovation. Take the AI readiness ...

Artificial Intelligence Colloquium: AI for Software Engineering

Speaker: Dr. Sandeep Neema, Program Manager, DARPA / Information Innovation Office. Despite the tremendous resources devoted to making software more ...

Ethics and Bias in Artificial Intelligence - 18th Vienna Deep Learning Meetup

The Vienna Deep Learning Meetup and the Centre for Informatics and Society of TU Wien jointly organized an evening of discussion on the topic of Ethics and ...

Emma Martinho-Truswell (UK) at Ci2019 - What everyone in your organisation needs to know about AI

"What everyone in your organisation needs to know about AI, and why AI means we need to change the way we lead" ...

Artificial intelligence poses ethical challenge to tech industry

CNET senior producer Dan Patterson reports on some of the technological and ethical challenges facing the rapidly growing field of artificial intelligence.

Data, AI and Algorithms (CXOTalk #270)

Myths and hype surround many discussions about artificial intelligence, big data, and modern algorithms. For this episode of CXOTalk, host Michael Krigsman, ...

Ethics of AI

Detecting people, optimising logistics, providing translations, composing art: artificial intelligence (AI) systems are not only changing what and how we are doing ...