AI News

AI Safety: Charting out the High Road

This past year, revelations have emerged about the plight of Muslim Uighurs in China, including massive-scale detentions and human rights violations against this ethnic minority of the Chinese population.

Last month, additional classified Chinese government cables revealed that this policy of oppression was powered by artificial intelligence (AI): that algorithms fueled by massive data collection of the Chinese population were used to make decisions regarding detention and treatment of individuals.

With the United States, Russia, and China all signaling that AI is a transformative technology central to their national security strategies, and with their militaries planning to move quickly on military applications of AI, should this development raise the same kinds of concerns as China’s use of AI against its own population?

In an era where Russia targets hospitals in Syria with airstrikes in blatant violation of international law, and indeed of basic humanity, could AI be used unethically to conduct war crimes more efficiently?

In an era where the technology of AI can so easily be exploited by governments to violate the principles of humanity, the United States can demonstrate that the high road is possible, but to do so it needs to keep its promises: to address safety risks intrinsic to AI and to search for ways to use AI for good.

For example, all military systems are subject to test and evaluation activities to ensure that they are reliable and safe, as well as legal reviews to ensure they are consistent with international humanitarian law (e.g., the Geneva Conventions).

The U.S. military has been busy supporting the Defense Innovation Board’s development of AI ethics principles, and the Joint AI Center has also emphasized the critical role ethics plays in AI applications, yet the pursuit of safety (for example, avoiding civilian casualties, friendly fire, and inadvertent escalation) has not received the same sort of attention.

From this we see two types of safety risks: those associated with the technology of AI in general (e.g., fairness and bias, unpredictability and unexplainability, cyber security and tampering), and those associated with specific military applications of AI (e.g., lethal autonomous systems, decision aids).

The second type of risk, tied to specific military missions, is a military problem with a military solution, obtained through experimentation, research, and concept development aimed at promoting effectiveness alongside safety.

The United Nations Convention on Certain Conventional Weapons, a forum that considers restrictions on the design and use of weapons in light of the requirements of international humanitarian law, has held discussions regarding lethal autonomous weapon systems since 2014.

The U.S. position paper in 2017 emphasized that, in contrast to the concerns of some over the legality of autonomous weapons, such weapons held promise for upholding the law and better protecting civilians in war.

This was a sincere position: Several of us on the delegation were also involved in drafting the U.S. executive order on civilian casualties, which contained a policy commitment to make serious efforts to reduce civilian casualties in U.S. military operations.

Based on analysis of the underlying causes of over 1,000 incidents, AI technologies could be used to better avoid civilian harm in a number of concrete ways that promote civilian protection in conflict.

For example, many countries lament the frequency of military attacks on hospitals in recent operations; in 2016 the UN Security Council unanimously passed a resolution to promote the protection of medical care in conflict.

These developments then enabled the United States to take additional steps to promote safety in the form of reduced civilian casualties: developing and fielding new types of munitions for precision engagements with reduced collateral effects, developing intelligence capabilities for more accurately identifying and locating military targets, and creating predictive tools to help estimate and avoid collateral damage.

Artificial intelligence in schools: an ethical storm is brewing

Last week the Australian Government Department of Education released a world-first research report into artificial intelligence and emerging technologies in schools.

As the project lead, and someone interested in carefully incubating emerging technologies in educational settings to develop an authentic evidence base, I relished the opportunity to explore the often-overlooked ethical aspects of introducing new tech into schools.

What we didn’t envisage was how artificial intelligence would become invisibly infused into the computing applications we use in everyday life such as internet search engines, smartphone assistants, social media tagging and navigation technology, and integrated communication suites.

Artificial intelligence is an umbrella term that refers to a machine or computer program that can undertake tasks or activities that require features of human intelligence such as planning, problem solving, recognition of patterns, and logical action.

Interestingly, adults and children often overestimate the intelligence and capability of machines, so it is important to understand that right now we are in a period of ‘narrow AI’, which can perform a single or focused task, sometimes in ways that outperform humans.

There is also some (very concerning) talk of integrating facial recognition technology into classrooms to monitor the ‘mood’ and ‘engagement’ of students despite research suggesting that inferring affective states from facial expression is fraught with difficulties.

Some of the most pressing ethical issues related to AI and ML in general, and especially for education, include AI bias, where sexist, racist and other forms of discriminatory assumptions are built into the data sets used to train machine-learning algorithms and then become baked into AI systems.

In cases of deep machine learning, learning and decision-making occur autonomously with minimal human intervention, and the technical process is so complicated that even the computer scientists who created the program cannot fully explain why the machine came to the decision it did.

Biometric data collection represents a threat to the human right to bodily integrity, and such data are legally considered sensitive, requiring a very careful and fully justified position before implementation, especially with vulnerable populations such as children.

Another pressing issue is the potential lack of independent advice for educational leaders making decisions on the use of AI and ML. Regulatory capture occurs when those in policy and governance positions (including principals) become dependent on potentially conflicted commercial interests for advice on AI-powered products.

Furthermore, it is incumbent on educational bureaucracies to seek independent expert advice and to be transparent in their policies and decision-making regarding such procurement, so that school communities can trust that the technology will not do harm through biased systems or by violating teachers’ and students’ sovereignty over their data and privacy.

In the report we carefully unpack the multi-faceted ethical dimensions of AI and ML for education systems and offer the customised Education, Ethics and AI (EEAI) framework for teachers, school leaders and policy-makers, so that they can make informed decisions regarding the design, implementation and governance of AI-powered systems.

Artificial Intelligence, Ethics, and Society

Pedro Domingos and Mary Gray will speak about the ethical and societal challenges raised by the spread of AI technologies in this public event co-organized by ...

Policy for Artificial Intelligence: Ethics and Inclusion for the Algorithmic Age

This recording is the first in a series of three online events on Policy for Artificial Intelligence. This global Online Civic Debate is a 6-month campaign using ...

The Ethics and Governance of AI Course: Class 6, April 10, 2018

Joi Ito and Jonathan Zittrain co-teaching the sixth class of the Ethics and Governance of AI at the MIT Media Lab. This Spring 2018 term course is a ...

Doha Debates: Artificial Intelligence

Advocates for AI defend it as manageable, saying the risks are marginal and the rewards life-improving, empowering more people with instant information.

The Ethics and Governance of AI opening event, February 3, 2018

Chapter 1: 0:04 - Joi Ito; Chapter 2: 1:03:27 - Jonathan Zittrain; Chapter 3: 2:32:59 - Panel 1: Joi Ito moderates a panel with Pratik Shah, Karthik Dinakar, and ...

AI Governance Landscape

The development of artificial intelligence is well-poised to massively change the world. It's possible that AI could make life better for all of us, but many experts ...

Social and Emotional Artificial Intelligence

Given that emotion is a key element of human interaction, equipping artificial agents with the ability to reason about affect is a key stepping stone towards a future ...

AI, Ethics, and the Value Alignment Problem with Meia Chita-Tegmark and Lucas Perry

What does it mean to create beneficial artificial intelligence? How can we expect to align AIs with human values if humans can't even agree on what we value?

Writing the Playbook for Fair & Ethical Artificial Intelligence & Machine Learning (Google I/O'19)

Learn from Googlers who are working to ensure that a robust framework of ethical AI principles is in place, and that Google's products do not amplify or ...

Computing for the People: Ethics and AI

Melissa Nobles, Kenan Sahin Dean of the MIT School of Humanities, Arts, and Social Sciences and a professor of Political Science, offers an introduction to a ...