AI News, Center for the Governance of AI

Emerj CEO at AI Governance Roundtable at the World Government Summit in Dubai

There was a roundtable on AI in education, where we were able to speak about some of the research we’ve done on AI in India and some of the education initiatives being taken by the Indian government, notably natural language processing for translating educational texts into different Indian languages.

After the roundtables, there were presentations from the UAE government about some of the AI projects it aims to undertake, as well as talks from thinkers such as Daniel Kahneman, who spoke about AI’s promise and opportunity along with his concerns.

Assembly: Building An Interdisciplinary Project From The Ground Up

by Hilary Ross What do a Navy SEAL, an open data advocate, an ethicist, a former game designer, and a software engineer have in common?

Since we’re halfway through this year’s program, we thought we’d share what we’ve been doing — for readers interested in AI ethics, participating in the next version of Assembly, or even hosting a similar program.

In the past three years, we’ve iterated on Assembly’s model and structure. This post discusses how we set up participants to succeed in this year’s program, and highlights nascent Assembly project ideas.

As one participant reflected: “What surprised and delighted me were the other types of philosophical and personal questions people asked — that I had not only not thought to ask, but caused me to learn about myself.” Given that Assembly brings participants together across disciplines, it was critical to make space to surface differing assumptions and language.

To kick things off, Professors Jonathan Zittrain and Joi Ito, the program’s faculty leads, led a mini-course on the overall state of the AI field, the design and training of AI systems, and the deployment and governance of AI systems.

“They also clearly laid out the big unanswered questions, such as loci of responsibility and agency, how AI can perpetuate and amplify harm to vulnerable groups, and why ideals like fairness, interpretability, and explainability are elusive.” Other speakers included Professor Krzysztof Gajos from the Harvard John A. Paulson School of Engineering and Applied Sciences, Kade Crockford from the ACLU of Massachusetts, and Miranda Bogen from Upturn.

Over the next four days — through many discussions, post-it exercises, whiteboard drawings, and feedback sessions — each group explored their problem space and began to generate more specific ideas for a prototype, provocation, or intervention.

Is Artificial Intelligence Good for Our Health?

Earlier this year, the Health Tech Working Group (HTWG) at the Berkman Klein Center released a paper entitled “A User-Focused Transdisciplinary Research Agenda for AI-Enabled Health Tech Governance.” The paper examined the challenges of using artificial intelligence to improve healthcare, with a particular focus on data governance and security.

The paper brings together an interdisciplinary team’s perspectives on health technologies and provides a holistic approach to exploring three areas of particular interest. It raises issues around how to build transparency and accountability into these systems to make sure they are working in the best interest of users.

While most conversations around healthcare data refer only to traditional sources such as electronic health records (EHRs), the HTWG also includes in its discussion well-being data from fitness devices and other applications that can be used to assess or affect health and well-being.

Well-being data from wearables, such as sleep patterns, motion, and heart rate, is not covered by the same privacy protections as medical records, but it can be used to infer things about patients that, had they been stored in the patient’s EHR, would be covered by the Health Insurance Portability and Accountability Act (HIPAA).

In addition, user data may be divided into three broad categories: non-sensitive, sensitive, and “grey area.” This classification aims to give users and patients the greatest possible agency over how their personal data is used.
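A tiered classification like this could be operationalized in software as a simple policy check. The sketch below is purely illustrative — the data types, tier assignments, and the rule that grey-area data requires explicit consent are all assumptions for the sake of the example, not part of the HTWG paper:

```python
from enum import Enum

class Sensitivity(Enum):
    NON_SENSITIVE = "non-sensitive"
    GREY_AREA = "grey area"
    SENSITIVE = "sensitive"

# Hypothetical mapping of data types to tiers; real assignments
# would be set by regulators or the governing institution.
CLASSIFICATION = {
    "step_count": Sensitivity.NON_SENSITIVE,
    "sleep_pattern": Sensitivity.GREY_AREA,
    "heart_rate": Sensitivity.GREY_AREA,
    "ehr_diagnosis": Sensitivity.SENSITIVE,
}

def may_share(data_type: str, consents: set) -> bool:
    """Under this cautious (assumed) policy, sensitive and grey-area
    data require explicit per-type consent; non-sensitive data may be
    shared by default; unknown types default to the most protective tier."""
    tier = CLASSIFICATION.get(data_type, Sensitivity.SENSITIVE)
    if tier is Sensitivity.NON_SENSITIVE:
        return True
    return data_type in consents
```

Defaulting unknown data types to the sensitive tier reflects the paper’s user-agency framing: when the classification is uncertain, the system errs toward requiring consent.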

The term “digital nudging” emerged only recently in engineering and computer systems literature, and is defined as the “use of user-interface design elements to guide people’s behavior in digital choice environments.” Digital nudges can be personalized and driven by AI assistants or can be programmed into applications so they are seen the same way by all users.

One key difference between nudging in the physical world and a digital nudge is that — at least, theoretically — it is possible to opt-in or explicitly consent to a nudging system through personally-owned devices like web browsers or phones.

Knowledge of this kind requires a sense of proportionality (so that the AI can judge when a medical event is important enough to justify an interruption), as well as “social skills” so that an AI can communicate effectively while respecting the skills and judgment of the other team members.

Even the option of disclosing one’s data in return for a lower premium could function as an ethically questionable incentive if it results in non-disclosure being “punished.” Given the heavy reliance on data sets for AI solutions, and the fact that only a select few countries have embarked on the AI race, societies need both national and international safeguards to mitigate biases in data collection, prevent discrimination and overgeneralization of AI technologies, and ensure equitable access to data.

AI Governance Landscape

The development of artificial intelligence is well-poised to massively change the world. It's possible that AI could make life better for all of us, but many experts ...

Responsible Governance of Artificial Intelligence - The AI Lab

E. Pauwels, Director of The AI Lab, Wilson Center (DC): From smart cities and precision medicine to military intelligence, the ongoing AI revolution will be pervasive, ...

The Ethics and Governance of AI opening event, February 3, 2018

Chapter 1: 0:04 - Joi Ito Chapter 2: 1:03:27 - Jonathan Zittrain Chapter 3: 2:32:59 - Panel 1: Joi Ito moderates a panel with Pratik Shah, Karthik Dinakar, and ...

Artificial Intelligence and the Future of Government

The Centre for Public Impact is investigating the way in which artificial intelligence (AI) will change outcomes for citizens. We are interested in how the emerging ...

AI Overlords! 25% of Europeans Would Prefer Artificial Intelligence Run Government

More than a quarter of Europeans would rather have their countries' important political decisions made by artificial intelligence than their elected and unelected ...

Programming the Future of AI: Ethics, Governance, and Justice

How do we prepare court systems, judges, lawyers, and defendants to interact with autonomous systems? What are the potential societal costs to human ...

Preparing for AI: risks and opportunities | Allan Dafoe | EAG 2017 London

AI will transform employment, wealth, power, and world order, posing tremendous opportunities and risks. Allan Dafoe will reflect on how we can prepare for and ...

A Hastings Center Conversation: Artificial Intelligence

Can We Keep Artificial Intelligence From Slipping Beyond Our Control? Physicist Stephen Hawking and technology innovator Elon Musk have raised concerns ...

The Public Policy Challenges of Artificial Intelligence

A Conversation with Dr. Jason Matheny Director, Intelligence Advanced Research Projects Activity (IARPA) Eric Rosenbach (Moderator) Co-Director, Belfer ...