AI News: Explainable AI

Interesting resources related to XAI (Explainable Artificial Intelligence)

In this paper, we study two variants of pointwise robustness: the maximum safe radius problem, which for a given input sample computes the minimum distance to an adversarial example, and the feature robustness problem, which aims to quantify the robustness of individual features to adversarial perturbations.

We demonstrate that, under the assumption of Lipschitz continuity, both problems can be approximated using finite optimisation by discretising the input space, and the approximation comes with provable guarantees, i.e., the error is bounded.

The Monte Carlo tree search algorithm is applied to compute upper bounds for both games, and the Admissible A* and the Alpha-Beta Pruning algorithms are, respectively, used to compute lower bounds for the maximum safe radius and feature robustness games.
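The discretisation idea above can be illustrated with a minimal sketch (not the paper's game-based MCTS/A* algorithms): under a Lipschitz assumption, exhaustively checking a finite grid of perturbations approximates the true maximum safe radius, with the error shrinking as the grid step shrinks. The classifier and all parameters below are hypothetical toy choices for illustration only.

```python
import itertools
import numpy as np

def toy_classifier(x):
    # Hypothetical 2-D classifier: label 1 on the half-plane x[0] < 0.93, else 0.
    return int(x[0] < 0.93)

def approx_max_safe_radius(f, x0, bound=2.0, step=0.05):
    """Approximate the maximum safe radius of f at x0 by exhaustively
    testing a finite grid of perturbations (a discretised L_inf ball).
    As step -> 0 (with f suitably Lipschitz-continuous), the result
    converges to the true minimum distance to an adversarial example."""
    label = f(x0)
    best = float("inf")
    offsets = np.arange(-bound, bound + step / 2, step)
    for dx, dy in itertools.product(offsets, repeat=2):
        if f(x0 + np.array([dx, dy])) != label:
            # Grid point with a different label: an adversarial example.
            best = min(best, float(np.hypot(dx, dy)))
    return best

r = approx_max_safe_radius(toy_classifier, np.array([0.0, 0.0]))
print(round(r, 2))  # nearest grid point across the decision boundary: 0.95
```

The brute-force loop stands in for the paper's two-player game search; its cost grows exponentially with input dimension, which is precisely why the paper resorts to MCTS for upper bounds and Admissible A* / Alpha-Beta Pruning for lower bounds.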

Explanation in artificial intelligence: Insights from the social sciences

There has been a recent resurgence in the area of explainable artificial intelligence as researchers and practitioners seek to provide more transparency to their algorithms.

There exist vast and valuable bodies of research in philosophy, psychology, and cognitive science on how people define, generate, select, evaluate, and present explanations, which argue that people bring certain cognitive biases and social expectations to the explanation process.

Programming Your Way to Explainable AI @ O'Reilly AI NY 2017

Bonsai CEO Mark Hammond's talk at O'Reilly AI NY 2017: Programming Your Way To Explainable AI.

A DARPA Perspective on Artificial Intelligence

What's the ground truth on artificial intelligence (AI)? In this video, John Launchbury, the Director of DARPA's Information Innovation Office (I2O), attempts to ...

Building Explainable Machine Learning Systems: The Good, the Bad, and the Ugly

This meetup was held in New York City on 30 April. Abstract: The good news is that building fair, accountable, and transparent machine learning systems is ...

Responsible AI: Why we need Explainable AI

What if decisions impacting personal concerns (e.g., work performance, lending, education, safety) are made by AI that doesn't explain itself, and how can that ...

The rise of explainable AI

Read our full 2019 business intelligence trends report: tabsoft.co/BITrends.

Towards interpretable reliable models - Keynote Katharine Jarmul

Description In a world where machine learning can affect human lives in unprecedented ways, how can we create interpretable and accountable models?

Sally Radwan - What does Explainable AI Really Mean? [PWL NYC]

Golestan "Sally" Radwan on What Does Explainable AI Really Mean? A New Conceptualization of Perspectives ...

Explainable AI and Human Computer Interaction

Watch SEI researcher April Galyardt discuss "Explainable AI and Human Computer Interaction".

Explainable Machine Learning Models for Healthcare AI

Title: Explainable Machine Learning Models for Healthcare AI Speakers: Ankur Teredesai, Dr. Carly Eckert, Muhammad Aurangzeb Ahmad, and Vikas Kumar ...