AI News and EECS Events: Artificial Intelligence

Aleksander Madry on building trustworthy artificial intelligence

EDITOR'S NOTE: Machine learning algorithms now underlie much of the software we use, helping to personalize our news feeds and finish our thoughts before we’re done typing.

Aleksander Madry, an associate professor of computer science at MIT and a lead faculty member of the Trustworthy AI initiative at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), compares AI to a sharp knife: a useful but potentially hazardous tool that society must learn to wield properly. Madry recently spoke at MIT's Symposium on Robust, Interpretable AI, an event co-sponsored by the MIT Quest for Intelligence and CSAIL and held in late November 2018 in Singleton Auditorium.

Six faculty members spoke about their research, 40 students presented posters, and Madry opened the symposium with an aptly titled talk, "Robustness and Interpretability." We spoke with Madry, a leader in this emerging field, about some of the key ideas raised during the event.

Q: AI owes much of its recent progress to deep learning, a branch of machine learning that has significantly improved the ability of algorithms to pick out patterns in text, images and sounds, giving us automated assistants like Siri and Alexa, among other things.

A: Specifically, I work on developing next-generation machine-learning systems that will be reliable and secure enough for mission-critical applications like self-driving cars and software that filters malicious content. We're currently building tools to train object-recognition systems to identify what's happening in a scene or picture, even if the images fed to the model have been manipulated.
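A common recipe for this kind of robustness, closely associated with Madry's group, is adversarial training: the model is trained on worst-case perturbed inputs rather than clean ones. The sketch below is a minimal, hypothetical illustration of that general idea in PyTorch, using a projected gradient descent (PGD) style attack; the toy model, random data, and hyperparameters are placeholders, not the actual systems discussed here.

```python
# Minimal sketch of adversarial training with an L-infinity PGD attack.
# All model, data, and hyperparameter choices are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.03, alpha=0.01, steps=10):
    """Search within an L-inf ball of radius eps for a perturbation that raises the loss."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)   # random start inside the ball
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()       # ascend the loss
        x_adv = x + torch.clamp(x_adv - x, -eps, eps)      # project back into the ball
        x_adv = torch.clamp(x_adv, 0.0, 1.0)               # keep a valid pixel range
    return x_adv.detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Toy batch standing in for a real image loader.
x = torch.rand(32, 1, 28, 28)
y = torch.randint(0, 10, (32,))

for step in range(5):
    x_adv = pgd_attack(model, x, y)                        # craft worst-case inputs
    loss = F.cross_entropy(model(x_adv), y)                # train on them instead of clean x
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice the random tensors would be replaced by a real data loader, and the attack budget (eps, alpha, steps) tuned to the dataset and threat model.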

In her talk, "Robustness in GANs and in Black-box Optimization," Stefanie Jegelka showed how the learner in a generative adversarial network, or GAN, can be made to withstand manipulations to its input, leading to much better performance.

Q: The neural networks that power deep learning seem to learn almost effortlessly: Feed them enough data and they can outperform humans at many tasks.

A: Visualizing the object-recognition process allows software developers to get a more fine-grained understanding of how the network learns. Another way to achieve interpretability is to precisely define the properties that make the model understandable, and then train the model to find that type of solution. Tommi Jaakkola showed in his talk, "Interpretability and Functional Transparency," that models can be trained to be linear or have other desired qualities locally while maintaining the network's overall flexibility.
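Jaakkola's functional-transparency approach builds the desired local structure into training itself; as a rough, post-hoc stand-in for the broader idea of local linearity, the sketch below fits a linear surrogate to a black-box model in a small neighborhood of one input, so that the surrogate's weights read as a local explanation. The black_box function, sampling radius, and sample count are illustrative assumptions, not part of the work presented at the symposium.

```python
# Sketch: approximate a black-box model around one input with a local linear surrogate
# whose coefficients serve as a simple, local explanation. Illustrative only.
import numpy as np

def black_box(x):
    """Stand-in for any trained model mapping feature vectors to a score."""
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

def local_linear_explanation(f, x0, radius=0.1, n_samples=500, seed=0):
    """Fit y ~ w.(x - x0) + b on points sampled near x0; w approximates the local gradient."""
    rng = np.random.default_rng(seed)
    perturbations = rng.uniform(-radius, radius, size=(n_samples, x0.size))
    X = x0 + perturbations
    y = f(X)
    A = np.hstack([perturbations, np.ones((n_samples, 1))])  # design matrix [delta_x, 1]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1], coef[-1]                               # local weights w, offset b

x0 = np.array([0.3, -1.2])
w, b = local_linear_explanation(black_box, x0)
print("local weights:", w)  # approximately [cos(0.3), -1.2] for this toy function
```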

Rob Wood: The Mechanical Side of AI

EECS Colloquium, Wednesday, February 1, 2017, 306 Soda Hall (HP Auditorium), 4-5 p.m. Captions available upon request.

2017 BEARS: Panel 2 - Long-Term Future of (Artificial) Intelligence

Berkeley EECS Annual Research Symposium, 2/9/17. Panel 2 - Long-Term Future of (Artificial) Intelligence. Provably Beneficial AI - Stuart Russell (0:00-24:25) ...

Chin-Liang Chang: Artificial Intelligence with Search-Based Function Computation Model

Symposium on Fuzzy Logic and Fuzzy Sets: A Tribute to Lotfi Zadeh, February 5, 2018. Captions available upon request.

Ranveer Chandra: FarmBeats

Empowering Farmers with Artificial Intelligence Agriculture Solutions. EECS Colloquium, Wednesday, September 12, 2018, 306 Soda Hall (HP Auditorium), 4-5 p.m. ...

Manuel Blum: Towards a Conscious AI

A Computer Architecture Inspired by Neuroscience. Berkeley ACM A.M. Turing Laureate Colloquium, October 17, 2018. Captions available upon request.

Dave Fick, Mythic, Inc.

"The Benefits of Mixed-Signal Computing for Artificial Intelligence" EECS Colloquium Wednesday, September 26, 2018 306 Soda Hall (HP Auditorium) 4-5p.

Fei-Fei Li: A Quest for Visual Intelligence in Computers

EECS Colloquium, UC Berkeley, Wednesday, November 9, 2016. Captions available upon request.

The Future of Intelligent Systems - Sarah Bird (Microsoft)

40 Years of Patterson Symposium, Saturday, May 7, 2016.

AAAI-17 Invited Panel on AI History: Expert Systems

The event was recorded on February 6, 2017. Moderator: David C. Brock (Historian, Computer History Museum, Mountain View, California). Panelists: Edward ...