AI researchers allege that machine learning is alchemy
- On Sunday, June 3, 2018
Ali Rahimi, a researcher in artificial intelligence (AI) at Google in San Francisco, California, took a swipe at his field last December—and received a 40-second ovation for it.
As Rahimi puts it, 'I'm trying to draw a distinction between a machine learning system that's a black box and an entire field that's become a black box.'
Without deep understanding of the basic tools needed to build and train new algorithms, he says, researchers creating AIs resort to hearsay, like medieval alchemists.
For example, he says, they adopt pet methods to tune their AIs' 'learning rates'—how much an algorithm corrects itself after each mistake—without understanding why one is better than others.
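As a minimal illustration (my own, not from the article), the learning rate's role can be seen in a bare gradient-descent loop: it is the single knob that scales every correction, and a poor choice makes the same algorithm converge or blow up.

```python
# Minimal gradient descent on the loss f(w) = (w - 3)^2.
# The gradient is f'(w) = 2 * (w - 3); the minimum is at w = 3.

def descend(learning_rate, steps=50, w=0.0):
    for _ in range(steps):
        grad = 2 * (w - 3)          # slope of the loss at the current weight
        w -= learning_rate * grad   # the learning rate scales each correction
    return w

print(descend(0.1))   # converges near w = 3
print(descend(1.1))   # overshoots on every step and diverges
```

With `learning_rate=0.1` the error shrinks by a factor of 0.8 per step; with `1.1` it grows by 1.2 per step. Practitioners routinely tune this value by trial and error, which is exactly the kind of folklore Rahimi is criticizing.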
He also points to a case in which other researchers stripped most of the complexity from a state-of-the-art language translation algorithm and found that it translated from English to German or French better and more efficiently, suggesting its creators didn't fully grasp what those extra parts were good for.
Ben Recht, a computer scientist at the University of California, Berkeley, and coauthor of Rahimi's alchemy keynote talk, says AI needs to borrow from physics, where researchers often shrink a problem down to a smaller 'toy problem.'
Some AI researchers are already taking that approach, testing image recognition algorithms on small black-and-white handwritten characters before tackling large color photos, to better understand the algorithms' inner mechanics.
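The toy-problem idea can be sketched with a deliberately tiny example (hypothetical, for illustration only): a nearest-pattern classifier over 3x3 binary "characters", small enough that every decision it makes can be checked by hand before scaling the same idea up.

```python
# Toy-problem sketch: classify tiny 3x3 binary "characters" (flattened to
# 9-element lists) with a nearest-pattern rule under Hamming distance.
# The patterns and labels below are invented for illustration.

PATTERNS = {
    "vertical":   [0, 1, 0,
                   0, 1, 0,
                   0, 1, 0],
    "horizontal": [0, 0, 0,
                   1, 1, 1,
                   0, 0, 0],
}

def classify(image):
    """Return the label whose reference pattern differs in the fewest pixels."""
    def distance(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(PATTERNS, key=lambda label: distance(image, PATTERNS[label]))

# A vertical bar with one flipped pixel is still recognized.
noisy = [0, 1, 0,
         0, 1, 1,
         0, 1, 0]
print(classify(noisy))  # prints "vertical"
```

At this scale the "inner mechanics" are fully transparent: one can enumerate every input and see exactly why each is classified as it is, which is the point of shrinking a problem before trusting the method on large color photos.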
Yann LeCun, Facebook's chief AI scientist in New York City, worries that shifting too much effort away from bleeding-edge techniques toward core understanding could slow innovation and discourage AI's real-world adoption.
- On Wednesday, September 18, 2019
The Trouble with Bias - NIPS 2017 Keynote - Kate Crawford #NIPS2017
Kate Crawford is a leading researcher, academic and author who has spent the last decade studying the social implications of data systems, machine learning ...
Ali Rahimi - NIPS 2017 Test-of-Time Award presentation
"Let's take machine learning from alchemy to electricity." The Test-of-Time Award was granted for the paper "Random Features for Large-Scale Kernel ...
NIPS 2017: Helping AI systems interact more naturally
A team of researchers from IBM and the University of Illinois at Urbana–Champaign discuss their paper from NIPS 2017, which outlines a new supervised ...
Brendan Frey: Why AI Will Make it Possible to Reprogram the Human Genome (NIPS 2017 keynote)
Abstract: We have figured out how to write to the genome using DNA editing, but we don't know what the outcomes of genetic modifications will be. This is called ...
Head of NVIDIA AI Labs Sums Up NIPS 2017
We had an amazing week at the NIPS 2017 research conference. Catch up on what you missed here.
Ali Rahimi's talk at NIPS (NIPS 2017 Test-of-Time Award presentation)
NIPS 2015 Workshop (Zou): The 1st International Workshop "Feature Extraction: Modern Quest...
UPDATE: The workshop proceedings will be published in a special issue of The Journal Of Machine Learning Research prior to the workshop date. For that ...
NIPS 2017: Reducing Unfair Discrimination in AI
IBM Research staff member Kush Varshney discusses the paper, "Reducing unfair discrimination in AI," that will be presented at NIPS 2017 and details a ...
Babble Labble: Learning From Natural Language Explanations (NIPS 2017 Demo)
This video accompanied the Babble Labble Demo presented at NIPS 2017. The Babble Labble framework converts natural language explanations into weak ...
Yee Whye Teh: On Bayesian Deep Learning and Deep Bayesian Learning (NIPS 2017 Keynote)
Breiman Lecture by Yee Whye Teh on Bayesian Deep Learning and Deep Bayesian Learning. Abstract: Probabilistic and Bayesian reasoning is one of the ...