Questioning AI: what can scientists learn from artificial intelligence? – Science Weekly podcast
In October 2017, researchers at Google DeepMind published a paper on an artificial intelligence (AI) program called AlphaGo Zero.
Talking about the achievement, lead researcher David Silver explained that AlphaGo Zero had invented “its own variants which humans don’t even know about or play at the moment.” And it’s here that a new and exciting use for AI comes to light.
Highlights from our work in 2016
During the games, AlphaGo played a handful of highly inventive winning moves, several of which - including move 37 in game two - were so surprising they overturned hundreds of years of received wisdom, and have since been examined extensively by players of all levels.
Unlike the earlier versions of AlphaGo which learnt how to play the game using thousands of human amateur and professional games, AlphaGo Zero learnt to play the game of Go simply by playing games against itself, starting from completely random play. In doing so, it surpassed the performance of all previous versions, including those which beat the World Go Champions Lee Sedol and Ke Jie, becoming arguably the strongest Go player of all time.
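The self-play recipe described above (start from completely random play, play against yourself, and reinforce the moves that end up winning) can be sketched in miniature. The toy below learns single-pile Nim, where players alternately take 1 to 3 stones and whoever takes the last stone wins; it is vastly simpler than DeepMind's network-plus-tree-search system, and every name and parameter here is illustrative, not anything from DeepMind's code.

```python
import random

# Single-pile Nim: players alternately take 1-3 stones; taking the last stone wins.
# One shared value table is trained purely by self-play from random play, echoing
# (in miniature) the self-play idea described above.

ACTIONS = (1, 2, 3)

def train(episodes=50000, alpha=0.1, epsilon=0.2, start=21, seed=0):
    rng = random.Random(seed)
    Q = {}  # (stones_remaining, action) -> estimated value for the player to move
    for _ in range(episodes):
        stones, history = start, []  # history of (state, action), one per move
        while stones > 0:
            legal = [a for a in ACTIONS if a <= stones]
            if rng.random() < epsilon:
                a = rng.choice(legal)  # explore a random move
            else:
                a = max(legal, key=lambda a: Q.get((stones, a), 0.0))  # exploit
            history.append((stones, a))
            stones -= a
        # The player who made the last move took the final stone and won.
        # Walk backwards, alternating +1 (winner's moves) and -1 (loser's moves),
        # nudging each visited (state, action) toward the game's outcome.
        reward = 1.0
        for s, a in reversed(history):
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (reward - Q.get((s, a), 0.0))
            reward = -reward
    return Q

def best_move(Q, stones):
    legal = [a for a in ACTIONS if a <= stones]
    return max(legal, key=lambda a: Q.get((stones, a), 0.0))

Q = train()
# Optimal Nim play leaves the opponent a multiple of 4 stones:
print(best_move(Q, 5))  # from 5 stones, take 1 (leaving 4)
print(best_move(Q, 7))  # from 7 stones, take 3 (leaving 4)
```

Nothing game-specific is given to the learner beyond the rules and who won, which is the same spirit as the "no human data" setup the article describes, shrunk to a table instead of a deep network.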
A computer taught itself the toughest game on the planet. And it's just getting started
Researchers at DeepMind, part of the Google family of companies, have taken a new approach with this AI. According to team leader Dr. David Silver, older versions learned by assimilating the playing styles of the best human Go players. The new version, called AlphaGo Zero, taught itself how to play, and practiced against itself as well. AlphaGo Zero then beat the program that beat the best humans in 100 straight games.
How to Go from good to better
The new self-taught version of AlphaGo is not only more effective than older versions, it's more creative. In teaching itself, it re-discovered many of the patterns of play that humans have developed and used, but also found new ones on its own which were superior to the ones human players used.
DeepMind has a bigger plan for its newest Go-playing AI
The software is a distillation of previous systems DeepMind has built: It’s more powerful, but simpler and doesn’t require the computational power of a Google server to run.
AlphaGo Zero isn’t the first algorithm to learn from self-play—Elon Musk’s nonprofit OpenAI has used similar techniques to train an AI playing a video game—but its capabilities show that it’s one of the most powerful examples of the technology to date.
“By not using this human data, by not using human features or human expertise in any fashion, we’ve actually removed the constraints of human knowledge,” said DeepMind’s David Silver.
Instead of Go moves, DeepMind claims the AlphaGo Zero algorithm will be able to learn the interactions between proteins in the human body to further scientific research, or the laws of physics to help create new building materials.
The idea of using AI to help mine the vast potential combinations of molecules to build a super-battery or some other futuristic device isn’t new.
DeepMind’s first paper in Nature last year showed that the algorithm learned for a while from how humans played the game, and then started playing against itself to refine those skills.
AlphaGo Zero could beat the version of AlphaGo that faced Lee Sedol after training for just 36 hours, and earned its 100-0 score after 72 hours.
It’s not brute computing power that did the trick, either: AlphaGo Zero was trained on a single machine with four of Google’s specialty AI chips (TPUs), while the previous version was trained on servers with 48 TPUs.
Simple, general methods are valued in AI research because less effort is required to bring the same solution to other problems, Tim Salimans, an AI research scientist at OpenAI, told Quartz in an email.
AlphaGo Zero Shows Machines Can Become Superhuman Without Any Help
Whereas the original AlphaGo learned by ingesting data from hundreds of thousands of games played by human experts, AlphaGo Zero, also developed by the Alphabet subsidiary DeepMind, started with nothing but a blank board and the rules of the game.
Hassabis says the techniques used to build AlphaGo Zero are powerful enough to be applied in real-world situations where it’s necessary to explore a vast landscape of possibilities, including drug discovery and materials science.
Reinforcement learning is inspired by the way animals seem to learn through experimentation and feedback, and DeepMind has used the technique to achieve superhuman performance in simpler Atari games.
It is already being tested as a way to teach robots to grasp awkward objects, for example, and as a means of conserving energy in data centers by reconfiguring hardware on the fly.
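The experimentation-and-feedback loop behind reinforcement learning can be illustrated with its simplest setting, a multi-armed bandit: the agent is never told which "arm" pays best and must discover it from reward alone. This is a hypothetical sketch with made-up payout probabilities, not DeepMind's Atari setup (which used deep neural networks rather than a table of estimates).

```python
import random

# Minimal trial-and-error learning: an epsilon-greedy agent discovers the best
# of several slot-machine arms purely from reward feedback. The arm payout
# probabilities below are invented for illustration.

def run_bandit(probs, steps=5000, epsilon=0.1, seed=1):
    rng = random.Random(seed)
    counts = [0] * len(probs)
    values = [0.0] * len(probs)  # running estimate of each arm's payout rate
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(probs))   # explore: try a random arm
        else:
            arm = values.index(max(values))   # exploit: use current belief
        reward = 1.0 if rng.random() < probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]  # incremental mean
    return values, counts

values, counts = run_bandit([0.2, 0.5, 0.8])
print(values.index(max(values)))  # the agent settles on the best arm (index 2)
```

The same explore/exploit tension, scaled up enormously, is what lets systems like the Atari players and AlphaGo Zero improve from feedback without human examples.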
“By not using human data or human expertise, we’ve actually removed the constraints of human knowledge,” says David Silver, the lead researcher at DeepMind and a professor at University College London.
DeepMind is already the darling of the AI industry, and its latest achievement is sure to grab headlines and spark debate about progress toward much more powerful forms of AI.
“It’s a nice illustration of the recent progress in deep learning and reinforcement learning, but I wouldn’t read too much into it as a sign of what computers can learn without human knowledge,” Domingos says.
“What would be really impressive would be if AlphaGo beat [legendary South Korean champion] Lee Sedol after playing roughly as many games as he played in his career before becoming a champion.”
But despite the work still to be done, Hassabis is hopeful that within 10 years AI will play an important role in solving important problems in science, medicine, or other fields.
The Future of Go Summit: AlphaGo Pair Go & Team Go
Watch AlphaGo team up with some of China's top pro players to explore the hidden mysteries of the game in two exciting match formats: Pair Go: where one Chinese pro will play against another...
Match 2 - Google DeepMind Challenge Match: Lee Sedol vs AlphaGo
Watch DeepMind's program AlphaGo take on the legendary Lee Sedol (9-dan pro), the top Go player of the past decade, in a $1M 5-game challenge match in Seoul. This is the livestream for Match...
Campus Presents: Demis Hassabis, Founder and CEO, DeepMind
Demis Hassabis is the founder and CEO of DeepMind, a neuroscience-inspired AI company, bought by Google in Jan 2014 in their largest European acquisition to date. He leads projects including...
Exploring the mysteries of Go with AlphaGo and China's top players
DeepMind is excited to share the news that AlphaGo, China's top Go players, and leading A.I. experts from Google and China will come together in Wuzhen in May 2017 for the “Future of Go Summit”.
John Searle: "Consciousness in Artificial Intelligence" | Talks at Google
John Searle is the Slusser Professor of Philosophy at the University of California, Berkeley. His Talk at Google is focused on the philosophy of mind and the potential for consciousness in...
Demis Hassabis, CEO, DeepMind Technologies - The Theory of Everything
Artificial Intelligence Panel: Who is in Control of AI? - Part 1
Will progress in Artificial Intelligence provide humanity with a boost of unprecedented strength to realize a better future, or could...
Demis Hassabis on AlphaGo: its legacy and the 'Future of Go Summit' in Wuzhen, China
DeepMind co-founder and CEO, Demis Hassabis discusses the story of AlphaGo so far, and his hopes for this month's 'Future of Go Summit' in Wuzhen, China.
Adam Kucharski: "The Perfect Bet" | Talks at Google
Adam came into our London office to discuss the long and tangled history between betting and science, and explains why gambling continues to generate insights into luck and decision-making....
Google I/O Keynote (Google I/O '17)
Join us to learn about product and platform innovation at Google. See all the talks from Google I/O '17 here: Watch more Android talks at I/O '17 here: