Artificial Intelligence Goes to the Arcade

A shaky video, recorded with a mobile phone and smuggled out of the inaugural First Day of Tomorrow technology conference, in April, 2014, shows an artificially intelligent computer program in its first encounter with Breakout, the classic Atari arcade game.

The program has since achieved the same feat with an angling game (Fishing Derby, 1980), a chicken-crossing-the-road game (Freeway, 1981), an armored-vehicle game (Robot Tank, 1983), a martial-arts game (Kung-Fu Master, 1984), and twenty-five others. In more than a dozen of them, including Stargunner and Crazy Climber, from 1982, it made the best human efforts look pathetic.

Google bought the firm for six hundred and fifty million dollars in January, 2014, soon after Hassabis first demonstrated his program’s superhuman gaming abilities, at a machine-learning workshop in a Harrah’s casino on the edge of Lake Tahoe.

Apple’s Siri uses such a network to decipher speech, sorting sounds into recognizable chunks before drawing on contextual clues and past experiences to guess at how best to group them into words. Siri’s deductive powers improve (or ought to) every time you speak to her or correct her mistakes.

Rather than staring uncomprehendingly at the noise, however, a program like DeepMind’s will start analyzing those pixels—sorting them by color, finding edges and patterns, and gradually developing an ability to recognize complex shapes and the ways in which they fit together.
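
In code, the first step of that pipeline is plain convolution: sliding a small numeric filter across the frame to produce a map of where some pattern occurs. Below is a minimal sketch in Python using a hand-built edge-detection kernel; DeepMind's network learns its kernels from play rather than using fixed ones like this.

    import numpy as np

    def convolve2d(frame, kernel):
        """Slide a small kernel over a grayscale frame, producing a feature map."""
        kh, kw = kernel.shape
        h, w = frame.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for y in range(out.shape[0]):
            for x in range(out.shape[1]):
                out[y, x] = np.sum(frame[y:y + kh, x:x + kw] * kernel)
        return out

    # A Sobel kernel responds wherever pixel intensity changes sharply,
    # i.e. at the edges of paddles, bricks, and balls.
    sobel_x = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]])

    frame = np.random.rand(84, 84)      # stand-in for a preprocessed Atari frame
    edges = convolve2d(frame, sobel_x)  # large values mark vertical edges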

Combined with the deep neural network, this reinforcement-learning scheme gives the program more or less the qualities of a good human gamer: the ability to interpret the screen, a knack for learning from past mistakes, and an overwhelming drive to win.
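
Put schematically, those three qualities form a perceive-act-learn loop. The skeleton below is a hypothetical outline, not DeepMind's code; env, choose_action, and update stand in for an Atari emulator, the network's action selection, and its learning step.

    def play_episode(env, agent):
        """One game: the agent sees pixels, acts, and learns from the score."""
        frame = env.reset()
        total_reward, done = 0.0, False
        while not done:
            action = agent.choose_action(frame)                    # interpret the screen
            next_frame, reward, done = env.step(action)
            agent.update(frame, action, reward, next_frame, done)  # learn from mistakes
            total_reward += reward                                 # the score it is driven to maximize
            frame = next_frame
        return total_reward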

Whipping humanity’s ass at Fishing Derby may not seem like a particularly noteworthy achievement for artificial intelligence—nearly two decades ago, after all, I.B.M.’s Deep Blue computer beat Garry Kasparov, a chess grandmaster, at his own more intellectually aspirational game—but according to Zachary Mason, a novelist and computer scientist, it actually is.

Hassabis began working as a game designer in 1994, at the age of seventeen; his first project was the Golden Joystick-winning Theme Park, in which players got ahead by, among other things, hiring restroom-maintenance crews and oversalting snacks in order to boost beverage sales. He is well aware that DeepMind’s current system, despite being state of the art, is at least five years away from being a decade behind the gaming curve.

Because of the rote reinforcement learning, he said, “it’s overexploiting the knowledge that it already knows.” In the longer term, after DeepMind has worked its way through Warcraft, StarCraft, and the rest of the Blizzard Entertainment catalogue, the team’s goal is to build an A.I.
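
A standard remedy for that over-exploitation is epsilon-greedy action selection: with small probability, the agent ignores what it already knows and tries something random. A minimal sketch, illustrative rather than DeepMind's implementation:

    import random

    def epsilon_greedy(q_values, epsilon=0.1):
        """Mostly exploit the best-known action; occasionally explore a random one."""
        if random.random() < epsilon:
            return random.randrange(len(q_values))                   # explore
        return max(range(len(q_values)), key=lambda a: q_values[a])  # exploit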

Beyond that challenge lies the much thornier question of whether DeepMind’s chosen combination of a deep neural network and reinforcement learning could, on its own, ever lead to conceptual cognition—not only a fluency with the mechanics of, say, 2001’s Sub Command but also an understanding of what a submarine, water, or oxygen are.

DeepMind founder Demis Hassabis on how AI will shape the future

The main future uses of AI that you’ve brought up this week have been healthcare, smartphone assistants, and robotics.

We announced a partnership with the NHS a couple of weeks ago but that was really just to start building a platform that machine learning can be used in.

I think the sort of things you’ll see this kind of AI do is medical diagnosis of images and then maybe longitudinal tracking of vital signs or quantified self over time, and helping people have healthier lifestyles.

Well, NHS software as I understand it is pretty terrible, so I think the first step is trying to bring that into the 21st century.

It’s mostly big multinational corporations that are doing this software so they don’t really pay attention to the users, whereas we’re designing it more in a startup sort of way where you really listen to the feedback from your users and you’re kind of co-designing it with them.

Actually, the AlphaGo algorithm, this is something we’re going to try in the next few months — we think we could get rid of the supervised learning starting point and just do it completely from self-play, literally starting from nothing.

It’d take longer, because the trial and error when you’re playing randomly would take longer to train, maybe a few months.

The Atari games that we did last year, playing from the pixels — that didn’t bootstrap from any human knowledge, that started literally from doing random things on screen.

The problem is you’ve made a hundred actions or moves in Go, and you don’t know exactly which ones were responsible for winning or losing, so the signal’s quite weak.
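
This is the classic credit-assignment problem: one win-or-lose reward at the end of the game has to be spread back over every move. A common workaround, sketched below under the usual discounting assumption, is to credit each move with the final outcome, attenuated by its distance from the end; which moves actually earned it remains ambiguous, hence the weak signal.

    def discounted_returns(num_moves, final_reward, gamma=0.99):
        """Credit every move with the final result, discounted by distance from it."""
        returns = [0.0] * num_moves
        g = final_reward
        for t in reversed(range(num_moves)):
            returns[t] = g
            g *= gamma  # earlier moves receive slightly less credit
        return returns

    # A 100-move won game: the last move is credited 1.0, the first about 0.37.
    credits = discounted_returns(100, final_reward=1.0)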

Maybe looking four to five, five-plus years away you’ll start seeing a big step change in capabilities.

Alpha Zero’s “Alien” Chess Shows the Power, and the Peculiarity, of AI

The latest AI program developed by DeepMind is not only brilliant and remarkably flexible—it’s also quite weird.

DeepMind published a paper this week describing a game-playing program it developed that proved capable of mastering chess and the Japanese game shogi, having already mastered the game of Go.

“It plays in a third, almost alien, way,” Hassabis said. Besides showing how brilliant machine-learning programs can be at a specific task, this shows that artificial intelligence can be quite different from the human kind.

What’s also remarkable, though, Hassabis explained, is that it sometimes makes seemingly crazy sacrifices, like offering up a bishop and queen to exploit a positional advantage that led to victory.

The original AlphaGo, designed specifically for Go, was a big deal because it was capable of learning to play a game that is enormously complex and is difficult to teach, requiring an instinctive sense of board positions.

It did this partially by training a large neural network using an approach known as reinforcement learning, which is modeled on the way animals seem to learn (see “Google’s AI Masters Go a Decade Earlier Than Expected”).

Josh Tenenbaum, a professor at MIT who studies human intelligence, said that if we want to develop real, human-level artificial intelligence, we should study the flexibility and creativity that humans exhibit.

Google reveals secret test of AI bot to beat top Go players

An artificial intelligence (AI) program from Google-owned company DeepMind has reached superhuman level at the strategy game Go — without learning from any human moves.

In the nearer term, though, it could enable programs to take on scientific challenges such as protein folding or materials research, said DeepMind chief executive Demis Hassabis at a press briefing.

“We’re quite excited because we think this is now good enough to make some real progress on some real problems.” Previous Go-playing computers developed by DeepMind, which is based in London, began by training on more than 100,000 human games played by experts.

After 40 days of training and 30 million games, the AI was able to beat the world's previous best 'player' — another DeepMind AI known as AlphaGo Master.

That the team could build an algorithm that surpassed previous versions using less training time and computing power “is nothing short of amazing”, he adds.

Like its predecessors, AlphaGo Zero uses a deep neural network — a type of AI inspired by the structure of the brain — to learn abstract concepts from the boards.
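
Before any learning can happen, each board position has to be encoded as numbers. A common scheme, shown here as an illustrative assumption rather than AlphaGo Zero's exact input format, is one binary plane per player:

    import numpy as np

    def encode_board(board):
        """Turn a 19x19 grid of {0: empty, 1: black, 2: white} into two binary planes."""
        board = np.asarray(board)
        black = (board == 1).astype(np.float32)
        white = (board == 2).astype(np.float32)
        return np.stack([black, white])  # shape (2, 19, 19): the network's input

    planes = encode_board(np.zeros((19, 19), dtype=int))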

It started off trying greedily to capture stones, as beginners often do, but after three days it had mastered complex tactics used by human experts.

It still required a huge amount of computing power — four of the specialized chips called tensor processing units, which Hassabis estimated to be US$25 million of hardware.

Generating examples of protein folding can involve years of painstaking crystallography, so there are few data to learn from, and there are too many possible solutions to predict structures from amino-acid sequences using a brute-force search.
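
A back-of-the-envelope calculation shows why brute force is hopeless here. The three-conformations-per-residue figure is the rough textbook assumption behind Levinthal's paradox, not a number from DeepMind:

    residues = 100                 # a modest-sized protein
    conformations_per_residue = 3  # rough assumption
    total = conformations_per_residue ** residues
    print(f"{total:.2e}")          # about 5.15e+47 candidate structures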

DeepMind

The company has created a neural network that learns how to play video games in a fashion similar to that of humans, as well as a neural network that may be able to access an external memory like a conventional Turing machine, resulting in a computer that mimics the short-term memory of the human brain.[7][8]
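
The core trick in such memory-augmented networks is a differentiable read: instead of fetching one slot, the controller takes a similarity-weighted average over all slots, so learning signals can flow back through the memory access. A minimal content-based read in NumPy, an illustrative sketch of the idea rather than DeepMind's Neural Turing Machine code:

    import numpy as np

    def content_read(memory, key, sharpness=1.0):
        """Read by similarity to a key: a soft blend of every memory slot."""
        scores = memory @ key * sharpness  # memory: (slots, width), key: (width,)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()           # softmax attention over slots
        return weights @ memory            # weighted average of all slots

    memory = np.random.randn(8, 4)         # 8 slots, each a 4-number vector
    value = content_read(memory, memory[3].copy(), sharpness=5.0)  # tends to recall slot 3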

In 2017, a more generic program, AlphaZero, beat the most powerful programs playing go, chess and shogi (Japanese chess) after a few hours of play against itself using reinforcement learning.[11]

During one of the interviews, Demis Hassabis said that the start-up began working on artificial intelligence technology by teaching it how to play old games from the seventies and eighties, which are relatively primitive compared to the ones that are available today.

DeepMind has opened a new unit called DeepMind Ethics and Society, focused on the ethical and societal questions raised by artificial intelligence and featuring prominent transhumanist Nick Bostrom as advisor.[32]

In 2017 DeepMind released GridWorld, an open-source testbed for evaluating whether an algorithm learns to disable its kill switch or otherwise exhibits certain undesirable behaviours.[40][41]

To date, the company has published research on computer systems that are able to play games, ranging from strategy games such as Go[42] to arcade games.

According to Shane Legg, human-level machine intelligence can be achieved 'when a machine can learn to play a really wide range of games from perceptual stream input and output, and transfer understanding across games[...].'[43]

Hassabis has mentioned the popular e-sport game StarCraft as a possible future challenge, since it requires a high level of strategic thinking and handling imperfect information.[44]

Unlike other AIs, such as IBM’s Deep Blue or Watson, which were developed for a pre-defined purpose and only function within that scope, DeepMind’s system is, the company claims, not pre-programmed: it learns from experience, using only raw pixels as data input.

Without altering the code, the AI begins to understand how to play the game, and after some time plays, in a few games (most notably Breakout), more efficiently than any human ever could.[47]

In October 2015, a computer Go program called AlphaGo, developed by DeepMind, beat the European Go champion Fan Hui, a 2 dan (out of 9 dan possible) professional, five to zero.[48]

Go is considered much more difficult for computers to win than other games such as chess, because its much larger number of possibilities makes it prohibitively difficult for traditional AI methods such as brute-force search.[48][49]
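
The usual back-of-the-envelope comparison makes the point; the branching factors and game lengths below are standard rough estimates, not figures from the cited sources:

    # Approximate game-tree size: branching_factor ** typical_game_length
    chess = 35 ** 80    # roughly a 124-digit number
    go = 250 ** 150     # roughly a 360-digit number
    print(len(str(chess)), len(str(go)))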

Reinforcement learning allows machines and software agents to automatically determine the ideal behavior within a specific context, in order to maximize their performance.
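
In its simplest tabular form, this is the Q-learning update: nudge the estimated value of a state-action pair toward the observed reward plus the best value attainable from the next state. A minimal sketch:

    from collections import defaultdict

    Q = defaultdict(lambda: defaultdict(float))  # Q[state][action] -> estimated value

    def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.99):
        """One Q-learning step toward reward + discounted best future value."""
        best_next = max(Q[next_state].values(), default=0.0)
        Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])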

After training, these networks employed a lookahead Monte Carlo tree search (MCTS), using the policy network to identify candidate high-probability moves, while the value network (in conjunction with Monte Carlo rollouts using a fast rollout policy) evaluated tree positions.[56]
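
That division of labor is usually expressed through the PUCT selection rule: each step down the tree picks the child maximizing its estimated value plus an exploration bonus scaled by the policy network’s prior. A sketch of the selection step; the node fields here are assumptions about a typical implementation, not the paper’s code:

    import math

    def select_child(children, c_puct=1.0):
        """Pick the move maximizing value plus prior-scaled exploration bonus."""
        total_visits = sum(ch["visits"] for ch in children.values())
        def score(ch):
            q = ch["value_sum"] / ch["visits"] if ch["visits"] else 0.0
            u = c_puct * ch["prior"] * math.sqrt(total_visits + 1) / (1 + ch["visits"])
            return q + u
        return max(children, key=lambda move: score(children[move]))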

DeepMind has also collaborated with the Android team at Google for the creation of two new features which will be available to people with devices running Android P, the ninth installment of Google's mobile operating system.

It is the first time DeepMind has used these techniques on such a small scale, with typical machine learning applications requiring orders of magnitude more compute power.[63]

In August 2016, a research programme with University College London Hospital was announced with the aim of developing an algorithm that can automatically differentiate between healthy and cancerous tissues in head and neck areas.[65]

Staff at the Royal Free Hospital were reported as saying in December 2017 that access to patient data through the app had saved a ‘huge amount of time’ and made a ‘phenomenal’ difference to the management of patients with acute kidney injury.

Additionally, in February 2018, DeepMind announced it was working with the U.S. Department of Veterans Affairs in an attempt to use machine learning to predict the onset of Acute Kidney Injury in patients, and also more broadly the general deterioration of patients during a hospital stay so that doctors and nurses can more quickly treat patients in need.[69]

The agreement shows DeepMind Health had access to admissions, discharge and transfer data, accident and emergency, pathology and radiology, and critical care at these hospitals.

This included personal details such as whether patients had been diagnosed with HIV, suffered from depression or had ever undergone an abortion in order to conduct research to seek better outcomes in various health conditions.[70][71]

In May 2017, Sky News published a leaked letter from the National Data Guardian, Dame Fiona Caldicott, revealing that in her 'considered opinion' the data-sharing agreement between DeepMind and the Royal Free took place on an 'inappropriate legal basis'.[74]

Google DeepMind's Deep Q-learning playing Atari Breakout

Google DeepMind created an artificial intelligence program using deep reinforcement learning that plays Atari games and improves itself to a superhuman level.

Google's Deep Mind Explained! - Self Learning A.I.

Demis Hassabis, CEO, DeepMind Technologies - The Theory of Everything

Demis Hassabis, DeepMind - Learning From First Principles - Artificial Intelligence NIPS2017

December 9th, 2017. Demis Hassabis is a British artificial intelligence researcher, neuroscientist, computer game designer, entrepreneur, and world-class games ...

Google's DeepMind AI Just Taught Itself To Walk

Google's artificial intelligence company, DeepMind, has developed an AI that has managed to learn how to walk, run, jump, and climb without any prior guidance ...

Deepmind artificial intelligence @ FDOT14

A shaky smartphone recording with poor audio: DeepMind technology wrecking old Atari games, presented by neuroscientist and game developer Demis Hassabis, DeepMind ...

Elon Musk’s A.I. Destroys Champion Gamer!

DeepMind - Multiple Scales of Reward & Task Learning - Jane Wang

Jane Wang of DeepMind presents Multiple scales of reward and task learning at NIPS2017 on December 7th, 2017.

Demis Hassabis: Towards General Artificial Intelligence

Dr. Demis Hassabis is the Co-Founder and CEO of DeepMind, the world's leading General Artificial Intelligence (AI) company, which was acquired by Google in ...

The Role of Multi-Agent Learning in Artificial Intelligence Research at DeepMind

In computer science, an agent can be thought of as a computational entity that repeatedly perceives the environment, and takes action so as to ...