Deep Learning Machine Teaches Itself Chess in 72 Hours, Plays at International Master Level

It’s been almost 20 years since IBM’s Deep Blue supercomputer beat the reigning world chess champion, Garry Kasparov, for the first time under standard tournament rules.

Since then, chess-playing computers have become significantly stronger, leaving the best humans little chance even against a modern chess engine running on a smartphone.

Matthew Lai, a graduate student at Imperial College London, has created an artificial intelligence machine called Giraffe that has taught itself to play chess by evaluating positions much as humans do, in an entirely different way from conventional chess engines.

Neural networks are made of layers of interconnected nodes, and the training process uses lots of examples to fine-tune those connections so that the network produces a specific output given a certain input, to recognize the presence of a face in a picture, for example.
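
As a purely illustrative sketch of what that fine-tuning looks like (none of this is Giraffe’s code; the data, network size, and learning rate below are invented for the example), the following Python snippet nudges a one-layer network’s weights after each example until its outputs match the desired labels:

```python
import numpy as np

# Toy illustration of "fine-tuning the connections": a one-layer network
# adjusts its weights after each example so that its output moves toward
# the desired label. Inputs and labels are random stand-ins, not real data.
rng = np.random.default_rng(0)
inputs = rng.normal(size=(100, 8))               # 100 examples, 8 features each
labels = (inputs.sum(axis=1) > 0).astype(float)  # arbitrary target pattern

weights = np.zeros(8)
bias = 0.0
lr = 0.1  # learning rate

for epoch in range(200):
    for x, y in zip(inputs, labels):
        pred = 1.0 / (1.0 + np.exp(-(x @ weights + bias)))  # sigmoid output
        error = pred - y
        weights -= lr * error * x   # nudge the connections toward the target
        bias -= lr * error

preds = (1.0 / (1.0 + np.exp(-(inputs @ weights + bias)))) > 0.5
print(f"training accuracy: {np.mean(preds == labels):.2f}")
```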

These so-called deep neural networks have become hugely powerful and now routinely outperform humans in pattern recognition tasks such as face recognition and handwriting recognition.

So it’s no surprise that deep neural networks ought to be able to spot patterns in chess too, and that’s exactly the approach Lai has taken. A crucial first step was deciding which positions to train the system on.

“For example, it doesn’t make sense to train the system on positions with three queens per side, because those positions virtually never come up in actual games,” he says.

At the same time, Lai did include positions with lopsided material, because although such unequal positions rarely arise in real chess games, they crop up all the time in the searches that the computer performs internally.

Having trained Giraffe, Lai’s final step was to test it, and here the results make for interesting reading. He tested his machine on a standard database called the Strategic Test Suite, which consists of 1,500 positions chosen to test an engine’s ability to recognize different strategic ideas.

“For example, one theme tests the understanding of control of open files, another tests the understanding of how bishop and knight’s values change relative to each other in different situations, and yet another tests the understanding of center control,” he says.

Giraffe’s score on this suite was comparable to those of the best conventional engines in the world. “[That] is remarkable because their evaluation functions are all carefully hand-designed behemoths with hundreds of parameters that have been tuned both manually and automatically over several years, and many of them have been worked on by human grandmasters,” he adds.

Lai says this probabilistic approach predicts the best move 46 percent of the time and places the best move in its top three ranking 70 percent of the time.

“Unlike most chess engines in existence today, Giraffe derives its playing strength not from being able to see very far ahead, but from being able to evaluate tricky positions accurately, and understanding complicated positional concepts that are intuitive to humans, but have been elusive to chess engines for a long time,” says Lai.

This Chess Engine Learns How to Beat Humans by Playing Against Itself

Generally speaking, the way chess engines like IBM's Deep Blue work is by brute force: churning through enormous numbers of possible moves and the positions that follow from them before picking one.

Giraffe's layered neural network, by contrast, is meant to mimic the workings of the human brain: it evaluates the game as a whole, the game piece by piece, and the game by what moves those pieces can make.

This is only possible because the man behind Giraffe, Matthew Lai, has used a specific set of data taken from real chess matches to train it. That doesn't mean the chess engine doesn't have its own share of troublesome issues.

If the data set is too limited, the machine will fall back on poor moves, either learning bad ones or just not learning how to perform the good ones. To make sure Giraffe's data set is extensive enough, Giraffe plays itself over and over, internalizing the best moves and decisions.
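
A rough sketch of that self-play idea (not Lai's actual code; a random move picker stands in for the real, evaluation-guided policy, and the python-chess library handles the rules) might look like this:

```python
import random
import chess

def self_play_game(max_moves=200):
    """Play one game of the engine against itself and return
    (position FEN, final outcome) pairs for later training.
    A random move picker stands in for the real policy."""
    board = chess.Board()
    positions = []
    while not board.is_game_over() and board.fullmove_number < max_moves:
        positions.append(board.fen())
        move = random.choice(list(board.legal_moves))  # placeholder policy
        board.push(move)
    result = {"1-0": 1.0, "0-1": -1.0}.get(board.result(), 0.0)
    return [(fen, result) for fen in positions]

# Grow the training set by repeated self-play.
dataset = []
for _ in range(10):
    dataset.extend(self_play_game())
print(len(dataset), "labelled positions collected")
```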

How one computer taught itself to be a chess ‘international master’ in 72 hours

The self-taught chess engine, known as “Giraffe,” was designed by graduate student Matthew Lai. Computers can already squash human opponents at chess by using their great computational speed to search vast numbers of possible moves and weigh how likely each is to ultimately yield checkmate; as a result, they generally make the strongest available move every time.

Before it ever played its first game, Giraffe “studied” 175 million chess positions, building its own understanding of the 1,500-year-old game. Lai then trained Giraffe at supernatural speed, making it ever better at its narrowly defined task.

Giraffe’s neural network gathers potential moves, mapping its options into branches to sort its “thinking.” It is then capable of “deciding which branches are most ‘interesting’ in any given position, and should be searched further, as well as which branches to discard,” Lai writes in his paper introducing the chess engine.
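
As a hedged illustration of that branch-selection idea (this is not Giraffe’s actual search algorithm, just the general shape of a probability-limited search; the uniform move probabilities and the 0.02 threshold are placeholders for what the trained network would supply), consider the following sketch:

```python
import chess

def interesting_lines(board, prob=1.0, threshold=0.05, policy=None):
    """Sketch of probability-limited search: instead of a fixed depth,
    keep expanding a branch only while the estimated probability that it
    matters stays above a threshold. `policy` maps a board to move
    probabilities; a uniform guess stands in for the real network."""
    moves = list(board.legal_moves)
    if not moves or prob < threshold:
        return [[]]
    move_probs = policy(board) if policy else {m: 1.0 / len(moves) for m in moves}
    lines = []
    for move in moves:
        child_prob = prob * move_probs.get(move, 0.0)
        if child_prob < threshold:
            continue                      # branch judged uninteresting: discard
        board.push(move)
        for rest in interesting_lines(board, child_prob, threshold, policy):
            lines.append([move] + rest)
        board.pop()
    return lines or [[]]

lines = interesting_lines(chess.Board(), threshold=0.02)
print(len(lines), "lines kept for deeper inspection")
```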

“Unlike most chess engines in existence today, Giraffe derives its playing strength not from being able to see very far ahead, but from being able to evaluate tricky positions accurately, and understanding complicated positional concepts that are intuitive to humans, but have been elusive to chess engines for a long time,” Lai told MIT Technology Review.

KyleMcDonell/NeuralChess

A simple chess engine which uses a neural network as its evaluation function.

The neural network takes a 373-tuple feature vector which represents a chess board state and returns a value between -1 and 1, where 1 indicates the game is won for white, 0 indicates the game is a tie, and -1 indicates the game is won for black.

The feature vector includes global, piece, and square features about the board state. It is designed to help the neural network latch onto important concepts about chess like positional advantage, mobility, and capturing.
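
A minimal stand-in for such an evaluation function (the hidden-layer size and random weights below are illustrative assumptions, not the repository's actual architecture) could be written as:

```python
import numpy as np

FEATURE_SIZE = 373  # size of the feature tuple described above

class EvalNet:
    """Minimal stand-in for the evaluation network: maps a 373-element
    feature vector to a score in [-1, 1] (+1 ~ winning for white,
    0 ~ drawish, -1 ~ winning for black). Layer sizes and random
    weights are illustrative only."""
    def __init__(self, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(scale=0.1, size=(FEATURE_SIZE, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(scale=0.1, size=(hidden, 1))
        self.b2 = np.zeros(1)

    def evaluate(self, features):
        h = np.tanh(features @ self.w1 + self.b1)     # hidden layer
        return float(np.tanh(h @ self.w2 + self.b2))  # squashed to [-1, 1]

# A zero vector is a meaningless placeholder for real
# global/piece/square features extracted from a position.
net = EvalNet()
print(net.evaluate(np.zeros(FEATURE_SIZE)))
```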

While it is theoretically possible for a network to learn these concepts from raw board states alone, doing so would dramatically increase training time, and because training neural networks is an inherently heuristic process, it might well never produce a competent engine.

The Chess Engine that Died So AlphaGo Could Live

AlphaGo, a computer program designed by Google DeepMind, has just one more game to go against top-ranked Go player Lee Sedol in Seoul, South Korea, in a five-game match reminiscent of the 1997 showdown between Deep Blue and Garry Kasparov.

The fact that we've moved from teaching computers chess to training them on how to play the more complex game of Go shows the advances computer scientists have made in programming intelligence.

That's a romantic way of saying it doesn't just brute force the calculations, but instead relies on patterns learned from chess games played by humans.

Artificial intelligence, on the other hand, is 'a synthetic thing that performs behavior that any reasonable person believes might have required intelligence,' in the words of Georgia Institute of Technology associate professor Mark Riedl.

The machine learning approach would consist of recording several hours of humans playing the game, and then showing this to the machine, which would then analyze this data, extract common patterns, and discover where they are applied in order to successfully play.

At the end of any given match, Giraffe essentially gave the moves used by the winner a more favorable rating because those moves ultimately turned out to be good.

The program then used those better moves in the next match—not unlike how a person learning chess might emulate their opponents after seeing strategies that worked.
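
In code, that outcome-driven adjustment might look something like the toy sketch below, where a plain dictionary of position scores stands in for Giraffe's evaluation network and the learning rate is an arbitrary choice:

```python
def update_from_game(values, game_positions, result, lr=0.05):
    """Nudge the stored evaluation of each position seen in a finished
    game toward that game's result (+1 white win, -1 black win, 0 draw).
    A dict of position -> score stands in for the real network update."""
    for fen in game_positions:
        old = values.get(fen, 0.0)
        values[fen] = old + lr * (result - old)
    return values

values = {}
# Hypothetical finished game: three recorded positions, white won.
update_from_game(values, ["start", "middlegame", "endgame"], result=1.0)
print(values)
```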

Deep Blue's decisions are essentially pure calculation: it goes through every possible move, picks out the best one according to an internal value derived from piece values and positional advantage, and then makes that move.
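
A bare-bones version of that calculate-everything approach (not Deep Blue's actual algorithm; the piece values and the tiny mobility bonus standing in for "positional advantage" are conventional textbook choices) looks roughly like this, using the python-chess library for the rules:

```python
import chess

# Rough, conventional piece values (in pawns); not Deep Blue's numbers.
PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board):
    """Hand-crafted evaluation: material plus a small mobility bonus
    standing in for positional advantage. Positive favours white."""
    score = 0.0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == chess.WHITE else -value
    mobility = board.legal_moves.count() / 100.0
    return score + (mobility if board.turn == chess.WHITE else -mobility)

def minimax(board, depth):
    """Search every legal move to a fixed depth; return (score, best move)."""
    if depth == 0 or board.is_game_over():
        return evaluate(board), None
    maximizing = board.turn == chess.WHITE
    best_score = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in board.legal_moves:
        board.push(move)
        score, _ = minimax(board, depth - 1)
        board.pop()
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

score, move = minimax(chess.Board(), depth=2)
print("suggested move:", move, "score:", round(score, 2))
```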

The story, which described Lai's accomplishment as 'a world first,' explained the layers of Giraffe's neural network: 'The first looks at the global state of the game, such as the number and type of pieces on each side, which side is to move, castling rights and so on.'
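
Extracting those global-state inputs is straightforward to sketch (the encoding below is a simplified guess at the kind of features described, not Giraffe's exact representation), again using python-chess:

```python
import chess

def global_features(board):
    """Sketch of the 'global state' inputs described above: piece counts
    for each side, side to move, and castling rights. Giraffe's actual
    encoding is more elaborate than this."""
    features = []
    for color in (chess.WHITE, chess.BLACK):
        for piece_type in (chess.PAWN, chess.KNIGHT, chess.BISHOP,
                           chess.ROOK, chess.QUEEN):
            features.append(len(board.pieces(piece_type, color)))
    features.append(1 if board.turn == chess.WHITE else 0)
    features.extend([
        int(board.has_kingside_castling_rights(chess.WHITE)),
        int(board.has_queenside_castling_rights(chess.WHITE)),
        int(board.has_kingside_castling_rights(chess.BLACK)),
        int(board.has_queenside_castling_rights(chess.BLACK)),
    ])
    return features

print(global_features(chess.Board()))
```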

Lai shelved Giraffe because he'd simply learned too many trade secrets from working at DeepMind, and any side work he did on the engine would inevitably violate the terms of his contract with the company formerly known as Google.

Academic researchers, and those having just graduated with advanced degrees like Lai, have long been sought out and employed by large businesses with investments in those sectors to further corporate goals.

Sebastian Thrun, the man behind NeuroChess, helped develop Google Street View and worked on driverless cars at the secretive Google X lab after developing an award-winning driverless car design at Stanford.

Intellectual property can become a legal minefield with murky boundaries that sometimes includes stipulations that any advancement in the field ultimately belongs to the company.

There's no guarantee that an updated version of Giraffe, with Lai relegated to using only his own resources, would have given the better engines that rely on brute-force calculation a run for their money. And now he's off doing work with Google DeepMind on projects that will almost certainly shape how we talk and think about machine learning, artificial intelligence, and more going forward.
