
Google's Secretive DeepMind Startup Unveils a "Neural Turing Machine"

One of the great challenges of neuroscience is to understand the short-term working memory in the human brain.

Today, Google’s secretive DeepMind startup, which it bought for $400 million earlier this year, unveils a prototype computer that attempts to mimic some of the properties of the human brain’s short-term working memory.

The cognitive psychologist George Miller was interested in the capacity of the human brain’s working memory and set out to measure it with the help of a large number of students whom he asked to carry out simple memory tasks. His famous conclusion was that working memory can hold roughly seven chunks of information at a time.

In Miller’s experiments, a chunk could be a single digit such as a 4, a single letter such as a q, a single word or a small group of words that together have some specific meaning.

Consider the following sentence: “This book is a thrilling read with a complex plot and lifelike characters.” This sentence consists of around seven chunks of information and is clearly manageable for any ordinary reader.

By contrast, try this sentence: “This book, about the Roman Empire during the first years of Augustus Caesar’s reign at the end of the Roman Republic, describes the events following the bloody Battle of Actium in 31 BC, when the young emperor defeated Mark Antony and Cleopatra by comprehensively outmaneuvering them in a major naval engagement.” This sentence contains at least 20 chunks.

Consider how a computer might parse a simple sentence such as “Mary spoke to John.” In this case, it would assign the role of actor to Mary, the role of action to the words “spoke to,” and the role of receiver of the action to “John.” It is this kind of task that DeepMind’s work addresses, one at which earlier machines have shown only very limited performance.
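To make the role-assignment idea concrete, here is a minimal sketch in Python (the sentence pattern, function name, and role labels are illustrative assumptions, not DeepMind’s code):

```python
# Toy role assignment for a simple subject-verb-object sentence.
# Illustrative sketch only: it assumes the fixed pattern
# "<actor> <action ...> <receiver>".

def assign_roles(tokens):
    """Map a subject-verb-object token list to semantic roles."""
    return {
        "actor": tokens[0],                 # who performs the action
        "action": " ".join(tokens[1:-1]),   # the verb phrase in the middle
        "receiver": tokens[-1],             # who the action is directed at
    }

print(assign_roles(["Mary", "spoke", "to", "John"]))
# {'actor': 'Mary', 'action': 'spoke to', 'receiver': 'John'}
```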

DeepMind’s machine works in a way that resembles a conventional computer: a neural network controller reads from and writes to an external memory. The difference is that the neural network might store more complex patterns of variables representing, for example, the word “Mary.” Since this form of computing differs in an important way from a conventional neural network, Alex Graves and co give it a new name: they call it a Neural Turing Machine, the first of its kind to have been built.
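One ingredient of such a machine, content-based memory reading, can be sketched in a few lines of numpy. The sketch below is an assumption-laden illustration rather than the actual DeepMind implementation: a controller emits a key vector, every memory row is scored against it by cosine similarity, and the read result is a softmax-weighted blend of rows, which keeps the whole operation differentiable and therefore trainable by gradient descent.

```python
import numpy as np

# Minimal sketch of content-based memory reading, one ingredient of a
# Neural Turing Machine. Sizes and variable names are illustrative assumptions.

rng = np.random.default_rng(0)
memory = rng.normal(size=(128, 20))   # 128 memory slots, 20 dimensions each
key = rng.normal(size=20)             # query emitted by the controller network
beta = 5.0                            # sharpness of the attention focus

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

# Score every memory row against the key, then normalise with a softmax.
scores = np.array([cosine(row, key) for row in memory])
weights = np.exp(beta * scores)
weights /= weights.sum()

# The read vector is a soft blend of memory rows rather than a single slot.
read_vector = weights @ memory
print(read_vector.shape)  # (20,)
```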

One test asked the machine to reproduce sequences of random data it had just been shown. For a sequence of length 120, errors begin to creep in, including one in which a single term is duplicated, pushing all of the following terms one step back.

Although the sequences involved are random, it’s not hard to imagine how they might represent more complex ideas such as “Mary” or “spoke to” or “John.” An important point is that the amount of information these sequences contain is variable, like chunks.
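The copy experiment itself is easy to state in code. The following sketch is a guess at the general setup described above (the sequence length, vector width, and the injected duplication error are assumptions): it generates a random binary sequence, corrupts the “copy” by duplicating one step, and counts how many bits end up wrong.

```python
import numpy as np

# Sketch of the copy task: reproduce a sequence of random binary vectors.
# Length and width are illustrative assumptions.

rng = np.random.default_rng(1)
length, width = 120, 8
target = rng.integers(0, 2, size=(length, width))   # sequence to be copied

# Stand-in for a trained model's output; a perfect copy would equal `target`.
# Here one step is duplicated, pushing all later steps back by one,
# to mimic the kind of error described above.
output = np.vstack([target[:50], target[49:49 + (length - 50)]])

bit_errors = int((output != target).sum())
print(f"{bit_errors} bit errors out of {length * width}")
```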

An interesting question that follows from Miller’s early work is this: if our working memory is only capable of handling seven chunks, how do we make sense of complex arguments, in books for example, that consist of thousands or tens of thousands of chunks?

Miller’s answer is that the brain recodes. Having read a later sentence such as “I couldn’t put it down,” our brain automatically knows that “it” means “the book that is a thrilling read with a complex plot and lifelike characters.” It has recoded the seven earlier chunks into a single chunk.
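As a loose analogy only (a toy illustration, not a model of the brain or of the Neural Turing Machine), recoding can be pictured as binding several chunks to a single symbol that later text can refer to:

```python
# Toy picture of "recoding": seven chunks collapse into one symbol
# that stands for all of them. Purely an analogy.

chunks = ["this book", "is", "a thrilling read", "with",
          "a complex plot", "and", "lifelike characters"]

recoded = {"it": " ".join(chunks)}   # one handle now covers seven chunks

# A later sentence can refer to the whole idea with the single chunk "it".
print(recoded["it"])
```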
