AI News

The Neural Network That Remembers

“It’s pretty drinkable, but I wouldn’t mind if this beer was available.” Besides the overpowering bouquet of raspberries in this guy’s beer, this review is remarkable for another reason.

It was produced by a computer program instructed to hallucinate a review for a “fruit/vegetable beer.” Using a powerful artificial-intelligence tool called a recurrent neural network, the software that produced this passage isn’t even programmed to know what words are, much less to obey the rules of English syntax.

The neural network learns proper nouns like “Coors Light” and beer jargon like “lacing” and “snifter.” It learns to spell and to misspell, and to ramble just the right amount.

It knows to describe India pale ales as “hoppy,” stouts as “chocolatey,” and American lagers as “watery.” The neural network also learns more colorful words for lagers that we can’t put in print.

This particular neural network can also run in reverse, taking any review and recognizing the sentiment (star rating) and subject (type of beer).

Envisioning what has since become known as the Turing test, Alan Turing proposed that if a computer could imitate a person so convincingly as to fool a human judge, you could reasonably deem it to be intelligent.

Asimov’s tales, written before the phrase “artificial intelligence” existed, feature cunning robots engaging in conversations, piloting vehicles, and even helping to govern society.

On the more practical side of the field, researchers apply machine learning to various real-world tasks, guided more by experimentation than by mathematical theory.

However, breakthroughs in neural-network research have revolutionized computer vision and natural-language processing, rekindling the imaginations of the public, researchers, and industry.

That’s because, until recently, machine learning was dominated by methods with well-understood theoretical properties, whereas neural-network research relies more on experimentation.

Nevertheless, the capabilities of recurrent neural networks are undeniable and potentially open the door to the kinds of deeply interactive systems people have hoped for—or feared—for generations.

Neurons, as you might recall from high school biology class, are cells that fire off electrical signals or refrain from doing so depending on signals received from the other neurons attached to them.

To determine the intensity of an artificial neuron’s firing or, more properly, its activation, we calculate a weighted sum of the activations of all the neurons that feed into it.
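
To make the arithmetic concrete, here is a minimal Python sketch (hypothetical toy numbers, not code from any system described in the article):

```python
import numpy as np

def neuron_activation(inputs, weights, bias):
    """Weighted sum of incoming activations, squashed by a nonlinearity."""
    z = np.dot(weights, inputs) + bias   # weighted sum of upstream activations
    return np.tanh(z)                    # squashing nonlinearity keeps output in (-1, 1)

# Example: a neuron fed by three upstream neurons.
upstream = np.array([0.5, -1.0, 0.25])
w = np.array([0.8, 0.2, -0.5])
print(neuron_activation(upstream, w, bias=0.1))
```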

To avoid circular dependencies among the neurons entirely and to simplify the computations, we typically arrange the neurons in layers, with each neuron in a layer connected to the neurons in the layer above, making for many more connections than neurons.

The ultimate output of the network—say, a categorization of the input image as depicting a cat, dog, or person—is read from the activations of the artificial neurons in the very top layer.

The hard part is training your neural network to produce something useful, which is to say, tinkering with the (perhaps millions of) weights corresponding to the connections between the artificial neurons.

That is, you repeatedly adjust the connection weights a small amount, bringing the output of the network incrementally closer to the ground truth.

While determining the correct updates to each of the weights can be tricky, we can calculate them efficiently with a well-known technique called backpropagation, which was developed roughly 30 years ago by David Rumelhart, Geoff Hinton, and Ronald Williams.
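
A minimal sketch of that update rule for a single weight (a hypothetical one-neuron toy, not the authors' setup): the chain rule gives the derivative of the error, and we repeatedly step against it.

```python
import numpy as np

# Toy one-neuron "network": y_hat = tanh(w * x). We nudge w to reduce
# the squared error against a ground-truth target, as in gradient descent.
x, target = 1.5, 0.8
w, learning_rate = 0.1, 0.5

for step in range(100):
    y_hat = np.tanh(w * x)
    error = y_hat - target
    # Chain rule (the core of backpropagation): dE/dw = error * tanh'(wx) * x
    grad_w = error * (1 - y_hat**2) * x
    w -= learning_rate * grad_w          # small step against the gradient

print(w, np.tanh(w * x))  # the output is now close to the 0.8 target
```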

Early on, computer scientists built neural networks with just three tiers: the input layer, a single hidden layer, and the output layer.

At step 2, however, the hidden layers also receive activation flowing across time from the corresponding hidden layers from step 1.

Computer scientists have known for decades that recurrent neural networks are powerful tools, capable of performing any well-defined computational procedure.

It’s true, but there’s a huge gap between knowing that your tool can in theory be used to write some desired program and knowing exactly how to construct it.

We won’t go too far into the weeds describing memory cells here, but the basic idea is to provide the network with memory that persists longer than the immediately forgotten activations of simple artificial neurons.

Memory cells give the network a form of medium-term memory, in contrast to the ephemeral activations of a feed-forward net or the long-term knowledge recorded in the settings of the weights.
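
For the curious, here is a minimal numpy sketch of one memory-cell update, using the standard LSTM formulation (assumed here; the article doesn't spell out the equations):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One step of a standard LSTM cell.

    W has shape (4*n, n + m) for hidden size n and input size m.
    The persistent cell state c carries information across many time steps.
    """
    n = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[0*n:1*n])        # input gate: what to write
    f = sigmoid(z[1*n:2*n])        # forget gate: what to keep from c_prev
    o = sigmoid(z[2*n:3*n])        # output gate: what to expose as h
    g = np.tanh(z[3*n:4*n])        # candidate values to write
    c = f * c_prev + i * g         # memory persists unless the forget gate closes
    h = o * np.tanh(c)             # gated, tanh'd cell state becomes the activation
    return h, c

# Example with input size 3 and hidden size 4.
rng = np.random.default_rng(0)
m, n = 3, 4
W, b = rng.normal(0, 0.1, (4*n, n+m)), np.zeros(4*n)
h, c = np.zeros(n), np.zeros(n)
h, c = lstm_step(rng.normal(size=m), h, c, W, b)
```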

In collaboration with David Kale of the University of Southern California and Randall Wetzel of Children’s Hospital Los Angeles, we devised a recurrent neural network that could make diagnoses after processing sequences of observations taken in the hospital’s pediatric intensive-care unit.

The sequences consisted of 13 frequently but irregularly sampled clinical measurements, including heart rate, blood pressure, blood glucose levels, and measures of respiratory function.

The network proved able to recognize diverse conditions such as brain cancer, status asthmaticus (unrelenting asthma attacks), and diabetic ketoacidosis (a serious complication of diabetes where the body produces excess blood acids) with remarkable accuracy.

The promising results from our medical application demonstrate the power of recurrent neural networks to capture the meaningful signal in sequential data.

Success in this context really means getting someone to declare, “There’s no way a computer wrote that!” In this sense, the computer-science community is evaluating recurrent neural networks via a kind of Turing test.

Recently, researchers at Google DeepMind combined reinforcement learning with feed-forward neural networks to create a system that can beat human players at 31 different video games.

The Unreasonable Effectiveness of Recurrent Neural Networks

Within a few dozen minutes of training, my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice-looking descriptions of images that were on the edge of making sense.

We’ll train RNNs to generate text character by character and ponder the question “how is that even possible?” By the way, together with this post I am also releasing code on Github that allows you to train character-level language models based on multi-layer LSTMs.

A few examples may make this more concrete. As you might expect, the sequence regime of operation is much more powerful compared to fixed networks that are doomed from the get-go by a fixed number of computational steps, and hence also much more appealing for those of us who aspire to build more intelligent systems.

You might be thinking that having sequences as inputs or outputs could be relatively rare, but an important point to realize is that even if your inputs/outputs are fixed vectors, it is still possible to use this powerful formalism to process them in a sequential manner.

A recurrent network can also generate images of digits by learning to sequentially add color to a canvas (Gregor et al.). The takeaway is that even if your data is not in the form of sequences, you can still formulate and train powerful models that learn to process it sequentially.

If you’re more comfortable with math notation, we can also write the hidden state update as \( h_t = \tanh ( W_{hh} h_{t-1} + W_{xh} x_t ) \), where tanh is applied elementwise.
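
In code, that update is a couple of matrix multiplications. The following numpy sketch mirrors the step function described in the post (the output projection W_hy is the post's convention; the initialization is added here so the snippet is self-contained):

```python
import numpy as np

class RNN:
    """Vanilla RNN; the hidden state h is carried from one step to the next."""
    def __init__(self, input_size, hidden_size, output_size):
        rng = np.random.default_rng(0)
        self.W_hh = rng.normal(0, 0.01, (hidden_size, hidden_size))
        self.W_xh = rng.normal(0, 0.01, (hidden_size, input_size))
        self.W_hy = rng.normal(0, 0.01, (output_size, hidden_size))
        self.h = np.zeros(hidden_size)

    def step(self, x):
        # h_t = tanh(W_hh h_{t-1} + W_xh x_t), applied elementwise
        self.h = np.tanh(self.W_hh @ self.h + self.W_xh @ x)
        return self.W_hy @ self.h          # output read off the hidden state
```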

We initialize the matrices of the RNN with random numbers, and the bulk of the work during training goes into finding the matrices that give rise to desirable behavior, as measured with some loss function that expresses your preference for what kinds of outputs y you’d like to see in response to your input sequences x.

For example, suppose we train on the sequence “hello”: in the first time step, when the RNN saw the character “h”, it assigned a confidence of 1.0 to the next letter being “h”, 2.2 to the letter “e”, -3.0 to “l”, and 4.1 to “o”.

Since the RNN consists entirely of differentiable operations we can run the backpropagation algorithm (this is just a recursive application of the chain rule from calculus) to figure out in what direction we should adjust every one of its weights to increase the scores of the correct targets.
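
To make that gradient concrete, here's a standard softmax/cross-entropy sketch (not code from the post) using the scores from the example above: the scores become probabilities, and the gradient on the scores is simply the probabilities minus a one-hot target.

```python
import numpy as np

# Scores the RNN assigned for the character after "h" in "hello",
# over the vocabulary ["h", "e", "l", "o"].
scores = np.array([1.0, 2.2, -3.0, 4.1])
target = 1                                  # the correct next character is "e"

probs = np.exp(scores - scores.max())       # softmax (shifted for numerical stability)
probs /= probs.sum()

loss = -np.log(probs[target])               # cross-entropy loss on the correct target

grad_scores = probs.copy()                  # d(loss)/d(scores) = probs - one_hot(target)
grad_scores[target] -= 1.0
# Backpropagation then pushes this gradient through the RNN's weights.
```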

One sample from the RNN trained on Paul Graham’s essays reads: “If you have a different physical investment are become in people who reduced in a startup with the way to argument the acquirer could see them just that you’re also the founders will part of users’ affords that and an alternation to the idea. And if you have to act the big company too.” Okay, clearly the above is unfortunately not going to replace Paul Graham anytime soon, but remember that the RNN had to learn English completely from scratch and with a small dataset (including where you put commas, apostrophes and spaces).

In particular, setting temperature very near zero will give the most likely thing that Paul Graham might say: “is that they were all the same thing that was a startup is that they were all the same thing that was a startup is that they were all the same thing that was a startup is that they were all the same” looks like we’ve reached an infinite loop about startups.
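
Temperature just rescales the scores before the softmax. A minimal sketch (hypothetical function name) shows why near-zero temperature collapses sampling onto the likeliest text:

```python
import numpy as np

def sample_char(scores, temperature=1.0):
    """Sample the next character index from softmax(scores / temperature)."""
    z = scores / temperature
    probs = np.exp(z - z.max())              # shifted softmax for stability
    probs /= probs.sum()
    return np.random.default_rng().choice(len(scores), p=probs)

scores = np.array([1.0, 2.2, -3.0, 4.1])
print(sample_char(scores, temperature=1.0))  # diverse samples
print(sample_char(scores, temperature=0.1))  # near-argmax: loops on the likeliest text
```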

There’s also quite a lot of structured markdown that the model learns; for example, it sometimes creates headings, lists, etc. Sometimes the model snaps into a mode of generating random but valid XML, in which it completely makes up the timestamp, id, and so on.

We had to step in and fix a few issues manually, but then you get plausible-looking math; it’s quite astonishing. Sometimes the model tries to generate LaTeX diagrams, but clearly it hasn’t really figured them out.

This is an example of a problem we’d have to fix manually, and is likely due to the fact that the dependency is too long-term: By the time the model is done with the proof it has forgotten whether it was doing a proof or a lemma.

In particular, I took all the source and header files found in the Linux repo on Github, concatenated all of them in a single giant file (474MB of C code) (I was originally going to train only on the kernel but that by itself is only ~16MB).

This is usually a very amusing part: the model first recites the GNU license character by character, samples a few includes, generates some macros, and then dives into the code. There are too many fun parts to cover; I could probably write an entire blog post on just this part.

Of course, you can imagine this being quite useful inspiration when writing a novel, or naming a new startup :) We saw that the results at the end of training can be impressive, but how does any of this work?

At 300 iterations we see that the model starts to get an idea about quotes and periods; the words are now also separated with spaces, and the model starts putting periods at the ends of sentences.

Longer words have now been learned as well, until at last, by about iteration 2000, we start to get properly spelled words, quotations, names, and so on. The picture that emerges is that the model first discovers the general word-space structure and then rapidly starts to learn the words, first the short ones and then eventually the longer ones.

In the visualizations below we feed a Wikipedia RNN model character data from the validation set (shown along the blue/green rows) and under every character we visualize (in red) the top 5 guesses that the model assigns for the next character.

Think about it as green = very excited and blue = not very excited (for those familiar with details of LSTMs, these are values between [-1,1] in the hidden state vector, which is just the gated and tanh’d LSTM cell state).

Below we’ll look at 4 different ones that I found and thought were interesting or interpretable (many also aren’t). Of course, a lot of these conclusions are slightly hand-wavy as the hidden state of the RNN is a huge, high-dimensional and largely distributed representation.

We can see that in addition to a large portion of cells that do not do anything interpretable, about 5% of them turn out to have learned quite interesting and interpretable algorithms. Again, what is beautiful about this is that we didn’t have to hardcode at any point that if you’re trying to predict the next character it might, for example, be useful to keep track of whether or not you are currently inside or outside of a quote.

I’ve only started working with Torch/LUA over the last few months and it hasn’t been easy (I spent a good amount of time digging through the raw Torch code on Github and asking questions on their gitter to get things done), but once you get the hang of things it offers a lot of flexibility and speed.

Here’s a brief sketch of a few recent developments (definitely not a complete list, and a lot of this work draws from research back to the 1990s; see related work sections): In the domain of NLP/Speech, RNNs transcribe speech to text, perform machine translation, generate handwritten text, and of course, they have been used as powerful language models (Sutskever et al.) (Graves) (Mikolov et al.) (both on the level of characters and words).

For example, we’re seeing RNNs in frame-level video classification, image captioning (also including my own work and many others), video captioning and very recently visual question answering.

My personal favorite RNNs in Computer Vision paper is Recurrent Models of Visual Attention, both due to its high-level direction (sequential processing of images with glances) and the low-level modeling (REINFORCE learning rule that is a special case of policy gradient methods in Reinforcement Learning, which allows one to train models that perform non-differentiable computation (taking glances around the image in this case)).

I’m confident that this type of hybrid model that consists of a blend of CNN for raw perception coupled with an RNN glance policy on top will become pervasive in perception, especially for more complex tasks that go beyond classifying some objects in plain view.

One problem is that RNNs are not inductive: They memorize sequences extremely well, but they don’t necessarily always show convincing signs of generalizing in the correct way (I’ll provide pointers in a bit that make this more concrete).

This paper sketched a path towards models that can perform read/write operations between large, external memory arrays and a smaller set of memory registers (think of these as our working memory) where the computation happens.

Now, I don’t want to dive into too many details but a soft attention scheme for memory addressing is convenient because it keeps the model fully-differentiable, but unfortunately one sacrifices efficiency because everything that can be attended to is attended to (but softly).

Think of this as declaring a pointer in C that doesn’t point to a specific address but instead defines an entire distribution over all addresses in the entire memory, and dereferencing the pointer returns a weighted sum of the pointed content (that would be an expensive operation!).
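
Here is a minimal numpy sketch of such a soft read (hypothetical shapes, not the exact Neural Turing Machine addressing scheme): the “pointer” is a softmax distribution over memory rows, and dereferencing is a weighted sum.

```python
import numpy as np

def soft_read(memory, query):
    """Soft attention over memory: attend to every row, weighted by similarity."""
    scores = memory @ query                    # similarity of each memory row to the query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # a distribution over ALL addresses
    return weights @ memory                    # weighted sum of the pointed-to contents

memory = np.arange(12.0).reshape(4, 3)         # 4 addresses, 3 values each
query = np.array([0.1, 0.0, -0.1])
print(soft_read(memory, query))                # fully differentiable, but touches every row
```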

If you’d like to play with training RNNs I hear good things about keras or passage for Theano, the code released with this post for Torch, or this gist for raw numpy code I wrote a while ago that implements an efficient, batched LSTM forward and backward pass.

Unfortunately, at about 46K characters I haven’t written enough data to properly feed the RNN, but the returned sample (generated with low temperature to get a more typical sample) is, yes, about RNNs and how well they work, so clearly this works :).

Types of artificial neural networks

Artificial neural networks are computational models inspired by biological neural networks, and are used to approximate functions that are generally unknown.

Particularly, they are inspired by the behaviour of neurons and the electrical signals they convey between input (such as from the eyes or nerve endings in the hand), processing, and output from the brain (such as reacting to light, touch, or heat).

Most artificial neural networks bear only some resemblance to their more complex biological counterparts, but are very effective at their intended tasks (e.g., classification or segmentation).

Neural networks can be hardware- (neurons are represented by physical components) or software-based (computer models), and can use a variety of topologies and learning algorithms.

An autoencoder, autoassociator or Diabolo network[5]:19 is similar to the multilayer perceptron (MLP) – with an input layer, an output layer and one or more hidden layers connecting them.

Then, using the PDF of each class, the class probability of a new input is estimated and Bayes’ rule is employed to allocate it to the class with the highest posterior probability.[10]

A time delay neural network (TDNN) is a feedforward architecture for sequential data that recognizes features independent of sequence position.

In order to achieve time-shift invariance, delays are added to the input so that multiple data points (points in time) are analyzed together.
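
A small sketch of that windowing (hypothetical window length, not from the cited papers): each position of the sequence is presented together with its delayed predecessors, so a feature can be detected regardless of when it occurs.

```python
import numpy as np

def delayed_inputs(signal, delays=3):
    """Stack each sample with its delays-1 predecessors (TDNN-style windowing)."""
    windows = [signal[t - delays + 1 : t + 1] for t in range(delays - 1, len(signal))]
    return np.stack(windows)                   # shape: (len(signal)-delays+1, delays)

x = np.array([0.0, 0.2, 0.9, 0.3, 0.1, 0.8])
print(delayed_inputs(x, delays=3))             # each row feeds the feedforward network
```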

It has been implemented using a perceptron network whose connection weights were trained with back propagation (supervised learning).[13]

In a convolutional neural network (CNN, or ConvNet, also called shift invariant or space invariant[14][15]), the unit connectivity pattern is inspired by the organization of the visual cortex.

Regulatory feedback networks started as a model to explain brain phenomena found during recognition including network-wide bursting and difficulty with similarity found universally in sensory recognition.[20]

This approach can also perform mathematically equivalent classification as feedforward methods and is used as a tool to create and modify networks.[21][22]

In regression problems the output layer is a linear combination of hidden layer values representing mean predicted output.

In classification problems the output layer is typically a sigmoid function of a linear combination of hidden layer values, representing a posterior probability.

In classification problems the fixed non-linearity introduced by the sigmoid output function is most efficiently dealt with using iteratively re-weighted least squares.

A common solution is to associate each data point with its own centre, although this can expand the linear system to be solved in the final layer and requires shrinkage techniques to avoid overfitting.

All three approaches use a non-linear kernel function to project the input data into a space where the learning problem can be solved using a linear model.

Alternatively, if 9-NN classification is used and the closest 9 points are considered, then the effect of the surrounding 8 positive points may outweigh the closest (negative) point.

The Euclidean distance is computed from the new point to the center of each neuron, and a radial basis function (RBF) (also called a kernel function) is applied to the distance to compute the weight (influence) for each neuron.
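
As a sketch (hypothetical centers and width, with a Gaussian kernel assumed), the RBF weight for each hidden neuron falls off with Euclidean distance from that neuron’s center:

```python
import numpy as np

def rbf_activations(x, centers, width=1.0):
    """Gaussian RBF: influence decays with Euclidean distance to each center."""
    dists = np.linalg.norm(centers - x, axis=1)  # distance to every neuron's center
    return np.exp(-(dists / width) ** 2)         # kernel applied to the distance

centers = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
x = np.array([0.9, 0.8])
phi = rbf_activations(x, centers)
# The output layer is then a linear combination of these activations (see above).
print(phi)
```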

It determines when to stop adding neurons to the network by monitoring the estimated leave-one-out (LOO) error and terminating when the LOO error begins to increase because of overfitting.

For supervised learning in discrete time settings, training sequences of real-valued input vectors become sequences of activations of the input nodes, one input vector at a time.

At each time step, each non-input unit computes its current activation as a nonlinear function of the weighted sum of the activations of all units from which it receives connections.

For example, if the input sequence is a speech signal corresponding to a spoken digit, the final target output at the end of the sequence may be a label classifying the digit.

To minimize total error, gradient descent can be used to change each weight in proportion to its derivative with respect to the error, provided the non-linear activation functions are differentiable.

A major problem with gradient descent for standard RNN architectures is that error gradients vanish exponentially quickly with the size of the time lag between important events.[32][33]

Instead, a fitness function, reward function, or utility function is occasionally used to evaluate the network’s performance, which influences its input stream through output units connected to actuators that affect the environment.

At each time step, the input is propagated in a standard feedforward fashion, and then a backpropagation-like learning rule is applied (not performing gradient descent).

The fixed back connections leave a copy of the previous values of the hidden units in the context units (since they propagate over the connections before the learning rule is applied).
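
A minimal sketch of that copy-back mechanism (standard Elman-style formulation, assumed here): before each step, the previous hidden activations sit in context units that feed back in as extra inputs.

```python
import numpy as np

def srn_step(x, context, W_in, W_ctx, W_out):
    """Simple recurrent (Elman-style) step with fixed copy-back connections."""
    h = np.tanh(W_in @ x + W_ctx @ context)    # context units act as extra inputs
    y = W_out @ h                              # standard feedforward readout
    return y, h.copy()                         # the copy becomes the next context

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 3, 5, 2
W_in, W_ctx, W_out = (rng.normal(0, 0.1, s) for s in
                      [(n_hid, n_in), (n_hid, n_hid), (n_out, n_hid)])
context = np.zeros(n_hid)
for x in rng.normal(size=(4, n_in)):           # feed a short sequence
    y, context = srn_step(x, context, W_in, W_ctx, W_out)
```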

This realization gave birth to the concept of modular neural networks, in which several small networks cooperate or compete to solve problems.

Because neural networks suffer from local minima, starting with the same architecture and training but using randomly different initial weights often gives vastly different results.[citation needed]

The CoM is similar to the general machine learning bagging method, except that the necessary variety of machines in the committee is obtained by training from different starting weights rather than training on different randomly selected subsets of the training data.

The associative neural network (ASNN) is an extension of committee of machines that combines multiple feedforward neural networks and the k-nearest neighbor technique.

If new data become available, the network instantly improves its predictive ability and provides data approximation (self-learns) without retraining.

Another important feature of ASNN is the possibility to interpret neural network results by analysis of correlations between data cases in the space of models.[44]

SNNs, and the temporal correlations of neural assemblies in such networks, have been used to model figure/ground separation and region linking in the visual system.

The neocognitron uses multiple types of units (originally two, called simple and complex cells) as a cascading model for use in pattern recognition tasks.[49][50][51]

Dynamic neural networks address nonlinear multivariate behaviour and include (learning of) time-dependent behaviour, such as transient phenomena and delay effects.

The Cascade-Correlation architecture has several advantages: It learns quickly, determines its own size and topology, retains the structures it has built even if the training set changes and requires no backpropagation.

Since they are compositions of functions, CPPNs in effect encode images at infinite resolution and can be sampled for a particular display at whatever resolution is optimal.

This is done by creating a specific memory structure, which assigns each new pattern to an orthogonal plane using adjacently connected hierarchical arrays.[57]

HTM is a method for discovering and inferring the high-level causes of observed input patterns and sequences, thus building an increasingly complex model of the world.

HTM combines and extends approaches used in Bayesian networks and in spatial and temporal clustering algorithms, while using a tree-shaped hierarchy of nodes that is common in neural networks.

Learning to learn and compositionality with deep recurrent neural networks

Author: Nando de Freitas, Department of Computer Science, University of Oxford. Abstract: Deep neural network representations play an important role in ...

Neural Network Tries to Generate English Speech (RNN/LSTM)

By popular demand, I threw my own voice into a neural network (3 times) and got it to recreate what it had learned along the way! This is 3 different recurrent ...

Neural Network Learns to Generate Voice (RNN/LSTM)

[VOLUME WARNING] This is what happens when you throw raw audio (which happens to be a cute voice) into a neural network and then tell it to spit out what ...

Recurrent Neural Network - The Math of Intelligence (Week 5)

Recurrent neural networks let us learn from sequential data (time series, music, audio, video frames, etc.). We're going to build one from scratch in numpy ...

Recurrent Neural Network Writes Music and Shakespeare Novels | Two Minute Papers #19

Artificial neural networks are powerful machine learning techniques that can learn to recognize images or paint in the style of Van Gogh. Recurrent neural ...

Computer evolves to generate baroque music!

I put the word "evolve" in there because you guys like "evolution" videos, but this computer is actually learning with gradient descent! All music in this video is ...

Lecture 10 | Recurrent Neural Networks

In Lecture 10 we discuss the use of recurrent neural networks for modeling sequence data. We show how recurrent neural networks can be used for language ...

Gradient descent, how neural networks learn | Chapter 2, deep learning

Part 3 will be on backpropagation.

Prof. Yoshua Bengio - Recurrent Neural Networks (RNNs)

Link to slides: Yoshua Bengio is a Canadian computer scientist, ..

MIT 6.S094: Recurrent Neural Networks for Steering Through Time

This is lecture 4 of course 6.S094: Deep Learning for Self-Driving Cars taught in Winter 2017. Course website: Lecture 4 slides: ..