
Long Short-Term Memory dramatically improves Google Voice etc – now available to a billion users

Long Short-Term Memory (LSTM) recurrent neural networks, based on an architecture initially developed in Juergen Schmidhuber's research groups at the Swiss AI Lab IDSIA and TU Munich, have greatly improved Google Voice (by 49%) and are now available to a billion users.

You can find the recent Google Research Blog post on this, by Haşim Sak, Andrew Senior, Kanishka Rao, Françoise Beaufays and Johan Schalkwyk: http://googleresearch.blogspot.ch/2015/09/google-voice-search-faster-and-more.html
It describes a speech recognition application of "Long Short-Term Memory (LSTM)" Recurrent Neural Networks (since 1997) [1] with "forget gates".

Google is also using LSTMs for numerous other applications, such as state-of-the-art machine translation [4], image caption generation [5], and natural language processing.

Recurrent neural network

A recurrent neural network (RNN) is a class of artificial neural network where connections between units form a directed graph along a sequence.

This makes them applicable to tasks such as unsegmented, connected handwriting recognition[1] or speech recognition.[2][3] The term "recurrent neural network" is used somewhat indiscriminately to refer to two broad classes of networks with a similar general structure, where one is finite impulse and the other is infinite impulse.

Both classes of networks exhibit temporal dynamic behavior.[4] A finite impulse recurrent network is a directed acyclic graph that can be unrolled and replaced with a strictly feedforward neural network, while an infinite impulse recurrent network is a directed cyclic graph that cannot be unrolled.
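
To make the finite-impulse case concrete, here is a small NumPy sketch (my own illustration, not code from the article) showing that a recurrent update applied over a fixed-length sequence is equivalent to an unrolled, strictly feedforward computation with tied weights; all sizes and weight names are illustrative.

```python
# Minimal sketch: unrolling the recurrent step h_t = tanh(W_h @ h_{t-1} + W_x @ x_t)
# over a fixed-length input shows how a finite-impulse RNN is equivalent to a
# strictly feedforward computation graph with tied weights.
import numpy as np

rng = np.random.default_rng(0)
W_h = rng.normal(scale=0.5, size=(4, 4))   # recurrent weights (illustrative sizes)
W_x = rng.normal(scale=0.5, size=(4, 3))   # input weights
xs = rng.normal(size=(5, 3))               # a sequence of 5 input vectors

# Recurrent view: one loop, one shared set of weights.
h = np.zeros(4)
for x in xs:
    h = np.tanh(W_h @ h + W_x @ x)

# Unrolled (feedforward) view: five explicit layers with tied weights.
h0 = np.zeros(4)
h1 = np.tanh(W_h @ h0 + W_x @ xs[0])
h2 = np.tanh(W_h @ h1 + W_x @ xs[1])
h3 = np.tanh(W_h @ h2 + W_x @ xs[2])
h4 = np.tanh(W_h @ h3 + W_x @ xs[3])
h5 = np.tanh(W_h @ h4 + W_x @ xs[4])

assert np.allclose(h, h5)  # both views compute the same hidden state
```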

In 1993, a neural history compressor system solved a 'Very Deep Learning' task that required more than 1000 subsequent layers in an RNN unfolded in time.[5] Long short-term memory (LSTM) networks were invented by Hochreiter and Schmidhuber in 1997 and set accuracy records in multiple application domains.[6] Around 2007, LSTM started to revolutionize speech recognition, outperforming traditional models in certain speech applications.[7] In 2009, a Connectionist Temporal Classification (CTC)-trained LSTM network was the first RNN to win pattern recognition contests when it won several competitions in connected handwriting recognition.[8][9]

In 2014, the Chinese search giant Baidu used CTC-trained RNNs to break the Switchboard Hub5'00 speech recognition benchmark without using any traditional speech processing methods.[10] LSTM also improved large-vocabulary speech recognition[2][3] and text-to-speech synthesis[11] and was used in Google Android.[8][12] In 2015, Google's speech recognition reportedly experienced a dramatic performance jump of 49% through CTC-trained LSTM, which was used by Google voice search.[13] LSTM broke records for improved machine translation,[14] language modeling[15] and multilingual language processing.[16] LSTM combined with convolutional neural networks (CNNs) improved automatic image captioning.[17]

RNNs come in many variants.

A bidirectional associative memory (BAM) network has two layers, either of which can be driven as an input to recall an association and produce an output on the other layer.[25] An Elman network is a three-layer network (input, hidden and output layers) with the addition of a set of 'context units' that feed the previous hidden-layer activations back into the hidden layer.
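
Below is a minimal NumPy sketch of one Elman step, assuming the usual formulation in which the context units simply hold a copy of the previous hidden activations; the function and parameter names are my own illustrative choices, not taken from the article.

```python
# Minimal sketch of one step of an Elman network: the "context units" hold a copy
# of the previous hidden layer and feed it back in alongside the new input.
import numpy as np

def elman_step(x, context, W_xh, W_ch, b_h, W_hy, b_y):
    """One Elman step: hidden from input + context, output from hidden."""
    h = np.tanh(W_xh @ x + W_ch @ context + b_h)   # hidden layer
    y = W_hy @ h + b_y                             # output layer
    return y, h                                    # h becomes the next context

rng = np.random.default_rng(1)
n_in, n_hidden, n_out = 3, 5, 2
params = (rng.normal(size=(n_hidden, n_in)),       # W_xh
          rng.normal(size=(n_hidden, n_hidden)),   # W_ch
          np.zeros(n_hidden),                      # b_h
          rng.normal(size=(n_out, n_hidden)),      # W_hy
          np.zeros(n_out))                         # b_y

context = np.zeros(n_hidden)                       # context units start at zero
for x in rng.normal(size=(4, n_in)):               # a short input sequence
    y, context = elman_step(x, context, *params)
```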

The neural history compressor system effectively minimizes the description length, or the negative logarithm of the probability of the data.[32] Given a lot of learnable predictability in the incoming data sequence, the highest-level RNN can use supervised learning to easily classify even deep sequences with long intervals between important events.

It is possible to distill the RNN hierarchy into two RNNs: the 'conscious' chunker (higher level) and the 'subconscious' automatizer (lower level).[31] Once the chunker has learned to predict and compress inputs that are unpredictable by the automatizer, then the automatizer can be forced in the next learning phase to predict or imitate through additional units the hidden units of the more slowly changing chunker.

Gated recurrent units (GRUs) are used in the full form and in several simplified variants.[42][43] Their performance on polyphonic music modeling and speech signal modeling was found to be similar to that of long short-term memory.[44] They have fewer parameters than LSTM, as they lack an output gate.[45] Bi-directional RNNs use a finite sequence to predict or label each element of the sequence based on the element's past and future contexts.
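
As an illustration of the gating structure, here is a sketch of a fully gated GRU cell in NumPy, following the common formulation (update gate, reset gate, candidate state, and no output gate); the parameter names and sizes are assumptions for the example, not code from the article.

```python
# Minimal sketch of a standard (fully gated) GRU cell in NumPy.
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def gru_step(x, h, p):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    z = sigmoid(p["Wz"] @ x + p["Uz"] @ h + p["bz"])        # update gate
    r = sigmoid(p["Wr"] @ x + p["Ur"] @ h + p["br"])        # reset gate
    h_tilde = np.tanh(p["Wh"] @ x + p["Uh"] @ (r * h) + p["bh"])
    return (1.0 - z) * h + z * h_tilde                      # note: no output gate

rng = np.random.default_rng(2)
n_in, n_hidden = 3, 4
p = {k: rng.normal(scale=0.5, size=(n_hidden, n_in if k[0] == "W" else n_hidden))
     for k in ("Wz", "Wr", "Wh", "Uz", "Ur", "Uh")}
p.update({k: np.zeros(n_hidden) for k in ("bz", "br", "bh")})

h = np.zeros(n_hidden)
for x in rng.normal(size=(5, n_in)):
    h = gru_step(x, h, p)
```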

A continuous-time recurrent neural network (CTRNN) uses a system of ordinary differential equations to model the effects of incoming inputs on a neuron. For a neuron i with activation y_i, the rate of change of activation is given by:

τ_i · dy_i/dt = −y_i + Σ_j w_ji · σ(y_j − Θ_j) + I_i(t)

where τ_i is the time constant of the postsynaptic node, y_i is its activation, w_ji is the weight of the connection from the presynaptic node j to the postsynaptic node i, σ is the sigmoid function, y_j is the activation of the presynaptic node, Θ_j is the bias of the presynaptic node, and I_i(t) is the external input (if any) to the node. CTRNNs have been applied to evolutionary robotics where they have been used to address vision,[48] co-operation,[49] and minimal cognitive behaviour.[50] Note that, by the Shannon sampling theorem, discrete-time recurrent neural networks can be viewed as continuous-time recurrent neural networks where the differential equations have been transformed into equivalent difference equations.
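
The following is a rough sketch of how the continuous-time dynamics above can be turned into a difference equation by simple Euler integration; the network size, parameters, input signal and step size are all illustrative assumptions.

```python
# Minimal sketch: Euler discretization of the CTRNN equation above,
#   tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j - theta_j) + I_i(t)
import numpy as np

def sigma(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(3)
n = 4
tau = np.full(n, 0.5)                    # time constants
W = rng.normal(size=(n, n))              # W[i, j] = weight from node j to node i
theta = np.zeros(n)                      # biases
dt = 0.01                                # integration step

y = np.zeros(n)
for step in range(1000):
    I = np.array([np.sin(0.05 * step), 0.0, 0.0, 0.0])  # external input to node 0
    dy = (-y + W @ sigma(y - theta) + I) / tau
    y = y + dt * dy                      # difference-equation update
```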

A multiple timescales recurrent neural network (MTRNN) is a neural-based computational model that can simulate the functional hierarchy of the brain through self-organization that depends on the spatial connections between neurons and on distinct types of neuron activities, each with distinct time properties.[53][54] With such varied neuronal activities, continuous sequences of any set of behaviors are segmented into reusable primitives, which in turn are flexibly integrated into diverse sequential behaviors.

The biological plausibility of such a hierarchy was discussed in the memory-prediction theory of brain function by Hawkins in his book On Intelligence. Neural Turing machines (NTMs) are a method of extending recurrent neural networks by coupling them to external memory resources with which they can interact through attentional processes.

The combined system is analogous to a Turing machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent.[55] Differentiable neural computers (DNCs) are an extension of Neural Turing machines, allowing for usage of fuzzy amounts of each memory address and a record of chronology.
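
The following NumPy fragment sketches content-based addressing of an external memory in the NTM spirit: a read key is compared against every memory row by cosine similarity, a sharpened softmax yields differentiable attention weights, and the read vector is their weighted blend. It is a simplified illustration, not the published controller; all names and sizes are my assumptions.

```python
# Simplified sketch of NTM-style content-based addressing (not the full model).
import numpy as np

def content_read(memory, key, beta=5.0):
    """memory: (N, M) matrix of N slots; key: (M,) query; beta: focus sharpness."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    cos_sim = memory @ key / norms                 # cosine similarity per slot
    w = np.exp(beta * cos_sim)
    w = w / w.sum()                                # differentiable soft attention weights
    return w @ memory, w                           # read vector and weights

memory = np.eye(4, 6)                              # toy 4-slot, 6-wide memory matrix
read_vec, weights = content_read(memory, key=np.array([1, 0, 0, 0, 0, 0.0]))
```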

Biological neural networks appear to be local with respect to both time and space.[61][62] For recursively computing the partial derivatives, RTRL has a time complexity of O(number of hidden units × number of weights) per time step for computing the Jacobian matrices, while BPTT only takes O(number of weights) per time step, at the cost of storing all forward activations within the given time horizon.[63] An online hybrid between BPTT and RTRL with intermediate complexity exists,[64][65] along with variants for continuous time.[66]

A major problem with gradient descent for standard RNN architectures is that error gradients vanish exponentially quickly with the size of the time lag between important events.[33][67] LSTM combined with a BPTT/RTRL hybrid learning method attempts to overcome these problems.[6] The online algorithm called causal recursive backpropagation (CRBP) implements and combines the BPTT and RTRL paradigms for locally recurrent networks.[68] It works with the most general locally recurrent networks.
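
A toy calculation (my own illustration, not from the article) makes the exponential decay concrete: in a one-dimensional linear RNN h_t = w · h_{t-1}, the back-propagated sensitivity ∂h_T/∂h_0 equals w^T, so it shrinks geometrically whenever |w| < 1 and explodes whenever |w| > 1.

```python
# Toy illustration of vanishing/exploding gradients in a 1-D linear RNN:
# the BPTT sensitivity dh_T/dh_0 is simply w**T.
for w in (0.5, 0.9, 1.1):
    for T in (10, 50, 100):
        print(f"w={w:<4} T={T:<4} dh_T/dh_0 = {w ** T:.3e}")
```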

One approach to computing gradient information in RNNs with arbitrary architectures is based on the diagrammatic derivation of signal-flow graphs.[69] It uses the BPTT batch algorithm, based on Lee's theorem for network sensitivity calculations.[70] It was proposed by Wan and Beaufays, while its fast online version was proposed by Campolucci, Uncini and Piazza.[70] Training the weights in a neural network can be modeled as a non-linear global optimization problem.

The most common global optimization method for training RNNs is the genetic algorithm, especially in unstructured networks.[71][72][73] Initially, the neural network weights are encoded into the genetic algorithm in a predefined manner, where one gene in the chromosome represents one weight link. The whole network is represented as a single chromosome.
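
Here is a minimal sketch of that encoding (illustrative only; layer shapes and names are my assumptions): the RNN's weight matrices are flattened into one chromosome with one gene per weight link, which a genetic algorithm can then mutate, recombine and decode back into matrices for fitness evaluation.

```python
# Sketch of the chromosome encoding described above: one gene per weight link,
# the whole network as a single flat vector.
import numpy as np

shapes = {"W_x": (4, 3), "W_h": (4, 4), "W_y": (2, 4)}   # assumed layer sizes

def decode(chromosome, shapes):
    """Slice a flat gene vector back into named weight matrices."""
    weights, i = {}, 0
    for name, shape in shapes.items():
        size = shape[0] * shape[1]
        weights[name] = chromosome[i:i + size].reshape(shape)
        i += size
    return weights

n_genes = sum(r * c for r, c in shapes.values())
rng = np.random.default_rng(4)
population = rng.normal(size=(20, n_genes))                     # 20 candidate networks
mutated = population + 0.1 * rng.normal(size=population.shape)  # simple mutation step
weights = decode(population[0], shapes)                         # ready for fitness evaluation
```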

Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the previous time step and a hidden representation into the representation for the current time step.

Long short-term memory

Long short-term memory (LSTM) units (or blocks) are a building unit for layers of a recurrent neural network (RNN).

A common LSTM unit is composed of a cell, an input gate, an output gate and a forget gate.

Each of the three gates can be thought of as a 'conventional' artificial neuron, as in a multi-layer (or feedforward) neural network: that is, they compute an activation (using an activation function) of a weighted sum.

The expression "long short-term" refers to the fact that LSTM is a model of short-term memory that can last for a long period of time.

An LSTM is well-suited to classify, process and predict time series given time lags of unknown size and duration between important events.

Relative insensitivity to gap length gives LSTM an advantage over alternative RNNs, hidden Markov models and other sequence learning methods in numerous applications.

LSTM was proposed in 1997 by Sepp Hochreiter and Jürgen Schmidhuber[1] and improved in 2000 by Felix Gers' team.[2] Among other successes, LSTM achieved record results in natural language text compression,[3] unsegmented connected handwriting recognition[4] and won the ICDAR handwriting competition (2009).

LSTM networks were a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset (2013).[5] As of 2016, major technology companies including Google, Apple, and Microsoft were using LSTM as fundamental components in new products.[6] For example, Google used LSTM for speech recognition on smartphones,[7][8] for the smart assistant Allo[9] and for Google Translate.[10][11] Apple uses LSTM for the 'QuickType' function on the iPhone[12][13] and for Siri.[14] Amazon uses LSTM for Amazon Alexa.[15] In 2017 Microsoft reported reaching 95.1% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words.

A common architecture is composed of a memory cell, an input gate, an output gate and a forget gate.

The cell can retain values over arbitrary time intervals; in this way, when an LSTM network (that is, an RNN composed of LSTM units) is trained with backpropagation through time, the gradient does not tend to vanish.

Intuitively, the input gate controls the extent to which a new value flows into the cell, the forget gate controls the extent to which a value remains in the cell and the output gate controls the extent to which the value in the cell is used to compute the output activation of the LSTM unit.

The weights of these connections, which are learned during training, determine how the gates operate.

Some variants additionally include so-called peephole connections from the cell to the gates (such a unit is called a peephole LSTM).[17][18] Peephole connections allow the gates to access the constant error carousel (CEC), whose activation is the cell state.[20]
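
To make the unit concrete, here is a sketch of a standard LSTM cell (peephole connections omitted) in NumPy, following the common formulation in which each gate is a sigmoid of a weighted sum of the current input and the previous output; parameter names and sizes are illustrative assumptions, not the article's code.

```python
# Minimal sketch of a standard LSTM unit (no peephole connections).
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def lstm_step(x, h, c, p):
    """One LSTM step: each gate is a sigmoid of a weighted sum, as described above."""
    i = sigmoid(p["Wi"] @ x + p["Ui"] @ h + p["bi"])   # input gate: how much new value flows in
    f = sigmoid(p["Wf"] @ x + p["Uf"] @ h + p["bf"])   # forget gate: how much old value remains
    o = sigmoid(p["Wo"] @ x + p["Uo"] @ h + p["bo"])   # output gate: how much the cell is exposed
    c_tilde = np.tanh(p["Wc"] @ x + p["Uc"] @ h + p["bc"])
    c_new = f * c + i * c_tilde                        # cell state (the CEC)
    h_new = o * np.tanh(c_new)                         # unit output activation
    return h_new, c_new

rng = np.random.default_rng(5)
n_in, n_hidden = 3, 4
p = {f"W{g}": rng.normal(scale=0.5, size=(n_hidden, n_in)) for g in "ifoc"}
p.update({f"U{g}": rng.normal(scale=0.5, size=(n_hidden, n_hidden)) for g in "ifoc"})
p.update({f"b{g}": np.zeros(n_hidden) for g in "ifoc"})

h, c = np.zeros(n_hidden), np.zeros(n_hidden)
for x in rng.normal(size=(6, n_in)):
    h, c = lstm_step(x, h, c, p)
```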

To minimize LSTM's total error on a set of training sequences, iterative gradient descent such as backpropagation through time can be used to change each weight in proportion to its derivative with respect to the error.

A problem with using gradient descent for standard RNNs is that error gradients vanish exponentially quickly with the size of the time lag between important events.

This is because lim_{n→∞} W^n = 0 if the spectral radius of W (the recurrent weight matrix) is smaller than 1.[22][23] With LSTM units, however, when error values are back-propagated from the output, the error remains in the unit's memory.
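
A small numeric toy (my own illustration) of the statement above: powers of a recurrent weight matrix decay toward zero when its spectral radius is below 1 and blow up when it is above 1.

```python
# Toy demo: norms of matrix powers vanish below spectral radius 1 and explode above it.
import numpy as np

rng = np.random.default_rng(6)
W = rng.normal(size=(8, 8))
W = W / np.abs(np.linalg.eigvals(W)).max()          # rescale to spectral radius exactly 1

for scale in (0.9, 1.1):
    M = scale * W
    print(f"spectral radius {scale}:",
          [f"{np.linalg.norm(np.linalg.matrix_power(M, n)):.2e}" for n in (10, 50, 100)])
```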

LSTM can also be trained by a combination of artificial evolution for weights to the hidden units, and pseudo-inverse or support vector machines for weights to the output units.[24] In reinforcement learning applications LSTM can be trained by policy gradient methods, evolution strategies or genetic algorithms.

Many applications use stacks of LSTM RNNs[25] and train them by connectionist temporal classification (CTC)[26] to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences.
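
Below is a rough sketch of such a setup using PyTorch's nn.LSTM and nn.CTCLoss; the choice of framework, the feature and label sizes, and the random data are my assumptions for illustration, not details from the article.

```python
# Sketch of a stacked LSTM whose per-frame outputs are trained with
# connectionist temporal classification (CTC); minimizing the CTC loss
# maximizes the probability of the label sequences given the inputs.
import torch
import torch.nn as nn

n_features, n_hidden, n_classes = 40, 128, 30   # 30 labels incl. the CTC blank (index 0)
T, batch, target_len = 100, 4, 20               # frames, sequences, label length

lstm = nn.LSTM(n_features, n_hidden, num_layers=2)   # stacked LSTM RNN
proj = nn.Linear(n_hidden, n_classes)                # per-frame label scores
ctc = nn.CTCLoss(blank=0)

x = torch.randn(T, batch, n_features)                # (time, batch, features)
targets = torch.randint(1, n_classes, (batch, target_len))
input_lengths = torch.full((batch,), T, dtype=torch.long)
target_lengths = torch.full((batch,), target_len, dtype=torch.long)

out, _ = lstm(x)                                     # (T, batch, n_hidden)
log_probs = proj(out).log_softmax(dim=-1)            # (T, batch, n_classes)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                                      # gradients for the RNN weight matrices
```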

Applications of LSTM include speech recognition, handwriting recognition, machine translation, language modeling and image captioning, as discussed above. LSTM is Turing complete in the sense that, given enough network units, it can compute any result that a conventional computer can compute, provided it has the proper weight matrix, which may be viewed as its program.

Deep Learning RNNaissance with Dr. Juergen Schmidhuber

Machine learning and pattern recognition are currently being revolutionised by "Deep Learning" (DL) Neural Networks (NNs). This is of commercial interest (for example,...

Deep Learning RNNaissance

On Thursday, August 14, 2014, Professor Jürgen Schmidhuber of IDSIA in Switzerland spoke about his team's work on deep learning and recurrent neural networks. Read the full abstract, bio, and talk...

AI: Big Expectations (Jürgen Schmidhuber, President at IDSIA) | DLD16

Jürgen Schmidhuber will provide a comprehensive framework for thinking about artificial intelligence (AI). He will review past developments in artificial intelligence, and discuss its current...

Deep Learning of Representations

Google Tech Talk 11/13/2012 Presented by Yoshua Bengio ABSTRACT Yoshua Bengio will give an introduction to the area of Deep Learning, to which he has been one of the leading contributors....

Vladimir Vapnik (Columbia University and Facebook): Intelligent Mechanisms of Learning

Southern California Machine Learning Symposium, May 20, 2016.