AI News, Machine Learning FAQ

Machine Learning FAQ

Let’s assume we are really into mountain climbing, and to add a little extra challenge, we cover our eyes this time so that we can’t see where we are and when we have accomplished our “objective,” that is, reaching the top of the mountain.

We approach this challenge by iteratively “feeling” around us and taking a step in the direction of the steepest ascent – let’s call it “gradient ascent.” But what do we do if we reach a point where we can’t ascend any further?

However, this is not specific to backpropagation; it is just one way to minimize a convex cost function (where there is only a global minimum) or a non-convex cost function (which has local minima, like the “plateaus” that make us think we have reached the mountain’s top).

And in backpropagation, we “simply” backpropagate the error (the “cost” that we compute by comparing the calculated output with the known, correct target output), which we then use to update the model parameters.
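As a minimal sketch of this blindfolded “feel around and step” procedure (assuming a differentiable altitude function whose gradient we can query; all names here are illustrative, not from the text):

```python
# Minimal sketch of one gradient-ascent step (illustrative only).
def gradient_ascent_step(position, grad_f, step_size=0.1):
    # "Feel around": the gradient of the altitude function points in the
    # direction of steepest ascent at the current position.
    gradient = grad_f(position)
    # Take a small step uphill; flipping the sign gives gradient *descent*.
    return position + step_size * gradient
```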


Backpropagation

Backpropagation is a method used in artificial neural networks to calculate a gradient that is needed in the calculation of the weights to be used in the network.[1] It is commonly used to train deep neural networks,[2] a term referring to neural networks with more than one hidden layer.[3] Backpropagation is a special case of an older and more general technique called automatic differentiation.

In the context of learning, backpropagation is commonly used by the gradient descent optimization algorithm to adjust the weight of neurons by calculating the gradient of the loss function.

This technique is also sometimes called backward propagation of errors, because the error is calculated at the output and distributed back through the network layers.

The backpropagation algorithm has been repeatedly rediscovered and is equivalent to automatic differentiation in reverse accumulation mode.

Backpropagation requires the derivative of the loss function with respect to the network output to be known, which typically (but not necessarily) means that a desired target value is known.

For this reason it is considered to be a supervised learning method, although it is used in some unsupervised networks such as autoencoders.

Backpropagation is also a generalization of the delta rule to multi-layered feedforward networks, made possible by using the chain rule to iteratively compute gradients for each layer.

It is closely related to the Gauss–Newton algorithm, and is part of continuing research in neural backpropagation.

Backpropagation can be used with any gradient-based optimizer, such as L-BFGS or truncated Newton.

The goal of any supervised learning algorithm is to find a function that best maps a set of inputs to their correct output.

An example would be a classification task, where the input is an image of an animal, and the correct output is the name of the animal.

The motivation for backpropagation is to train a multi-layered neural network such that it can learn the appropriate internal representations to allow it to learn any arbitrary mapping of input to output.[4] Sometimes referred to as the cost function or error function (not to be confused with the Gauss error function), the loss function is a function that maps values of one or more variables onto a real number intuitively representing some 'cost' associated with those values.

For backpropagation, the loss function calculates the difference between the network output and its expected output, after a case propagates through the network.

Two assumptions must be made about the form of the error function.[5] The first is that it can be written as an average

$$E = \frac{1}{n}\sum_{x} E_{x}$$

over error functions $E_{x}$, for $n$ individual training examples, $x$.

The reason for this assumption is that the backpropagation algorithm calculates the gradient of the error function for a single training example, which needs to be generalized to the overall error function.

The second assumption is that it can be written as a function of the outputs from the neural network.

Let $y, y'$ be vectors in $\mathbb{R}^{n}$. Select an error function $E(y, y')$ measuring the difference between two outputs.

The standard choice is the square of the Euclidean distance between the vectors $y$ and $y'$:

$$E(y, y') = \tfrac{1}{2}\lVert y - y' \rVert^{2}$$

Note that the factor of $\tfrac{1}{2}$ conveniently cancels the exponent when the error function is subsequently differentiated.

The error function over $n$ training examples can simply be written as an average of losses over individual examples:

$$E = \frac{1}{2n}\sum_{x}\lVert y(x) - y'(x) \rVert^{2}$$

and therefore the partial derivative with respect to the outputs is:

$$\frac{\partial E}{\partial y'} = y' - y$$
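As a small numerical illustration of this loss and its derivative (a NumPy sketch; the function names are illustrative, not from the article):

```python
import numpy as np

def squared_error(y, y_prime):
    # E(y, y') = 1/2 * ||y - y'||^2
    return 0.5 * np.sum((y - y_prime) ** 2)

def squared_error_grad(y, y_prime):
    # dE/dy' = y' - y  (the 1/2 cancels the exponent when differentiating)
    return y_prime - y

y = np.array([1.0, 0.0])        # desired (target) output
y_prime = np.array([0.8, 0.3])  # actual network output
print(squared_error(y, y_prime))       # 0.065
print(squared_error_grad(y, y_prime))  # [-0.2  0.3]
```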

The optimization algorithm repeats a two phase cycle, propagation and weight update.

When an input vector is presented to the network, it is propagated forward through the network, layer by layer, until it reaches the output layer.

The output of the network is then compared to the desired output, using a loss function.

The resulting error value is calculated for each of the neurons in the output layer.

The error values are then propagated from the output back through the network, until each neuron has an associated error value that reflects its contribution to the original output.

Backpropagation uses these error values to calculate the gradient of the loss function.

In the second phase, this gradient is fed to the optimization method, which in turn uses it to update the weights, in an attempt to minimize the loss function.

Let $N$ be a neural network with $e$ connections, $m$ inputs, and $n$ outputs. Below, $x_{1}, x_{2}, \dots$ will denote vectors in $\mathbb{R}^{m}$, $y_{1}, y_{2}, \dots$ vectors in $\mathbb{R}^{n}$, and $w_{0}, w_{1}, w_{2}, \ldots$ vectors in $\mathbb{R}^{e}$. These are called inputs, outputs and weights respectively.

The neural network corresponds to a function $y = f_{N}(w, x)$ which, given a weight $w$, maps an input $x$ to an output $y$.

The optimization takes as input a sequence of training examples $(x_{1}, y_{1}), \dots, (x_{p}, y_{p})$ and produces a sequence of weights $w_{0}, w_{1}, \dots, w_{p}$, starting from some initial weight $w_{0}$, usually chosen at random. These weights are computed in turn: first compute $w_{i}$ using only $(x_{i}, y_{i}, w_{i-1})$ for $i = 1, \dots, p$. The output of the algorithm is then $w_{p}$, giving us a new function $x \mapsto f_{N}(w_{p}, x)$.

The computation is the same in each step, hence only the case $i = 1$ is described. Calculating $w_{1}$ from $(x_{1}, y_{1}, w_{0})$ is done by considering a variable weight $w$ and applying gradient descent to the function $w \mapsto E(f_{N}(w, x_{1}), y_{1})$ to find a local minimum, starting at $w = w_{0}$. This makes $w_{1}$ the minimizing weight found by gradient descent.

To implement the algorithm above, explicit formulas are required for the gradient of the function $w \mapsto E(f_{N}(w, x), y)$, where $E$ is the error function defined above.
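The weight sequence described above can be sketched as a simple loop. This is only an illustration: it assumes a helper grad_E (the gradient of the error with respect to the weights, for the network f_N) is available, and a single gradient step per example stands in for a full gradient-descent run:

```python
def train(examples, w0, grad_E, eta=0.01):
    # examples: a sequence of training pairs (x_i, y_i)
    # w0: initial weight vector, usually chosen at random
    # grad_E(w, x, y): assumed helper returning the gradient of E(f_N(w, x), y) w.r.t. w
    w = w0
    for x_i, y_i in examples:
        # Compute w_i from (x_i, y_i, w_{i-1}) by a gradient-descent step
        # on the function w -> E(f_N(w, x_i), y_i).
        w = w - eta * grad_E(w, x_i, y_i)
    return w  # w_p; the learned function is x -> f_N(w_p, x)
```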

The learning algorithm can be divided into two phases: propagation and weight update.

Each propagation involves a forward pass, in which the input is propagated through the network to produce an output, and a backward pass, in which error values are propagated back through the layers. For each weight, a ratio (percentage) of the error gradient is then subtracted from the weight; this ratio influences the speed and quality of learning and is called the learning rate.

The greater the ratio, the faster the neuron trains, but the lower the ratio, the more accurate the training is.

The sign of the gradient of a weight indicates whether the error varies directly with, or inversely to, the weight.

Therefore, the weight must be updated in the opposite direction, 'descending' the gradient.

Learning is repeated (on new batches) until the network performs adequately.

The following is pseudocode for a stochastic gradient descent algorithm for training a three-layer network (only one hidden layer); the lines labeled 'backward pass' can be implemented using the backpropagation algorithm, which calculates the gradient of the error of the network with respect to the network's modifiable weights.[6]

To understand the mathematical derivation of the backpropagation algorithm, it helps to first develop some intuitions about the relationship between the actual output of a neuron and the correct output for a particular training case.
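A minimal NumPy sketch of such a loop, standing in for the pseudocode referred to above (this is an illustration, not the article's original pseudocode; sigmoid activations, squared error, and no bias terms are assumed):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_three_layer(X, T, n_hidden=5, eta=0.5, epochs=5000, seed=0):
    """Stochastic gradient descent for a network with one hidden layer."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))  # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=(n_hidden, T.shape[1]))  # hidden -> output weights
    for _ in range(epochs):
        for x, t in zip(X, T):
            # forward pass
            h = sigmoid(x @ W1)              # hidden activations
            y = sigmoid(h @ W2)              # network output
            # backward pass: deltas for squared error E = 1/2 * (t - y)^2
            delta_out = (y - t) * y * (1 - y)
            delta_hid = (delta_out @ W2.T) * h * (1 - h)
            # weight update (gradient descent)
            W2 -= eta * np.outer(h, delta_out)
            W1 -= eta * np.outer(x, delta_hid)
    return W1, W2

# Hypothetical usage on the XOR truth table (illustrative; convergence depends
# on the random initialization and hyperparameters, since biases are omitted):
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train_three_layer(X, T)
```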

Consider a simple neural network with two input units, one output unit and no hidden units.

Each neuron uses a linear output[note 1] that is the weighted sum of its inputs.

Initially, before training, the weights will be set randomly.

Then the neuron learns from training examples, which in this case consist of a set of tuples $(x_{1}, x_{2}, t)$ where $x_{1}$ and $x_{2}$ are the inputs to the network and $t$ is the correct output (the output the network should eventually produce given those inputs).

The initial network, given $x_{1}$ and $x_{2}$, will compute an output $y$ that likely differs from $t$ (given random weights).

A common method for measuring the discrepancy between the expected output $t$ and the actual output $y$ is the squared error measure $E = (t - y)^{2}$, where $E$ is the discrepancy or error.

As an example, consider the network on a single training case: $(1, 1, 0)$; thus the inputs $x_{1}$ and $x_{2}$ are 1 and 1 respectively, and the correct output $t$ is 0.

Now if the actual output y is plotted on the horizontal axis against the error E on the vertical axis, the result is a parabola.

The minimum of the parabola corresponds to the output y which minimizes the error E.

For a single training case, the minimum also touches the horizontal axis, which means the error will be zero and the network can produce an output y that exactly matches the expected output t.

Therefore, the problem of mapping inputs to outputs can be reduced to an optimization problem of finding a function that will produce the minimal error.

However, the output of a neuron depends on the weighted sum of all its inputs, $y = x_{1} w_{1} + x_{2} w_{2}$, where $w_{1}$ and $w_{2}$ are the weights on the connections from the input units to the output unit.

Therefore, the error also depends on the incoming weights to the neuron, which is ultimately what needs to be changed in the network to enable learning.

If each weight is plotted on a separate horizontal axis and the error on the vertical axis, the result is a parabolic bowl.

For a neuron with $k$ weights, the same plot would require an elliptic paraboloid of $k + 1$ dimensions.

One commonly used algorithm to find the set of weights that minimizes the error is gradient descent.

Backpropagation is then used to calculate the steepest descent direction.

The gradient descent method involves calculating the derivative of the squared error function with respect to the weights of the network.

This is normally done using backpropagation.

Assuming one output neuron,[note 2] the squared error function is

$$E = \tfrac{1}{2}(t - y)^{2},$$

where $t$ is the target output and $y$ is the actual output of the output neuron. The factor of $\tfrac{1}{2}$ is included to cancel the exponent when differentiating.

Later, the expression will be multiplied with an arbitrary learning rate, so that it doesn't matter if a constant coefficient is introduced now.

For each neuron $j$, its output $o_{j}$ is defined as

$$o_{j} = \varphi(\text{net}_{j}) = \varphi\!\left(\sum_{k=1}^{n} w_{kj}\, o_{k}\right).$$

The input $\text{net}_{j}$ to a neuron is the weighted sum of the outputs $o_{k}$ of the previous neurons. If the neuron is in the first layer after the input layer, the $o_{k}$ of the input layer are simply the inputs $x_{k}$. The number of input units to the neuron is $n$.

The variable $w_{kj}$ denotes the weight between neurons $k$ and $j$.

The activation function $\varphi$ is non-linear and differentiable.

A commonly used activation function is the logistic function

$$\varphi(z) = \frac{1}{1 + e^{-z}},$$

which has the convenient derivative

$$\frac{d\varphi}{dz}(z) = \varphi(z)\,(1 - \varphi(z)).$$

Calculating the partial derivative of the error with respect to a weight $w_{ij}$ is done using the chain rule twice:

$$\frac{\partial E}{\partial w_{ij}} = \frac{\partial E}{\partial o_{j}}\,\frac{\partial o_{j}}{\partial \text{net}_{j}}\,\frac{\partial \text{net}_{j}}{\partial w_{ij}}.$$

In the last factor of the right-hand side, only one term in the sum $\text{net}_{j}$ depends on $w_{ij}$, so that $\frac{\partial \text{net}_{j}}{\partial w_{ij}} = o_{i}$. If the neuron is in the first layer after the input layer, $o_{i}$ is simply $x_{i}$.

The derivative of the output of neuron $j$ with respect to its input is simply the partial derivative of the activation function (assuming here that the logistic function is used):

$$\frac{\partial o_{j}}{\partial \text{net}_{j}} = o_{j}\,(1 - o_{j}).$$

This is the reason why backpropagation requires the activation function to be differentiable.
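A quick numerical check of this identity (a sketch; the helper names are illustrative):

```python
import numpy as np

def logistic(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_prime(z):
    p = logistic(z)
    return p * (1.0 - p)   # phi'(z) = phi(z) * (1 - phi(z))

z = 0.7  # arbitrary point
numeric = (logistic(z + 1e-6) - logistic(z - 1e-6)) / 2e-6  # central difference
print(np.isclose(numeric, logistic_prime(z)))  # True
```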

(Nevertheless, the ReLU activation function, which is not differentiable at zero, has become quite popular recently, e.g. in AlexNet.) The first factor is straightforward to evaluate if the neuron is in the output layer, because then $o_{j} = y$ and

$$\frac{\partial E}{\partial o_{j}} = \frac{\partial E}{\partial y} = y - t.$$

However, if $j$ is in an arbitrary inner layer of the network, finding the derivative of $E$ with respect to $o_{j}$ is less obvious. Considering $E$ as a function of the inputs of all neurons $L = \{u, v, \dots, w\}$ receiving input from neuron $j$, and taking the total derivative with respect to $o_{j}$, a recursive expression for the derivative is obtained:

$$\frac{\partial E}{\partial o_{j}} = \sum_{\ell \in L}\left(\frac{\partial E}{\partial o_{\ell}}\,\frac{\partial o_{\ell}}{\partial \text{net}_{\ell}}\,\frac{\partial \text{net}_{\ell}}{\partial o_{j}}\right) = \sum_{\ell \in L}\left(\frac{\partial E}{\partial o_{\ell}}\,\frac{\partial o_{\ell}}{\partial \text{net}_{\ell}}\, w_{j\ell}\right).$$

Therefore, the derivative with respect to $o_{j}$ can be calculated if all the derivatives with respect to the outputs $o_{\ell}$ of the next layer – the one closer to the output neuron – are known.

Putting it all together:

$$\frac{\partial E}{\partial w_{ij}} = \delta_{j}\, o_{i}$$

with

$$\delta_{j} = \frac{\partial E}{\partial o_{j}}\,\frac{\partial o_{j}}{\partial \text{net}_{j}} = \begin{cases} (o_{j} - t_{j})\, o_{j}\,(1 - o_{j}) & \text{if } j \text{ is an output neuron,} \\ \left(\sum_{\ell \in L} w_{j\ell}\, \delta_{\ell}\right) o_{j}\,(1 - o_{j}) & \text{if } j \text{ is an inner neuron.} \end{cases}$$

To update the weight $w_{ij}$ using gradient descent, one must choose a learning rate, $\eta > 0$. The change in weight needs to reflect the impact on $E$ of an increase or decrease in $w_{ij}$. If $\frac{\partial E}{\partial w_{ij}} > 0$, an increase in $w_{ij}$ increases $E$; conversely, if $\frac{\partial E}{\partial w_{ij}} < 0$, an increase in $w_{ij}$ decreases $E$. The new $\Delta w_{ij}$ is added to the old weight, and the product of the learning rate and the gradient, multiplied by $-1$, guarantees that $w_{ij}$ changes in a way that always decreases $E$. In other words, in the equation immediately below, $-\eta \frac{\partial E}{\partial w_{ij}}$ always changes $w_{ij}$ in such a way that $E$ is decreased:

$$\Delta w_{ij} = -\eta\,\frac{\partial E}{\partial w_{ij}} = -\eta\, \delta_{j}\, o_{i}.$$

For a single-layer network, this expression becomes the delta rule.[7] The choice of learning rate $\eta$ is important, since a high value can cause too strong a change, causing the minimum to be missed, while a too low learning rate slows the training unnecessarily.

Optimizations such as Quickprop are primarily aimed at speeding up error minimization;

other improvements mainly try to increase reliability.

In order to avoid oscillation inside the network, such as alternating connection weights, and to improve the rate of convergence, refinements of this algorithm use an adaptive learning rate.[8] By using a variable inertia (momentum) term $\alpha$, the gradient and the last change can be weighted such that the weight adjustment additionally depends on the previous change. When $\alpha$ is equal to 0, the change depends solely on the gradient, while a value of 1 means it will only depend on the last change.

Similar to a ball rolling down a mountain, whose current speed is determined not only by the current slope of the mountain but also by its own inertia, inertia can be added:

$$\Delta w_{ij}(t+1) = (1 - \alpha)\,\eta\,\delta_{j}\,o_{i} + \alpha\,\Delta w_{ij}(t)$$

The weight change at step $(t+1)$ thus depends both on the current gradient of the error function (the slope of the mountain, first summand) and on the weight change from the previous point in time (the inertia, second summand).

With inertia, the problems of getting stuck (in steep ravines and flat plateaus) are avoided. Since, for example, the gradient of the error function becomes very small in flat plateaus, a plateau would immediately lead to a 'deceleration' of the gradient descent; this deceleration is delayed by the addition of the inertia term, so that a flat plateau can be escaped more quickly.
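The momentum update above can be sketched in a couple of lines (illustrative; delta_j and o_i are assumed to come from the backward pass described earlier):

```python
def momentum_update(delta_j, o_i, prev_change, eta=0.1, alpha=0.9):
    # Delta w_ij(t+1) = (1 - alpha) * eta * delta_j * o_i + alpha * Delta w_ij(t)
    change = (1 - alpha) * eta * delta_j * o_i + alpha * prev_change
    return change  # apply to w_ij and keep as prev_change for the next step
```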

Two modes of learning are available: stochastic and batch.

In stochastic learning, each input creates a weight adjustment.

In batch learning weights are adjusted based on a batch of inputs, accumulating errors over the batch.

Stochastic learning introduces 'noise' into the gradient descent process, using the local gradient calculated from one data point;

this reduces the chance of the network getting stuck in local minima.

However, batch learning typically yields a faster, more stable descent to a local minimum, since each update is performed in the direction of the average error of the batch.

A common compromise is to use 'mini-batches', meaning small batches with the samples in each batch selected stochastically from the entire data set.
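A mini-batch split can be sketched as follows (illustrative; grad_E is an assumed helper returning the per-example gradient):

```python
import numpy as np

def minibatches(X, T, batch_size, rng):
    # Shuffle the data set and yield small batches drawn stochastically from it.
    idx = rng.permutation(len(X))
    for start in range(0, len(X), batch_size):
        sel = idx[start:start + batch_size]
        yield X[sel], T[sel]

# Each update then uses the average gradient over one mini-batch, e.g.:
# grad = np.mean([grad_E(w, x, t) for x, t in zip(Xb, Tb)], axis=0)
```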

According to various sources,[11][12][13][14][15] the basics of continuous backpropagation were derived in the context of control theory by Henry J. Kelley in 1960 and by Arthur E. Bryson in 1961.[17] They used principles of dynamic programming.

In 1962, Stuart Dreyfus published a simpler derivation based only on the chain rule.[18] Bryson and Ho described it as a multi-stage dynamic system optimization method in 1969.[19][20] In 1970 Linnainmaa published the general method for automatic differentiation (AD) of discrete connected networks of nested differentiable functions.[21][22] This corresponds to backpropagation, which is efficient even for sparse networks.[14][15][23][24] In 1973 Dreyfus used backpropagation to adapt parameters of controllers in proportion to error gradients.[25] In 1974 Werbos mentioned the possibility of applying this principle to artificial neural networks,[26] and in 1982 he applied Linnainmaa's AD method to neural networks in the way that is used today.[15][27] In 1986 Rumelhart, Hinton and Williams showed experimentally that this method can generate useful internal representations of incoming data in hidden layers of neural networks.[4][28] In 1993, Wan was the first[14] to win an international pattern recognition contest through backpropagation.[29] During the 2000s it fell out of favour, but returned in the 2010s, benefitting from cheap, powerful GPU-based computing systems.

What is the best visual explanation for the back propagation algorithm for neural networks?

Finding the coefficients of an equation: imagine you are given an equation relating x, y and z to a result W through three parameters a, b and c, along with a data set of values for x, y, z and the corresponding values of W (the result), and you are tasked with finding the best-fit values for a, b and c (the parameters) that will work for any values of x, y and z.

If you approach the task like a novice, you'll keep assigning random values to a, b and c in the hope that you will somehow eventually stumble upon the matching parameters that work for any set of values.

Repeat from step 2 using the modified parameters until the difference between the simulated W values and the W values from the data set is minimal. Finally, you use the coefficient values discovered through this iterative process to generate predictions for data sets where we don't have values of W.

To discover an optimal hypothesis that can be used to make predictions, we need to train it; for this we start with a randomised hypothesis and progressively tune it, just like in the scientist's approach. Training a hypothesis works the same way as the scientist's approach: it is iterative and it uses a feedback loop.

Once test prediction values are recorded, they are used along with corresponding actual prediction values (from the data set) to compute the error of the test data set predictions.

This whole process of computing error for test data is repeated all over again until the updated parameters settle at optimal values, making it a robust feedback loop.

Since, for all layers other than the input layer, the input values are the prediction outputs of the previous layer, the final prediction error is, in a way, an accumulation of the errors at every activation node.

Beginning Tutorial: Backpropagation and Gradient Descent

The derivative of $f(x) = x^4 - 3x^3 + 2$ is $f'(x) = 4x^3 - 9x^2$ .

So if we plug in our random point from above (x=4) into the first derivative of $f(x)$ we get $f'(4) = 4(4)^3 - 9(4)^2 = 112$.

So it looks like we can say that whenever the $f'(x)$ for a particular $x$ is positive, we should move to the left (decrease x) and whenever it's negative, we should move to the right (increase x).

I say proportional to because we want to control to what degree we move at each step, for example when we compute $f'(4)=112$, do we really want our new $x$ to be $x - 112 = -108$?

This means, if we randomly started at $x=4$, where $f'(4)=112$, then our new $x$ will be $x_{new} = 4 - (0.001 * 112) = 3.888$.

$x_{new} = x - \alpha*f'(3.888) = 3.888 - (0.001 * 99.0436) = 3.79$ Nice, we're indeed moving to the left, closer to the minimum of $f(x)$, little by little.
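Putting the tutorial's numbers into a short loop (a sketch using the function and learning rate from the text):

```python
def f(x):
    return x**4 - 3*x**3 + 2

def f_prime(x):
    return 4*x**3 - 9*x**2

x = 4.0        # the random starting point from the text
alpha = 0.001  # step size (learning rate)
for step in range(1, 6):
    x = x - alpha * f_prime(x)   # move against the sign of the derivative
    print(step, round(x, 4))
# step 1: 3.888, step 2: ~3.789, ... creeping toward the minimum of f at x = 2.25
```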

The Unreasonable Effectiveness of Recurrent Neural Networks

Within a few dozen minutes of training my first baby model (with rather arbitrarily-chosen hyperparameters) started to generate very nice looking descriptions of images that were on the edge of making sense.

We’ll train RNNs to generate text character by character and ponder the question “how is that even possible?” By the way, together with this post I am also releasing code on Github that allows you to train character-level language models based on multi-layer LSTMs.

A few examples may make this more concrete: As you might expect, the sequence regime of operation is much more powerful compared to fixed networks that are doomed from the get-go by a fixed number of computational steps, and hence also much more appealing for those of us who aspire to build more intelligent systems.

You might be thinking that having sequences as inputs or outputs could be relatively rare, but an important point to realize is that even if your inputs/outputs are fixed vectors, it is still possible to use this powerful formalism to process them in a sequential manner.

On the right, a recurrent network generates images of digits by learning to sequentially add color to a canvas (Gregor et al.): The takeaway is that even if your data is not in form of sequences, you can still formulate and train powerful models that learn to process it sequentially.

If you’re more comfortable with math notation, we can also write the hidden state update as \( h_t = \tanh ( W_{hh} h_{t-1} + W_{xh} x_t ) \), where tanh is applied elementwise.
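In that notation, a single step of a vanilla RNN can be sketched as follows (a simplified illustration in the spirit of the post, not the released char-rnn code; sizes and initialization are arbitrary):

```python
import numpy as np

class VanillaRNN:
    """Sketch of h_t = tanh(W_hh h_{t-1} + W_xh x_t) with a linear readout."""
    def __init__(self, input_size, hidden_size, output_size, seed=0):
        rng = np.random.default_rng(seed)
        self.W_hh = rng.normal(scale=0.01, size=(hidden_size, hidden_size))
        self.W_xh = rng.normal(scale=0.01, size=(hidden_size, input_size))
        self.W_hy = rng.normal(scale=0.01, size=(output_size, hidden_size))
        self.h = np.zeros(hidden_size)

    def step(self, x):
        # Update the hidden state from the previous state and the current input,
        # then compute the output scores for the next character.
        self.h = np.tanh(self.W_hh @ self.h + self.W_xh @ x)
        return self.W_hy @ self.h
```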

We initialize the matrices of the RNN with random numbers and the bulk of work during training goes into finding the matrices that give rise to desirable behavior, as measured with some loss function that expresses your preference to what kinds of outputs y you’d like to see in response to your input sequences x.

Here’s a diagram: For example, we see that in the first time step when the RNN saw the character “h” it assigned confidence of 1.0 to the next letter being “h”, 2.2 to letter “e”, -3.0 to “l”, and 4.1 to “o”.

Since the RNN consists entirely of differentiable operations we can run the backpropagation algorithm (this is just a recursive application of the chain rule from calculus) to figure out in what direction we should adjust every one of its weights to increase the scores of the correct targets (green bold numbers).

If you have a different physical investment are become in people who reduced in a startup with the way to argument the acquirer could see them just that you’re also the founders will part of users’ affords that and an alternation to the idea.

And if you have to act the big company too.” Okay, clearly the above is unfortunately not going to replace Paul Graham anytime soon, but remember that the RNN had to learn English completely from scratch and with a small dataset (including where you put commas, apostrophes and spaces).

In particular, setting temperature very near zero will give the most likely thing that Paul Graham might say: “is that they were all the same thing that was a startup is that they were all the same thing that was a startup is that they were all the same thing that was a startup is that they were all the same” looks like we’ve reached an infinite loop about startups.
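Temperature sampling of the kind described here can be sketched as follows (an illustration, not the post's code; 'logits' are the unnormalized scores the model produces for the next character):

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    # Lower temperature -> sharper distribution (near-greedy, repetitive text);
    # higher temperature -> more diversity, but also more mistakes.
    rng = rng or np.random.default_rng()
    scaled = np.asarray(logits, dtype=float) / temperature
    probs = np.exp(scaled - np.max(scaled))  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(len(probs), p=probs)
```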

There’s also quite a lot of structured markdown that the model learns, for example sometimes it creates headings, lists, etc.: Sometimes the model snaps into a mode of generating random but valid XML: The model completely makes up the timestamp, id, and so on.

We had to step in and fix a few issues manually but then you get plausible looking math, it’s quite astonishing: Here’s another sample: As you can see above, sometimes the model tries to generate latex diagrams, but clearly it hasn’t really figured them out.

This is an example of a problem we’d have to fix manually, and is likely due to the fact that the dependency is too long-term: By the time the model is done with the proof it has forgotten whether it was doing a proof or a lemma.

In particular, I took all the source and header files found in the Linux repo on Github, concatenated all of them in a single giant file (474MB of C code) (I was originally going to train only on the kernel but that by itself is only ~16MB).

This is usually a very amusing part: The model first recites the GNU license character by character, samples a few includes, generates some macros and then dives into the code: There are too many fun parts to cover- I could probably write an entire blog post on just this part.

Of course, you can imagine this being quite useful inspiration when writing a novel, or naming a new startup :) We saw that the results at the end of training can be impressive, but how does any of this work?

At 300 iterations we see that the model starts to get an idea about quotes and periods: The words are now also separated with spaces and the model starts to get the idea about periods at the end of a sentence.

Longer words have now been learned as well: Until at last we start to get properly spelled words, quotations, names, and so on by about iteration 2000: The picture that emerges is that the model first discovers the general word-space structure and then rapidly starts to learn the words;

In the visualizations below we feed a Wikipedia RNN model character data from the validation set (shown along the blue/green rows) and under every character we visualize (in red) the top 5 guesses that the model assigns for the next character.

Think about it as green = very excited and blue = not very excited (for those familiar with details of LSTMs, these are values between [-1,1] in the hidden state vector, which is just the gated and tanh’d LSTM cell state).

Below we’ll look at 4 different ones that I found and thought were interesting or interpretable (many also aren’t): Of course, a lot of these conclusions are slightly hand-wavy as the hidden state of the RNN is a huge, high-dimensional and largely distributed representation.

We can see that in addition to a large portion of cells that do not do anything interpretable, about 5% of them turn out to have learned quite interesting and interpretable algorithms: Again, what is beautiful about this is that we didn’t have to hardcode at any point that if you’re trying to predict the next character it might, for example, be useful to keep track of whether or not you are currently inside or outside of a quote.

I’ve only started working with Torch/Lua over the last few months and it hasn’t been easy (I spent a good amount of time digging through the raw Torch code on Github and asking questions on their gitter to get things done), but once you get the hang of things it offers a lot of flexibility and speed.

Here’s a brief sketch of a few recent developments (definitely not complete list, and a lot of this work draws from research back to 1990s, see related work sections): In the domain of NLP/Speech, RNNs transcribe speech to text, perform machine translation, generate handwritten text, and of course, they have been used as powerful language models (Sutskever et al.) (Graves) (Mikolov et al.) (both on the level of characters and words).

For example, we’re seeing RNNs in frame-level video classification, image captioning (also including my own work and many others), video captioning and very recently visual question answering.

My personal favorite RNNs in Computer Vision paper is Recurrent Models of Visual Attention, both due to its high-level direction (sequential processing of images with glances) and the low-level modeling (REINFORCE learning rule that is a special case of policy gradient methods in Reinforcement Learning, which allows one to train models that perform non-differentiable computation (taking glances around the image in this case)).

I’m confident that this type of hybrid model that consists of a blend of CNN for raw perception coupled with an RNN glance policy on top will become pervasive in perception, especially for more complex tasks that go beyond classifying some objects in plain view.

One problem is that RNNs are not inductive: They memorize sequences extremely well, but they don’t necessarily always show convincing signs of generalizing in the correct way (I’ll provide pointers in a bit that make this more concrete).

This paper sketched a path towards models that can perform read/write operations between large, external memory arrays and a smaller set of memory registers (think of these as our working memory) where the computation happens.

Now, I don’t want to dive into too many details but a soft attention scheme for memory addressing is convenient because it keeps the model fully-differentiable, but unfortunately one sacrifices efficiency because everything that can be attended to is attended to (but softly).

Think of this as declaring a pointer in C that doesn’t point to a specific address but instead defines an entire distribution over all addresses in the entire memory, and dereferencing the pointer returns a weighted sum of the pointed content (that would be an expensive operation!).

If you’d like to play with training RNNs I hear good things about keras or passage for Theano, the code released with this post for Torch, or this gist for raw numpy code I wrote a while ago that implements an efficient, batched LSTM forward and backward pass.

Unfortunately, at about 46K characters I haven’t written enough data to properly feed the RNN, but the returned sample (generated with low temperature to get a more typical sample) is: Yes, the post was about RNN and how well it works, so clearly this works :).
