Satellite Image Segmentation: a Workflow with U-Net

Image segmentation is a machine learning task in which one must not only categorize what is seen in an image, but do so at the per-pixel level.

The catch is that although the competition winners shared some code to reproduce their winning submission exactly (released after I started working on my pipeline), that code leaves out much of what is needed to arrive at it in the first place if one wants to apply the pipeline to another problem or dataset.

Moreover, building neural networks is an iterative process where one needs to start somewhere and slowly improve an evaluation metric by modifying the neural network's architecture and hyperparameters (configuration).

First off, here is a glance at the competition's training data. The DSTL Satellite Imagery Feature Detection Challenge asks participants to build a model capable of making such predictions; the images just above are taken from the dataset and represent one (X, Y) pair from the training data.

Where red, green and blue make up 3 bands (RGB), the 20 bands contain far more information for the neural network to ponder, which eases learning and improves the quality of the predictions: the network is aware of visual features that humans do not naturally see.

U-Net is like a convolutional autoencoder, but it also has skip-like connections to the feature maps located before the bottleneck (compressed embedding) layer, so that in the decoder part some information arrives from earlier layers, bypassing the compressive bottleneck.

See the figure below, taken from the official U-Net paper. Thus, in the decoder, data is not only recovered from the compression, but is also concatenated with the information's state before it was passed through the bottleneck, so as to provide more context for the decoding layers that follow.

That way, the neural network still learns to generalize in the compressed latent representation (located at the bottom of the "U" shape in the figure), but also recovers its latent generalizations into a spatial representation with proper per-pixel semantic alignment in the right-hand part of the U-Net's "U".
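To make the skip-connection idea concrete, here is a minimal sketch using the Keras functional API (Keras 2 layer names; the input shape, filter counts and single skip are illustrative assumptions, not the competition model):

from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, concatenate
from keras.models import Model

# A single-skip toy version of the U-Net idea: the encoder feature map is
# concatenated with the upsampled decoder features, so the decoder sees both the
# compressed representation and the pre-bottleneck spatial detail.
inputs = Input((128, 128, 20))                       # e.g. 20 spectral bands
enc = Conv2D(32, (3, 3), activation='relu', padding='same')(inputs)
down = MaxPooling2D((2, 2))(enc)
bottleneck = Conv2D(64, (3, 3), activation='relu', padding='same')(down)
up = UpSampling2D((2, 2))(bottleneck)
merged = concatenate([up, enc])                      # the skip connection
dec = Conv2D(32, (3, 3), activation='relu', padding='same')(merged)
outputs = Conv2D(1, (1, 1), activation='sigmoid')(dec)   # per-pixel mask
model = Model(inputs, outputs)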

The 3rd place winners used the 4 possible 90-degree rotations, as well as mirrored versions of those rotations, which can increase the training data 8-fold: this data transformation belongs to the dihedral group D4.
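As a rough sketch of that 8-fold augmentation (my own illustration, assuming images are stored as (height, width, channels) numpy arrays):

import numpy as np

def d4_augmentations(image):
    # The 8 elements of the dihedral group D4: 4 rotations of the image and
    # 4 rotations of its mirror.
    variants = []
    for base in (image, np.fliplr(image)):
        for k in range(4):                # 0, 90, 180, 270 degree rotations
            variants.append(np.rot90(base, k))
    return variants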

The four extra channels (computed from spectral indexes) are shown below, along with the original image's RGB human-visible channels for comparison. Here is the U-Net architecture from the 3rd place winners; theirs was the only open-source code from winners that we found online.

I started working on the project before the winners' official code was publicly available, so I built my own development and production pipeline in that direction, which proved useful not only for solving the problem, but for producing a neural network that can eventually be transferred to other datasets.

The architecture I developed was first derived from public open-source code released before the end of the competition: https://www.kaggle.com/ceperaang/lb-0-42-ultimate-full-solution-run-on-your-hw It seems that many participants built their architectures on top of precisely this shared code, which was both very helpful and acted as a bar raiser for participants trying to keep up on the leaderboard.

In the following image, our "U"-Net is flipped like a "⊂"-Net; it was generated automatically with Keras' visualization tool, which is why it looks skewed. Hyperopt is used here to automatically figure out the neural network architecture that best fits the data of the given problem, so this humongous neural network was grown automatically.

Running Hyperopt takes time, but it does something comparable to genetic algorithms performing breeding and natural selection, except that there is no breeding here: just a readjustment based on past trials so that new trials balance exploring new architectures against optimizing the architecture near local maxima of performance.

Once searched, a hyperparameter space can be refined to narrow the range in which hyperparameters are tested in case the meta-optimization has to be restarted, and if some ranges were already too narrow (e.g., the best points lie near a limit of the range), it is still possible to widen them.
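For illustration, a meta-optimization along those lines could be launched with Hyperopt roughly as follows; the search space below and the train_and_score function (which would build a network from the sampled parameters, train it and return a loss) are hypothetical placeholders, not the actual space used for the competition:

from hyperopt import fmin, tpe, hp, Trials

space = {
    'initial_filters': hp.choice('initial_filters', [16, 32, 64]),
    'depth': hp.choice('depth', [3, 4, 5]),
    'dropout': hp.uniform('dropout', 0.0, 0.5),
    'learning_rate': hp.loguniform('learning_rate', -10, -4),
}

trials = Trials()
best = fmin(fn=train_and_score,      # hypothetical: builds, trains and scores a U-Net
            space=space,
            algo=tpe.suggest,        # TPE balances exploration vs. exploitation
            max_evals=50,
            trials=trials)
print(best)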

A team workflow can be interesting here: beginners learn to manipulate the data and launch the optimization, then slowly start modifying the hyperparameter space and eventually add new parameters based on new research.

More details here: https://github.com/Vooban/Smoothly-Blend-Image-Patches. Running the improved 3rd place winners' code, it is possible to reach a score worthy of the 2nd position, because they recoded their neural networks from scratch after the competition to make the code public and more usable.

I used the 3rd place winners' post-processing code, which rounds the prediction masks in a smoother way and corrects a few bugs, such as removing building predictions when water is also predicted for a given pixel, and so on, but the neural network behind it is still quite custom.

On my side, I used one neural architecture optimized with Hyperopt on all classes at once, then took this already-evolved architecture and trained it on single classes, one at a time, so there is an imbalance in the bias/variance of the resulting networks, each used on a single task rather than on all tasks at once as during meta-optimization.

The One Hundred Layers Tiramisu might not help for fitting the DSTL dataset, according to the 3rd place winner, but it at least has a lot of potential capacity when applied to larger and richer datasets, thanks to its use of the recently introduced densely connected convolutional blocks.

Using neural nets to recognize handwritten digits

Simple intuitions about how we recognize shapes - 'a 9 has a loop at the top, and a vertical stroke in the bottom right' - turn out to be not so simple to express algorithmically.

As a prototype it hits a sweet spot: it's challenging - it's no small feat to recognize handwritten digits - but it's not so difficult as to require an extremely complicated solution, or tremendous computational power.

But along the way we'll develop many key ideas about neural networks, including two important types of artificial neuron (the perceptron and the sigmoid neuron), and the standard learning algorithm for neural networks, known as stochastic gradient descent.

Today, it's more common to use other models of artificial neurons - in this book, and in much modern work on neural networks, the main neuron model used is one called the sigmoid neuron.

A perceptron takes several binary inputs, $x_1, x_2, \ldots$, and produces a single binary output: In the example shown the perceptron has three inputs, $x_1, x_2, x_3$.

The neuron's output, $0$ or $1$, is determined by whether the weighted sum $\sum_j w_j x_j$ is less than or greater than some threshold value.

To put it in more precise algebraic terms: \begin{eqnarray} \mbox{output} & = & \left\{ \begin{array}{ll} 0 & \mbox{if } \sum_j w_j x_j \leq \mbox{ threshold} \\ 1 & \mbox{if } \sum_j w_j x_j > \mbox{ threshold} \end{array} \right. \tag{1}\end{eqnarray}
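As a tiny numeric illustration of this rule (my own example, not from the text):

import numpy as np

def perceptron_output(x, w, threshold):
    # Output 1 if the weighted sum exceeds the threshold, 0 otherwise.
    return 1 if np.dot(w, x) > threshold else 0

x = np.array([1, 0, 1])        # three binary inputs
w = np.array([6, 2, 2])        # weights
print(perceptron_output(x, w, threshold=5))   # 1, since 6*1 + 2*0 + 2*1 = 8 > 5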

And it should seem plausible that a complex network of perceptrons could make quite subtle decisions: In this network, the first column of perceptrons - what we'll call the first layer of perceptrons - is making three very simple decisions, by weighing the input evidence.

The first change is to write $\sum_j w_j x_j$ as a dot product, $w \cdot x \equiv \sum_j w_j x_j$, where $w$ and $x$ are vectors whose components are the weights and inputs, respectively.

Using the bias instead of the threshold, the perceptron rule can be rewritten: \begin{eqnarray} \mbox{output} = \left\{ \begin{array}{ll} 0 & \mbox{if } w\cdot x + b \leq 0 \\ 1 & \mbox{if } w\cdot x + b > 0 \end{array} \right. \tag{2}\end{eqnarray}

This requires computing the bitwise sum, $x_1 \oplus x_2$, as well as a carry bit which is set to $1$ when both $x_1$ and $x_2$ are $1$, i.e., the carry bit is just the bitwise product $x_1 x_2$: To get an equivalent network of perceptrons we replace all the NAND gates by perceptrons with two inputs, each with weight $-2$, and an overall bias of $3$.
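A quick check (my own, written in the bias form of the rule) that a perceptron with two weights of $-2$ and a bias of $3$ really computes NAND:

import numpy as np

def perceptron(x, w, b):
    return 1 if np.dot(w, x) + b > 0 else 0

for x1 in (0, 1):
    for x2 in (0, 1):
        out = perceptron(np.array([x1, x2]), np.array([-2, -2]), 3)
        print(x1, x2, out)     # 1 for every input except (1, 1), which gives 0: NAND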

Note that I've moved the perceptron corresponding to the bottom right NAND gate a little, just to make it easier to draw the arrows on the diagram: One notable aspect of this network of perceptrons is that the output from the leftmost perceptron is used twice as input to the bottommost perceptron.

(If you don't find this obvious, you should stop and prove to yourself that this is equivalent.) With that change, the network looks as follows, with all unmarked weights equal to -2, all biases equal to 3, and a single weight of -4, as marked: Up to now I've been drawing inputs like $x_1$ and $x_2$ as variables floating to the left of the network of perceptrons.

In fact, it's conventional to draw an extra layer of perceptrons - the input layer - to encode the inputs: This notation for input perceptrons, in which we have an output, but no inputs, is a shorthand.

Then the weighted sum $\sum_j w_j x_j$ would always be zero, and so the perceptron would output $1$ if $b > 0$, and $0$ if $b \leq 0$.

Instead of explicitly laying out a circuit of NAND and other gates, our neural networks can simply learn to solve problems, sometimes problems where it would be extremely difficult to directly design a conventional circuit.

If it were true that a small change in a weight (or bias) causes only a small change in output, then we could use this fact to modify the weights and biases to get our network to behave more in the manner we want.

In fact, a small change in the weights or bias of any single perceptron in the network can sometimes cause the output of that perceptron to completely flip, say from $0$ to $1$.

We'll depict sigmoid neurons in the same way we depicted perceptrons: Just like a perceptron, the sigmoid neuron has inputs, $x_1, x_2, \ldots$.

Instead, it's $\sigma(w \cdot x+b)$, where $\sigma$ is called the sigmoid function* *Incidentally, $\sigma$ is sometimes called the logistic function, and this new class of neurons called logistic neurons.

The sigmoid function is defined by \begin{eqnarray} \sigma(z) \equiv \frac{1}{1+e^{-z}}. \tag{3}\end{eqnarray} To put it all a little more explicitly, the output of a sigmoid neuron with inputs $x_1,x_2,\ldots$, weights $w_1,w_2,\ldots$, and bias $b$ is \begin{eqnarray} \frac{1}{1+\exp(-\sum_j w_j x_j-b)}. \tag{4}\end{eqnarray}
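Equation (4) is easy to evaluate directly; here is a small check (my own example values):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_neuron_output(x, w, b):
    # Equation (4): squash the weighted input through the sigmoid.
    return sigmoid(np.dot(w, x) + b)

x = np.array([1.0, 0.0, 1.0])
w = np.array([6.0, 2.0, 2.0])
print(sigmoid_neuron_output(x, w, b=-5.0))    # about 0.95, since w.x + b = 3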

In fact, there are many similarities between perceptrons and sigmoid neurons, and the algebraic form of the sigmoid function turns out to be more of a technical detail than a true barrier to understanding.

[Figure: plot of the sigmoid function, output $\sigma(z)$ against $z$.]

[Figure: plot of the step function, output against $z$.]

If $\sigma$ had in fact been a step function, then the sigmoid neuron would be a perceptron, since the output would be $1$ or $0$ depending on whether $w\cdot x+b$ was positive or negative* *Actually, when $w \cdot x +b = 0$ the perceptron outputs $0$, while the step function outputs $1$.

The smoothness of $\sigma$ means that small changes $\Delta w_j$ in the weights and $\Delta b$ in the bias will produce a small change $\Delta \mbox{output}$ in the output from the neuron.

In fact, calculus tells us that $\Delta \mbox{output}$ is well approximated by \begin{eqnarray} \Delta \mbox{output} \approx \sum_j \frac{\partial \, \mbox{output}}{\partial w_j} \Delta w_j + \frac{\partial \, \mbox{output}}{\partial b} \Delta b, \tag{5}\end{eqnarray} where the sum is over all the weights, $w_j$, and $\partial \, \mbox{output} / \partial w_j$ and $\partial \, \mbox{output} /\partial b$ denote partial derivatives of the $\mbox{output}$ with respect to $w_j$ and $b$, respectively.

While the expression above looks complicated, with all the partial derivatives, it's actually saying something very simple (and which is very good news): $\Delta \mbox{output}$ is a linear function of the changes $\Delta w_j$ and $\Delta b$ in the weights and bias.
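Equation (5) can be checked numerically for a single sigmoid neuron; in the sketch below (my own example), the partial derivative of the output with respect to $w_j$ is $\sigma'(z)\,x_j$ with $z = w \cdot x + b$:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x, w, b = np.array([1.0, 2.0]), np.array([0.5, -0.3]), 0.1
dw = np.array([0.01, 0.0])                  # small change in the first weight
z = np.dot(w, x) + b
true_change = sigmoid(np.dot(w + dw, x) + b) - sigmoid(z)
approx_change = np.dot(sigmoid(z) * (1 - sigmoid(z)) * x, dw)   # Equation (5)
print(true_change, approx_change)           # nearly equal for small dw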

If it's the shape of $\sigma$ which really matters, and not its exact form, then why use the particular form used for $\sigma$ in Equation (3), $\sigma(z) \equiv \frac{1}{1+e^{-z}}$?

In fact, later in the book we will occasionally consider neurons where the output is $f(w \cdot x + b)$ for some other activation function $f(\cdot)$.

The main thing that changes when we use a different activation function is that the particular values for the partial derivatives in Equation (5) change.

It turns out that when we compute those partial derivatives later, using $\sigma$ will simplify the algebra, simply because exponentials have lovely properties when differentiated.

But in practice we can set up a convention to deal with this, for example, by deciding to interpret any output of at least $0.5$ as indicating a '9', and any output less than $0.5$ as indicating 'not a 9'.

Exercises: Sigmoid neurons simulating perceptrons, part I. Suppose we take all the weights and biases in a network of perceptrons, and multiply them by a positive constant, $c > 0$.

Show that the behaviour of the network doesn't change. Sigmoid neurons simulating perceptrons, part II. Suppose we have the same setup as the last problem - a network of perceptrons.

Suppose the weights and biases are such that $w \cdot x + b \neq 0$ for the input $x$ to any particular perceptron in the network.

Now replace all the perceptrons in the network by sigmoid neurons, and multiply the weights and biases by a positive constant $c > 0$.

Suppose we have the network: As mentioned earlier, the leftmost layer in this network is called the input layer, and the neurons within the layer are called input neurons.

The term 'hidden' perhaps sounds a little mysterious - the first time I heard the term I thought it must have some deep philosophical or mathematical significance - but it really means nothing more than 'not an input or an output'.

For example, the following four-layer network has two hidden layers: Somewhat confusingly, and for historical reasons, such multiple layer networks are sometimes called multilayer perceptrons or MLPs, despite being made up of sigmoid neurons, not perceptrons.

If the image is a $64$ by $64$ greyscale image, then we'd have $4,096 = 64 \times 64$ input neurons, with the intensities scaled appropriately between $0$ and $1$.

The output layer will contain just a single neuron, with output values of less than $0.5$ indicating 'input image is not a 9', and values greater than $0.5$ indicating 'input image is a 9 '.

A trial segmentation gets a high score if the individual digit classifier is confident of its classification in all segments, and a low score if the classifier is having a lot of trouble in one or more segments.

So instead of worrying about segmentation we'll concentrate on developing a neural network which can solve the more interesting and difficult problem, namely, recognizing individual handwritten digits.

As discussed in the next section, our training data for the network will consist of many $28$ by $28$ pixel images of scanned handwritten digits, and so the input layer contains $784 = 28 \times 28$ neurons.

The input pixels are greyscale, with a value of $0.0$ representing white, a value of $1.0$ representing black, and in between values representing gradually darkening shades of grey.

A seemingly natural way of doing that is to use just $4$ output neurons, treating each neuron as taking on a binary value, depending on whether the neuron's output is closer to $0$ or to $1$.

The ultimate justification is empirical: we can try out both network designs, and it turns out that, for this particular problem, the network with $10$ output neurons learns to recognize digits better than the network with $4$ output neurons.

In a similar way, let's suppose for the sake of argument that the second, third, and fourth neurons in the hidden layer detect whether or not the following images are present:

Of course, that's not the only sort of evidence we can use to conclude that the image was a $0$ - we could legitimately get a $0$ in many other ways (say, through translations of the above images, or slight distortions).

Assume that the first $3$ layers of neurons are such that the correct output in the third layer (i.e., the old output layer) has activation at least $0.99$, and incorrect outputs have activation less than $0.01$.

We'll use the MNIST data set, which contains tens of thousands of scanned images of handwritten digits, together with their correct classifications.

To make this a good test of performance, the test data was taken from a different set of 250 people than the original training data (albeit still a group split between Census Bureau employees and high school students).

For example, if a particular training image, $x$, depicts a $6$, then $y(x) = (0, 0, 0, 0, 0, 0, 1, 0, 0, 0)^T$ is the desired output from the network.
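In code, such a desired output is just a one-hot column vector (a small illustration of the convention, not the book's loader):

import numpy as np

def desired_output(digit):
    # y(x) for an image depicting `digit`: a 10-dimensional one-hot column vector.
    y = np.zeros((10, 1))
    y[digit] = 1.0
    return y

print(desired_output(6).T)    # [[0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]]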

We use the term cost function throughout this book, but you should note the other terminology, since it's often used in research papers and other discussions of neural networks.

\begin{eqnarray} C(w,b) \equiv \frac{1}{2n} \sum_x \| y(x) - a\|^2. \tag{6}\end{eqnarray} Here, $w$ denotes the collection of all weights in the network, $b$ all the biases, $n$ is the total number of training inputs, $a$ is the vector of outputs from the network when $x$ is input, and the sum is over all training inputs, $x$.

If we instead use a smooth cost function like the quadratic cost it turns out to be easy to figure out how to make small changes in the weights and biases so as to get an improvement in the cost.

Even given that we want to use a smooth cost function, you may still wonder why we choose the quadratic function used in Equation (6), $C(w,b) \equiv \frac{1}{2n} \sum_x \| y(x)-a\|^2$.

This is a well-posed problem, but it's got a lot of distracting structure as currently posed - the interpretation of $w$ and $b$ as weights and biases, the $\sigma$ function lurking in the background, the choice of network architecture, MNIST, and so on.

And for neural networks we'll often want far more variables - the biggest neural networks have cost functions which depend on billions of weights and biases in an extremely complicated way.

We could do this simulation simply by computing derivatives (and perhaps some second derivatives) of $C$ - those derivatives would tell us everything we need to know about the local 'shape' of the valley, and therefore how our ball should roll.

So rather than get into all the messy details of physics, let's simply ask ourselves: if we were declared God for a day, and could make up our own laws of physics, dictating to the ball how it should roll, what law or laws of motion could we pick that would make it so the ball always rolled to the bottom of the valley?

To make this question more precise, let's think about what happens when we move the ball a small amount $\Delta v_1$ in the $v_1$ direction, and a small amount $\Delta v_2$ in the $v_2$ direction.

Calculus tells us that $C$ changes as follows: \begin{eqnarray} \Delta C \approx \frac{\partial C}{\partial v_1} \Delta v_1 + \frac{\partial C}{\partial v_2} \Delta v_2. \tag{7}\end{eqnarray}

To figure out how to make such a choice it helps to define $\Delta v$ to be the vector of changes in $v$, $\Delta v \equiv (\Delta v_1, \Delta v_2)^T$, where $T$ is again the transpose operation, turning row vectors into column vectors.

We denote the gradient vector by $\nabla C$, i.e.: \begin{eqnarray} \nabla C \equiv \left( \frac{\partial C}{\partial v_1}, \frac{\partial C}{\partial v_2} \right)^T. \tag{8}\end{eqnarray}

In fact, it's perfectly fine to think of $\nabla C$ as a single mathematical object - the vector defined above - which happens to be written using two symbols.

With these definitions, the expression (7) for $\Delta C$ can be rewritten as \begin{eqnarray} \Delta C \approx \nabla C \cdot \Delta v. \tag{9}\end{eqnarray} This equation helps explain why $\nabla C$ is called the gradient vector: $\nabla C$ relates changes in $v$ to changes in $C$, just as we'd expect something called a gradient to do. But what's really exciting about the equation is that it lets us see how to choose $\Delta v$ so as to make $\Delta C$ negative.

In particular, suppose we choose \begin{eqnarray} \Delta v = -\eta \nabla C, \tag{10}\end{eqnarray} where $\eta$ is a small, positive parameter (known as the learning rate).

Equation (9) then tells us that $\Delta C \approx -\eta \nabla C \cdot \nabla C = -\eta \|\nabla C\|^2$. Because $\|\nabla C\|^2 \geq 0$, this guarantees that $\Delta C \leq 0$, i.e., $C$ will always decrease, never increase, if we change $v$ according to the prescription in (10), $\Delta v = -\eta \nabla C$.

We'll use Equation (10) to compute a value for $\Delta v$, then move the ball's position $v$ by that amount: \begin{eqnarray} v \rightarrow v' = v -\eta \nabla C. \tag{11}\end{eqnarray}

To make gradient descent work correctly, we need to choose the learning rate $\eta$ to be small enough that Equation (9), $\Delta C \approx \nabla C \cdot \Delta v$, is a good approximation.

In practical implementations, $\eta$ is often varied so that Equation (9), $\Delta C \approx \nabla C \cdot \Delta v$, remains a good approximation, but the algorithm isn't too slow.

Then the change $\Delta C$ in $C$ produced by a small change $\Delta v = (\Delta v_1, \ldots, \Delta v_m)^T$ is \begin{eqnarray} \Delta C \approx \nabla C \cdot \Delta v, \tag{12}\end{eqnarray} where the gradient $\nabla C$ is the vector \begin{eqnarray} \nabla C \equiv \left(\frac{\partial C}{\partial v_1}, \ldots, \frac{\partial C}{\partial v_m}\right)^T. \tag{13}\end{eqnarray} Just as for the two variable case, we can choose \begin{eqnarray} \Delta v = -\eta \nabla C, \tag{14}\end{eqnarray} and we're guaranteed that our (approximate) expression (12) for $\Delta C$ will be negative.

This gives us a way of following the gradient to a minimum, even when $C$ is a function of many variables, by repeatedly applying the update rule \begin{eqnarray} v \rightarrow v' = v-\eta \nabla C. \tag{15}\end{eqnarray}

The rule doesn't always work - several things can go wrong and prevent gradient descent from finding the global minimum of $C$, a point we'll return to explore in later chapters.

But, in practice gradient descent often works extremely well, and in neural networks we'll find that it's a powerful way of minimizing the cost function, and so helping the net learn.
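To see the update rule in action on something trivial, here is gradient descent on a toy cost $C(v) = v_1^2 + 2 v_2^2$ (my own example, not a network's cost):

import numpy as np

def grad_C(v):
    # Gradient of C(v) = v_1^2 + 2*v_2^2.
    return np.array([2.0 * v[0], 4.0 * v[1]])

v = np.array([3.0, -2.0])
eta = 0.1
for _ in range(100):
    v = v - eta * grad_C(v)     # the update rule v -> v' = v - eta * grad C
print(v)                        # approaches the minimum at (0, 0)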

It can be proved that the choice of $\Delta v$ which minimizes $\nabla C \cdot \Delta v$ is $\Delta v = - \eta \nabla C$, where $\eta = \epsilon / \|\nabla C\|$ is determined by the size constraint $\|\Delta v\| = \epsilon$.

Hint: If you're not already familiar with the Cauchy-Schwarz inequality, you may find it helpful to familiarize yourself with it.

If there are a million such $v_j$ variables then we'd need to compute something like a trillion (i.e., a million squared) second partial derivatives* *Actually, more like half a trillion, since $\partial^2 C/ \partial v_j \partial v_k = \partial^2 C/ \partial v_k \partial v_j$.

The idea is to use gradient descent to find the weights $w_k$ and biases $b_l$ which minimize the cost in Equation (6), $C(w,b) \equiv \frac{1}{2n} \sum_x \| y(x)-a\|^2$.

In other words, our 'position' now has components $w_k$ and $b_l$, and the gradient vector $\nabla C$ has corresponding components $\partial C / \partial w_k$ and $\partial C / \partial b_l$.

Writing out the gradient descent update rule in terms of components, we have \begin{eqnarray} w_k & \rightarrow & w_k' = w_k-\eta \frac{\partial C}{\partial w_k} \tag{16}\\ b_l & \rightarrow & b_l' = b_l-\eta \frac{\partial C}{\partial b_l}. \tag{17}\end{eqnarray}

In practice, to compute the gradient $\nabla C$ we need to compute the gradients $\nabla C_x$ separately for each training input, $x$, and then average them, $\nabla C = \frac{1}{n} \sum_x \nabla C_x$.

To make these ideas more precise, stochastic gradient descent works by randomly picking out a small number $m$ of randomly chosen training inputs.

Provided the sample size $m$ is large enough we expect that the average value of the $\nabla C_{X_j}$ will be roughly equal to the average over all $\nabla C_x$, that is, \begin{eqnarray} \frac{\sum_{j=1}^m \nabla C_{X_{j}}}{m} \approx \frac{\sum_x \nabla C_x}{n} = \nabla C, \tag{18}\end{eqnarray} where the second sum is over the entire set of training data.

Swapping sides we get \begin{eqnarray} \nabla C \approx \frac{1}{m} \sum_{j=1}^m \nabla C_{X_{j}}, \tag{19}\end{eqnarray} confirming that we can estimate the overall gradient by computing gradients just for the randomly chosen mini-batch.

Then stochastic gradient descent works by picking out a randomly chosen mini-batch of training inputs, and training with those, \begin{eqnarray} w_k & \rightarrow & w_k' = w_k-\frac{\eta}{m} \sum_j \frac{\partial C_{X_j}}{\partial w_k} \tag{20}\\ b_l & \rightarrow & b_l' = b_l-\frac{\eta}{m} \sum_j \frac{\partial C_{X_j}}{\partial b_l}, \tag{21}\end{eqnarray} where the sums are over all the training examples $X_j$ in the current mini-batch.
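The estimate in Equation (19) is easy to simulate; in the sketch below (my own illustration) the per-example gradients $\nabla C_x$ are stood in for by noisy copies of a fixed vector, and a mini-batch of $m = 10$ of them already points in roughly the right direction:

import numpy as np

np.random.seed(0)
n, m = 60000, 10
true_direction = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
per_example_grads = true_direction + 0.5 * np.random.randn(n, 5)   # stand-ins for nabla C_x

full_gradient = per_example_grads.mean(axis=0)                     # expensive: all n examples
mini_batch = per_example_grads[np.random.choice(n, m, replace=False)]
estimate = mini_batch.mean(axis=0)                                 # cheap: only m examples
print(full_gradient)
print(estimate)     # noisy, but close enough to move C downhill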

And, in a similar way, the mini-batch update rules (20) and (21) sometimes omit the $\frac{1}{m}$ term out the front of the sums.

We can think of stochastic gradient descent as being like political polling: it's much easier to sample a small mini-batch than it is to apply gradient descent to the full batch, just as carrying out a poll is easier than running a full election.

For example, if we have a training set of size $n = 60,000$, as in MNIST, and choose a mini-batch size of (say) $m = 10$, this means we'll get a factor of $6,000$ speedup in estimating the gradient!

Of course, the estimate won't be perfect - there will be statistical fluctuations - but it doesn't need to be perfect: all we really care about is moving in a general direction that will help decrease $C$, and that means we don't need an exact computation of the gradient.

In practice, stochastic gradient descent is a commonly used and powerful technique for learning in neural networks, and it's the basis for most of the learning techniques we'll develop in this book.

That is, given a training input, $x$, we update our weights and biases according to the rules $w_k \rightarrow w_k' = w_k - \eta \partial C_x / \partial w_k$ and $b_l \rightarrow b_l' = b_l - \eta \partial C_x / \partial b_l$.

Name one advantage and one disadvantage of online learning, compared to stochastic gradient descent with a mini-batch size of, say, $20$.

In neural networks the cost $C$ is, of course, a function of many variables - all the weights and biases - and so in some sense defines a surface in a very high-dimensional space.

I won't go into more detail here, but if you're interested then you may enjoy reading this discussion of some of the techniques professional mathematicians use to think in high dimensions.

We'll leave the test images as is, but split the 60,000-image MNIST training set into two parts: a set of 50,000 images, which we'll use to train our neural network, and a separate 10,000 image validation set.

We won't use the validation data in this chapter, but later in the book we'll find it useful in figuring out how to set certain hyper-parameters of the neural network - things like the learning rate, and so on, which aren't directly selected by our learning algorithm.

When I refer to the 'MNIST training data' from now on, I'll be referring to our 50,000 image data set, not the original 60,000 image data set* *As noted earlier, the MNIST data set is based on two data sets collected by NIST, the United States' National Institute of Standards and Technology.

self.weights = [np.random.randn(y, x)
                for x, y in zip(sizes[:-1], sizes[1:])]

So, for example, if we want to create a Network object with 2 neurons in the first layer, 3 neurons in the second layer, and 1 neuron in the final layer, we'd do this with the code: net = Network([2, 3, 1])

The biases and weights in the Network object are all initialized randomly, using the Numpy np.random.randn function to generate Gaussian distributions with mean $0$ and standard deviation $1$.

Note that the Network initialization code assumes that the first layer of neurons is an input layer, and omits to set any biases for those neurons, since biases are only ever used in computing the outputs from later layers.

The big advantage of using this ordering is that it means that the vector of activations of the third layer of neurons is: \begin{eqnarray} a' = \sigma(w a + b). \tag{22}\end{eqnarray}

(This is called vectorizing the function $\sigma$.) It's easy to verify that Equation (22), $a' = \sigma(w a + b)$, gives the same result as our earlier rule, Equation (4), $\frac{1}{1+\exp(-\sum_j w_j x_j-b)}$, for computing the output of a sigmoid neuron.

Exercise: Write out Equation (22), $a' = \sigma(w a + b)$, in component form, and verify that it gives the same result as the rule (4) for computing the output of a sigmoid neuron.

We then add a feedforward method to the Network class, which, given an input a for the network, returns the corresponding output* *It is assumed that the input a is an (n, 1) Numpy ndarray, not a (n,) vector.

Although using an (n,) vector appears the more natural choice, using an (n, 1) ndarray makes it particularly easy to modify the code to feedforward multiple inputs at once, and that is sometimes convenient.

All the method does is apply Equation (22), $a' = \sigma(w a + b)$, for each layer.
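One application of Equation (22), written out with plain numpy (a sketch consistent with the (n, 1) column-vector convention above, not the book's Network class):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.random.randn(3, 2)       # weights from a layer of 2 neurons to a layer of 3
b = np.random.randn(3, 1)       # biases of the 3 receiving neurons
a = np.random.randn(2, 1)       # activations of the sending layer, as an (n, 1) ndarray
a_prime = sigmoid(np.dot(w, a) + b)   # Equation (22)
print(a_prime.shape)            # (3, 1)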

mini_batches = [
    training_data[k:k+mini_batch_size]
    for k in xrange(0, n, mini_batch_size)]

self.update_mini_batch(mini_batch, eta)

print "Epoch {0}: {1} / {2}".format(

j, self.evaluate(test_data), n_test)

print "Epoch {0} complete".format(j)

This is done by the code self.update_mini_batch(mini_batch, eta), which updates the network weights and biases according to a single iteration of gradient descent, using just the training data in mini_batch.

delta_nabla_b, delta_nabla_w = self.backprop(x, y)

nabla_b = [nb+dnb for nb, dnb in zip(nabla_b, delta_nabla_b)]

nabla_w = [nw+dnw for nw, dnw in zip(nabla_w, delta_nabla_w)]

self.weights = [w-(eta/len(mini_batch))*nw
                for w, nw in zip(self.weights, nabla_w)]

self.biases = [b-(eta/len(mini_batch))*nb
               for b, nb in zip(self.biases, nabla_b)]

Most of the work is done by the line delta_nabla_b, delta_nabla_w = self.backprop(x, y), which invokes something called the backpropagation algorithm, a fast way of computing the gradient of the cost function.

The self.backprop method makes use of a few extra functions to help in computing the gradient, namely sigmoid_prime, which computes the derivative of the $\sigma$ function, and self.cost_derivative, which I won't describe here.

activations = [x] # list to store all the activations, layer by layer

zs = [] # list to store all the z vectors, layer by layer

# l = 1 means the last layer of neurons, l = 2 is the second-last layer, and so on

delta = np.dot(self.weights[-l+1].transpose(), delta) * sp

test_results = [(np.argmax(self.feedforward(x)), y)
                for (x, y) in test_data]

Finally, we'll use stochastic gradient descent to learn from the MNIST training_data over 30 epochs, with a mini-batch size of 10, and a learning rate of $\eta = 3.0$: >>> net.SGD(training_data, 30, 10, 3.0, test_data=test_data)

As was the case earlier, if you're running the code as you read along, you should be warned that it takes quite a while to execute (on my machine this experiment takes tens of seconds for each training epoch), so it's wise to continue reading in parallel while the code executes.

At least in this case, using more hidden neurons helps us get better results* *Reader feedback indicates quite some variation in results for this experiment, and some training runs give results quite a bit worse.

Using the techniques introduced in chapter 3 will greatly reduce the variation in performance across different training runs for our networks..

(If making a change improves things, try doing more!) If we do that several times over, we'll end up with a learning rate of something like $\eta = 1.0$ (and perhaps fine tune to $3.0$), which is close to our earlier experiments.

Exercise Try creating a network with just two layers - an input and an output layer, no hidden layer - with 784 and 10 neurons, respectively.

The data structures used to store the MNIST data are described in the documentation strings - it's straightforward stuff, tuples and lists of Numpy ndarray objects (think of them as vectors if you're not familiar with ndarrays):

"""mnist_loader
~~~~~~~~~~~~

A library to load the MNIST image data.

In some sense, the moral of both our results and those in more sophisticated papers, is that for some problems: sophisticated algorithm $\leq$ simple learning algorithm + good training data.

We could attack this problem the same way we attacked handwriting recognition - by using the pixels in the image as input to a neural network, with the output from the network a single neuron indicating either 'Yes, it's a face' or 'No, it's not a face'.

The end result is a network which breaks down a very complicated question - does this image show a face or not - into very simple questions answerable at the level of single pixels.

It does this through a series of many layers, with early layers answering very simple and specific questions about the input image, and later layers building up a hierarchy of ever more complex and abstract concepts.

Comparing a deep network to a shallow network is a bit like comparing a programming language with the ability to make function calls to a stripped down language with no ability to make such calls.

One of our core aspirations at OpenAI is to develop algorithms and techniques that endow computers with an understanding of our world.

To train a generative model we first collect a large amount of data in some domain (e.g., think millions of images, sentences, or sounds, etc.) and then train a model to generate data like it.

The intuition behind this approach follows a famous quote from Richard Feynman: “What I cannot create, I do not understand.” The trick is that the neural networks we use as generative models have a number of parameters significantly smaller than the amount of data we train them on, so the models are forced to discover and efficiently internalize the essence of the data in order to generate it.

Suppose we have some large collection of images, such as the 1.2 million images in the ImageNet dataset (but keep in mind that this could eventually be a large collection of images or videos from the internet or robots).

This network takes as input 100 random numbers drawn from a uniform distribution (we refer to these as a code, or latent variables, in red) and outputs an image (in this case 64x64x3 images on the right, in green).

But in addition to that — and here's the trick — we can also backpropagate through both the discriminator and the generator to find how we should change the generator's parameters to make its 200 samples slightly more confusing for the discriminator.

These two networks are therefore locked in a battle: the discriminator is trying to distinguish real images from fake images and the generator is trying to create images that make the discriminator think they are real.
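That battle can be written down in a few lines. The sketch below is a heavily simplified training step in PyTorch (my own toy networks and sizes, not OpenAI's DCGAN code): the discriminator is pushed to output 1 on real images and 0 on generated ones, and the generator is updated by backpropagating through the discriminator so that its samples get classified as real.

import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(100, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):                 # real_images: (batch, 784) tensor
    batch = real_images.size(0)
    real, fake = torch.ones(batch, 1), torch.zeros(batch, 1)
    # Discriminator step: real images -> 1, generated images -> 0.
    z = torch.randn(batch, 100)              # the latent code
    d_loss = bce(D(real_images), real) + bce(D(G(z).detach()), fake)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()
    # Generator step: make D label the generated samples as real.
    z = torch.randn(batch, 100)
    g_loss = bce(D(G(z)), real)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()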

In both cases the samples from the generator start out noisy and chaotic, and over time converge to have more plausible image statistics: This is exciting — these neural networks are learning what the visual world looks like!

Eventually, the model may discover many more complex regularities: that there are certain types of backgrounds, objects, textures, that they occur in certain likely arrangements, or that they transform in certain ways over time in videos, etc.

In the example image below, the blue region shows the part of the image space that, with a high probability (over some threshold) contains real images, and black dots indicate our data points (each is one image in our dataset).

Now, our model also describes a distribution $\hat{p}_{\theta}(x)$ (green) that is defined implicitly by taking points from a unit Gaussian distribution (red) and mapping them through a (deterministic) neural network — our generative model (yellow).

Therefore, you can imagine the green distribution starting out random and then the training process iteratively changing the parameters $\theta$ to stretch and squeeze it to better match the blue distribution.

First, as mentioned above GANs are a very promising family of generative models because, unlike other methods, they produce very clean and sharp images and learn codes that contain valuable information about these textures.

These techniques allow us to scale up GANs and obtain nice 128x128 ImageNet samples: Our CIFAR-10 samples also look very sharp - Amazon Mechanical Turk workers can distinguish our samples from real data with an error rate of 21.3% (50% would be random guessing): In addition to generating pretty pictures, we introduce an approach for semi-supervised learning with GANs that involves the discriminator producing an additional output indicating the label of the input.

On MNIST, for example, we achieve 99.14% accuracy with only 10 labeled examples per class with a fully connected neural network — a result that’s very close to the best known results with fully supervised approaches using all 60,000 labeled examples.

The core contribution of this work, termed inverse autoregressive flow (IAF), is a new approach that, unlike previous work, allows us to parallelize the computation of rich approximate posteriors, and make them almost arbitrarily flexible.

A regular GAN achieves the objective of reproducing the data distribution in the model, but the layout and organization of the code space is underspecified — there are many possible solutions to mapping the unit Gaussian to images and the one we end up with might be intricate and highly entangled.

It's clear from the five provided examples (along each row) that the resulting dimensions in the code capture interpretable dimensions, and that the model has perhaps understood that there are camera angles, facial variations, etc., without having been told that these features exist and are important: The next two recent projects are in a reinforcement learning (RL) setting (another area of focus at OpenAI), but they both involve a generative model component.

Additional presently known applications include image denoising, inpainting, super-resolution, structured prediction, exploration in reinforcement learning, and neural network pretraining in cases where labeled data is expensive.

Deep learning

In the last chapter we learned that deep neural networks are often much harder to train than shallow neural networks.

We'll also look at the broader picture, briefly reviewing recent progress on using deep nets for image recognition, speech recognition, and other applications.

We'll work through a detailed example - code and all - of using convolutional nets to solve the problem of classifying handwritten digits from the MNIST data set:

As we go we'll explore many powerful techniques: convolutions, pooling, the use of GPUs to do far more training than we did with our shallow networks, the algorithmic expansion of our training data (to reduce overfitting), the use of the dropout technique (also to reduce overfitting), the use of ensembles of networks, and others.

We conclude our discussion of image recognition with a survey of some of the spectacular recent progress using networks (particularly convolutional nets) to do image recognition.

We'll briefly survey other models of neural networks, such as recurrent neural nets and long short-term memory units, and how such models can be applied to problems in speech recognition, natural language processing, and other areas.

And we'll speculate about the future of neural networks and deep learning, ranging from ideas like intention-driven user interfaces, to the role of deep learning in artificial intelligence.

For the $28 \times 28$ pixel images we've been using, this means our network has $784$ ($= 28 \times 28$) input neurons.

Our earlier networks work pretty well: we've obtained a classification accuracy better than 98 percent, using training and test data from the MNIST handwritten digit data set.

But the seminal paper establishing the modern subject of convolutional networks was a 1998 paper, 'Gradient-based learning applied to document recognition', by Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner.

LeCun has since made an interesting remark on the terminology for convolutional nets: 'The [biological] neural inspiration in models like convolutional nets is very tenuous.

That's why I call them 'convolutional nets' not 'convolutional neural nets', and why we call the nodes 'units' and not 'neurons' '.

Despite this remark, convolutional nets use many of the same ideas as the neural networks we've studied up to now: ideas such as backpropagation, gradient descent, regularization, non-linear activation functions, and so on.

In a convolutional net, it'll help to think instead of the inputs as a $28 \times 28$ square of neurons, whose values correspond to the $28 \times 28$ pixel intensities we're using as inputs:

To be more precise, each neuron in the first hidden layer will be connected to a small region of the input neurons, say, for example, a $5 \times 5$ region, corresponding to $25$ input pixels.

So, for a particular hidden neuron, we might have connections that look like this: That region in the input image is called the local receptive field for the hidden neuron.

To illustrate this concretely, let's start with a local receptive field in the top-left corner: Then we slide the local receptive field over by one pixel to the right (i.e., by one neuron), to connect to a second hidden neuron:

Note that if we have a $28 \times 28$ input image, and $5 \times 5$ local receptive fields, then there will be $24 \times 24$ neurons in the hidden layer.

This is because we can only move the local receptive field $23$ neurons across (or $23$ neurons down), before colliding with the right-hand side (or bottom) of the input image.

In this chapter we'll mostly stick with stride length $1$, but it's worth knowing that people sometimes experiment with different stride lengths* *As was done in earlier chapters, if we're interested in trying different stride lengths then we can use validation data to pick out the stride length which gives the best performance.

The same approach may also be used to choose the size of the local receptive field - there is, of course, nothing special about using a $5 \times 5$ local receptive field.

In general, larger local receptive fields tend to be helpful when the input images are significantly larger than the $28 \times 28$ pixel MNIST images..

In other words, for the $j, k$th hidden neuron, the output is: \begin{eqnarray} \sigma\left(b + \sum_{l=0}^4 \sum_{m=0}^4 w_{l,m} a_{j+l, k+m} \right). \tag{125}\end{eqnarray}
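A direct, deliberately slow implementation of that equation (my own sketch, not network3.py): a single $5 \times 5$ set of shared weights and one shared bias slid across a $28 \times 28$ input produces one $24 \times 24$ feature map.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feature_map(a, w, b):
    # a: 28x28 input activations; w: 5x5 shared weights; b: shared bias.
    size = a.shape[0] - w.shape[0] + 1            # 28 - 5 + 1 = 24
    out = np.zeros((size, size))
    for j in range(size):
        for k in range(size):
            window = a[j:j+w.shape[0], k:k+w.shape[1]]
            out[j, k] = sigmoid(b + np.sum(w * window))
    return out

a = np.random.rand(28, 28)                        # stand-in for pixel intensities
w, b = np.random.randn(5, 5), 0.0
print(feature_map(a, w, b).shape)                 # (24, 24)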

Informally, think of the feature detected by a hidden neuron as the kind of input pattern that will cause the neuron to activate: it might be an edge in the image, for instance, or maybe some other type of shape.

To see why this makes sense, suppose the weights and bias are such that the hidden neuron can pick out, say, a vertical edge in a particular local receptive field.

To put it in slightly more abstract terms, convolutional networks are well adapted to the translation invariance of images: move a picture of a cat (say) a little ways, and it's still an image of a cat* *In fact, for the MNIST digit classification problem we've been studying, the images are centered and size-normalized.

One of the early convolutional networks, LeNet-5, used $6$ feature maps, each associated to a $5 \times 5$ local receptive field, to recognize MNIST digits.

Let's take a quick peek at some of the features which are learned* *The feature maps illustrated come from the final convolutional network we train, see here.:

Each map is represented as a $5 \times 5$ block image, corresponding to the $5 \times 5$ weights in the local receptive field.

By comparison, suppose we had a fully connected first layer, with $784 = 28 \times 28$ input neurons, and a relatively modest $30$ hidden neurons, as we used in many of the examples earlier in the book.

That, in turn, will result in faster training for the convolutional model, and, ultimately, will help us build deep networks using convolutional layers.

Incidentally, the name convolutional comes from the fact that the operation in Equation (125) is sometimes known as a convolution.

A little more precisely, people sometimes write that equation as $a^1 = \sigma(b + w * a^0)$, where $a^1$ denotes the set of output activations from one feature map, $a^0$ is the set of input activations, and $*$ is called a convolution operation.

In particular, I'm using 'feature map' to mean not the function computed by the convolutional layer, but rather the activation of the hidden neurons output from the layer.

In max-pooling, a pooling unit simply outputs the maximum activation in the $2 \times 2$ input region, as illustrated in the following diagram:

Note that since we have $24 \times 24$ neurons output from the convolutional layer, after pooling we have $12 \times 12$ neurons.

So if there were three feature maps, the combined convolutional and max-pooling layers would look like:

Here, instead of taking the maximum activation of a $2 \times 2$ region of neurons, we take the square root of the sum of the squares of the activations in the $2 \times 2$ region.
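Both pooling variants are one-liners over non-overlapping $2 \times 2$ blocks (a plain numpy sketch, not the Theano implementation used later in the chapter):

import numpy as np

def pool(feature_map, mode='max'):
    # Condense each non-overlapping 2x2 region of a 24x24 map into one number,
    # giving a 12x12 output.
    h, w = feature_map.shape
    blocks = feature_map.reshape(h // 2, 2, w // 2, 2)
    if mode == 'max':
        return blocks.max(axis=(1, 3))               # max-pooling
    return np.sqrt((blocks ** 2).sum(axis=(1, 3)))   # L2 pooling

fm = np.random.rand(24, 24)
print(pool(fm, 'max').shape, pool(fm, 'l2').shape)   # (12, 12) (12, 12)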

It's similar to the architecture we were just looking at, but has the addition of a layer of $10$ output neurons, corresponding to the $10$ possible values for MNIST digits ('0', '1', '2', etc):

Problem: Backpropagation in a convolutional network. The core equations of backpropagation in a network with fully-connected layers are (BP1), $\delta^L_j = \frac{\partial C}{\partial a^L_j} \sigma'(z^L_j)$, through (BP4), $\frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j$.

Suppose we have a network containing a convolutional layer, a max-pooling layer, and a fully-connected output layer, as in the network discussed above.

The program we'll use to do this is called network3.py, and it's an improved version of the programs network.py and network2.py developed in earlier chapters* *Note also that network3.py incorporates ideas from the Theano library's documentation on convolutional neural nets (notably the implementation of LeNet-5), from Misha Denil's implementation of dropout, and from Chris Olah..

But now that we understand those details, for network3.py we're going to use a machine learning library known as Theano* *See Theano: A CPU and GPU Math Expression Compiler in Python, by James Bergstra, Olivier Breuleux, Frederic Bastien, Pascal Lamblin, Ravzan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio (2010).

The examples which follow were run using Theano 0.6* *As I release this chapter, the current version of Theano has changed to version 0.7.

Note that the code in the script simply duplicates and parallels the discussion in this section. Note also that throughout the section I've explicitly specified the number of training epochs.

In practice, it's worth using early stopping, that is, tracking accuracy on the validation set, and stopping training when we are confident the validation accuracy has stopped improving.

Using the validation data to decide when to evaluate the test accuracy helps avoid overfitting to the test data (see this earlier discussion of the use of validation data).

Your results may vary slightly, since the network's weights and biases are randomly initialized* *In fact, in this experiment I actually did three separate runs training a network with this architecture.

This $97.80$ percent accuracy is close to the $98.04$ percent accuracy obtained back in Chapter 3, using a similar network architecture and learning hyper-parameters.

Second, while the final layer in the earlier network used sigmoid activations and the cross-entropy cost function, the current network uses a softmax final layer, and the log-likelihood cost function.

I haven't made this switch for any particularly deep reason - mostly, I've done it because softmax plus log-likelihood cost is more common in modern image classification networks.

In this architecture, we can think of the convolutional and pooling layers as learning about local spatial structure in the input training image, while the later, fully-connected layer learns at a more abstract level, integrating global information from across the entire image.

filter_shape=(20, 1, 5, 5),

poolsize=(2, 2)),

validation_data, test_data)
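The fragments above come from a network definition and training call along these lines; this is a sketch of how the network3.py interface described in this section fits together, and the exact arguments should be taken as indicative rather than as a verbatim quote of the chapter's listing:

from network3 import Network, ConvPoolLayer, FullyConnectedLayer, SoftmaxLayer
from network3 import load_data_shared

training_data, validation_data, test_data = load_data_shared()
mini_batch_size = 10
net = Network([
    ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                  filter_shape=(20, 1, 5, 5),
                  poolsize=(2, 2)),
    FullyConnectedLayer(n_in=20*12*12, n_out=100),
    SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
net.SGD(training_data, 60, mini_batch_size, 0.1,
        validation_data, test_data)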

Can we improve on the $98.78$ percent classification accuracy?

filter_shape=(20, 1, 5, 5),

poolsize=(2, 2)),

filter_shape=(40, 20, 5, 5),

poolsize=(2, 2)),

validation_data, test_data)

In fact, you can think of the second convolutional-pooling layer as having as input $12 \times 12$ 'images', whose 'pixels' represent the presence (or absence) of particular localized features in the original input image.

The output from the previous layer involves $20$ separate feature maps, and so there are $20 \times 12 \times 12$ inputs to the second convolutional-pooling layer.

In fact, we'll allow each neuron in this layer to learn from all $20 \times 5 \times 5$ input neurons in its local receptive field.

More informally: the feature detectors in the second convolutional-pooling layer have access to all the features from the previous layer, but only within their particular local receptive field* *This issue would have arisen in the first layer if the input images were in color.

In that case we'd have 3 input features for each pixel, corresponding to red, green and blue channels in the input image.

So we'd allow the feature detectors to have access to all color information, but only within a given local receptive field..

Problem Using the tanh activation function Several times earlier in the book I've mentioned arguments that the tanh function may be a better activation function than the sigmoid function.

Try training the network with tanh activations in the convolutional and fully-connected layers* *Note that you can pass activation_fn=tanh as a parameter to the ConvPoolLayer and FullyConnectedLayer classes..

Try plotting the per-epoch validation accuracies for both tanh- and sigmoid-based networks, all the way out to $60$ epochs.

If your results are similar to mine, you'll find the tanh networks train a little faster, but the final accuracies are very similar.

Can you get a similar training speed with the sigmoid, perhaps by changing the learning rate, or doing some rescaling* *You may perhaps find inspiration in recalling that $\sigma(z) = (1+\tanh(z/2))/2$.?

Try a half-dozen iterations on the learning hyper-parameters or network architecture, searching for ways that tanh may be superior to the sigmoid.

Personally, I did not find much advantage in switching to tanh, although I haven't experimented exhaustively, and perhaps you may find a way.

In any case, in a moment we will find an advantage in switching to the rectified linear activation function, and so we won't go any deeper into the use of tanh.
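Incidentally, the rescaling identity mentioned in the footnote above, $\sigma(z) = (1+\tanh(z/2))/2$, is easy to confirm numerically:

import numpy as np

z = np.linspace(-5, 5, 11)
sigma = 1.0 / (1.0 + np.exp(-z))
print(np.allclose(sigma, (1 + np.tanh(z / 2)) / 2))   # True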

Using rectified linear units: The network we've developed at this point is actually a variant of one of the networks used in the seminal 1998 paper* *'Gradient-based learning applied to document recognition', by Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner (1998).

filter_shape=(20, 1, 5, 5),

poolsize=(2, 2),

activation_fn=ReLU),

filter_shape=(40, 20, 5, 5),

poolsize=(2, 2),

activation_fn=ReLU),

However, across all my experiments I found that networks based on rectified linear units consistently outperformed networks based on sigmoid activation functions.

The reason for that recent adoption is empirical: a few people tried rectified linear units, often on the basis of hunches or heuristic arguments* *A common justification is that $\max(0, z)$ doesn't saturate in the limit of large $z$, unlike sigmoid neurons, and this helps rectified linear units continue learning.

A simple way of expanding the training data is to displace each training image by a single pixel, either up one pixel, down one pixel, left one pixel, or right one pixel.
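One simple way to implement those single-pixel displacements, padding the vacated edge with zeros (a rough sketch of the idea, not the chapter's expansion script):

import numpy as np

def shifted_copies(image):
    # Return the four copies of a 28x28 image displaced by one pixel
    # (down, up, right, left), with the newly exposed row or column zeroed.
    copies = []
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        shifted = np.roll(image, (dy, dx), axis=(0, 1))
        if dy == 1:   shifted[0, :] = 0
        if dy == -1:  shifted[-1, :] = 0
        if dx == 1:   shifted[:, 0] = 0
        if dx == -1:  shifted[:, -1] = 0
        copies.append(shifted)
    return copies

image = np.random.rand(28, 28)
print(len(shifted_copies(image)))    # 4 displaced versions per training image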

net = Network([
    ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                  filter_shape=(20, 1, 5, 5), poolsize=(2, 2),
                  activation_fn=ReLU),
    ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),
                  filter_shape=(40, 20, 5, 5), poolsize=(2, 2),
                  activation_fn=ReLU),
    FullyConnectedLayer(n_in=40*4*4, n_out=100, activation_fn=ReLU),
    SoftmaxLayer(n_in=100, n_out=10)], mini_batch_size)
net.SGD(expanded_training_data, 60, mini_batch_size, 0.03,
        validation_data, test_data, lmbda=0.1)

Just to remind you of the flavour of some of the results in that earlier discussion: in 2003 Simard, Steinkraus and Platt* *Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis, by Patrice Simard, Dave Steinkraus, and John Platt (2003).

improved their MNIST performance to $99.6$ percent using a neural network otherwise very similar to ours, using two convolutional-pooling layers, followed by a hidden fully-connected layer with $100$ neurons.

There were a few differences of detail in their architecture - they didn't have the advantage of using rectified linear units, for instance - but the key to their improved performance was expanding the training data.

net = Network([
    ConvPoolLayer(image_shape=(mini_batch_size, 1, 28, 28),
                  filter_shape=(20, 1, 5, 5), poolsize=(2, 2),
                  activation_fn=ReLU),
    ConvPoolLayer(image_shape=(mini_batch_size, 20, 12, 12),
                  filter_shape=(40, 20, 5, 5), poolsize=(2, 2),
                  activation_fn=ReLU),
    FullyConnectedLayer(
        n_in=40*4*4, n_out=1000, activation_fn=ReLU, p_dropout=0.5),
    FullyConnectedLayer(
        n_in=1000, n_out=1000, activation_fn=ReLU, p_dropout=0.5),
    SoftmaxLayer(n_in=1000, n_out=10, p_dropout=0.5)], mini_batch_size)
net.SGD(expanded_training_data, 40, mini_batch_size, 0.03,
        validation_data, test_data)

Using this, we obtain an accuracy of $99.60$ percent, which is a substantial improvement over our earlier results, especially our main benchmark, the network with $100$ hidden neurons, where we achieved $99.37$ percent.

In fact, I tried experiments with both $300$ and $1,000$ hidden neurons, and obtained (very slightly) better validation performance with $1,000$ hidden neurons.

Why we only applied dropout to the fully-connected layers: If you look carefully at the code above, you'll notice that we applied dropout only to the fully-connected section of the network, not to the convolutional layers.

But apart from that, they used few other tricks, including no convolutional layers: it was a plain, vanilla network, of the kind that, with enough patience, could have been trained in the 1980s (if the MNIST data set had existed), given enough computing power.

In particular, we saw that the gradient tends to be quite unstable: as we move from the output layer to earlier layers the gradient tends to either vanish (the vanishing gradient problem) or explode (the exploding gradient problem).

In particular, in our final experiments we trained for $40$ epochs using a data set $5$ times larger than the raw MNIST training data.

I've occasionally heard people adopt a deeper-than-thou attitude, holding that if you're not keeping-up-with-the-Joneses in terms of number of hidden layers, then you're not really doing deep learning.

To speed that process up you may find it helpful to revisit Chapter 3's discussion of how to choose a neural network's hyper-parameters, and perhaps also to look at some of the further reading suggested in that section.

Here's the code (discussion below)* *Note added November 2016: several readers have noted that in the line initializing self.w, I set scale=np.sqrt(1.0/n_out), when the arguments of Chapter 3 suggest a better initialization may be scale=np.sqrt(1.0/n_in).

self.w = theano.shared(
    np.asarray(np.random.normal(
        loc=0.0, scale=np.sqrt(1.0/n_out), size=(n_in, n_out)),
        dtype=theano.config.floatX), name='w', borrow=True)
self.b = theano.shared(
    np.asarray(np.random.normal(loc=0.0, scale=1.0, size=(n_out,)),
               dtype=theano.config.floatX), name='b', borrow=True)

I use the name inpt rather than input because input is a built-in function in Python, and messing with built-ins tends to cause unpredictable behavior and difficult-to-diagnose bugs.

So self.inpt_dropout and self.output_dropout are used during training, while self.inpt and self.output are used for all other purposes, e.g., evaluating accuracy on the validation and test data.
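That training/evaluation split is the usual dropout pattern. Here is a small numpy sketch of the idea, using inverted dropout for simplicity rather than the exact scaling used in network3.py:

import numpy as np

def dropout_forward(activations, p_dropout, training):
    """Zero a fraction p_dropout of units during training and rescale the survivors,
    so that no extra correction is needed at evaluation time."""
    if not training or p_dropout == 0.0:
        return activations
    mask = (np.random.rand(*activations.shape) >= p_dropout)
    return activations * mask / (1.0 - p_dropout)

a = np.random.rand(4, 100)
a_train = dropout_forward(a, p_dropout=0.5, training=True)    # analogous to output_dropout
a_eval  = dropout_forward(a, p_dropout=0.5, training=False)   # analogous to output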

for j in xrange(1, len(self.layers)):
    prev_layer, layer = self.layers[j-1], self.layers[j]
    layer.set_inpt(
        prev_layer.output, prev_layer.output_dropout, self.mini_batch_size)

Now, this isn't a Theano tutorial, and so we won't get too deeply into what it means that these are symbolic variables* *The Theano documentation provides a good introduction to Theano.

# define the (regularized) cost function, symbolic gradients, and updates
l2_norm_squared = sum([(layer.w**2).sum() for layer in self.layers])
cost = self.layers[-1].cost(self) + \
       0.5*lmbda*l2_norm_squared/num_training_batches
grads = T.grad(cost, self.params)
updates = [(param, param-eta*grad) for param, grad in zip(self.params, grads)]

i = T.lscalar()  # mini-batch index
train_mb = theano.function(
    [i], cost, updates=updates,
    givens={self.x: training_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size],
            self.y: training_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size]})
validate_mb_accuracy = theano.function(
    [i], self.layers[-1].accuracy(self.y),
    givens={self.x: validation_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size],
            self.y: validation_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size]})
test_mb_accuracy = theano.function(
    [i], self.layers[-1].accuracy(self.y),
    givens={self.x: test_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size],
            self.y: test_y[i*self.mini_batch_size: (i+1)*self.mini_batch_size]})
self.test_mb_predictions = theano.function(
    [i], self.layers[-1].y_out,
    givens={self.x: test_x[i*self.mini_batch_size: (i+1)*self.mini_batch_size]})

iteration = num_training_batches*epoch+minibatch_index

if iteration % 1000 == 0:

print("Training mini-batch number {0}".format(iteration))

cost_ij = train_mb(minibatch_index)

if (iteration+1) % num_training_batches == 0:

validation_accuracy = np.mean(

[validate_mb_accuracy(j) for j in xrange(num_validation_batches)])

print("Epoch {0}: validation accuracy {1:.2

epoch, validation_accuracy))

if validation_accuracy >= best_validation_accuracy:

print("This is the best validation accuracy to date.")

best_validation_accuracy = validation_accuracy

best_iteration = iteration

if test_data:

test_accuracy = np.mean(

[test_mb_accuracy(j) for j in xrange(num_test_batches)])

print('The corresponding test accuracy is {0:.2%}'.format(

test_accuracy))

# define the (regularized) cost function, symbolic gradients, and updates

0.5*lmbda*l2_norm_squared/num_training_batches

for param, grad in zip(self.params, grads)]

In these lines we symbolically set up the regularized log-likelihood cost function, compute the corresponding derivatives in the gradient function, as well as the corresponding parameter updates.

With all these things defined, the stage is set to define the train_mb function, a Theano symbolic function which uses the updates to update the Network parameters, given a mini-batch index.

The remainder of the SGD method is self-explanatory - we simply iterate over the epochs, repeatedly training the network on mini-batches of training data, and computing the validation and test accuracies.


class ConvPoolLayer(object):
    def __init__(self, filter_shape, image_shape, poolsize=(2, 2),
                 activation_fn=sigmoid):
        """`filter_shape` is a tuple of length 4, whose entries are the number
        of filters, the number of input feature maps, the filter height, and the
        filter width.  `poolsize` is a tuple of length 2, whose entries are the y and
        x pooling sizes."""
        n_out = filter_shape[0]*np.prod(filter_shape[2:])/np.prod(poolsize)
        self.w = theano.shared(np.asarray(
            np.random.normal(loc=0, scale=np.sqrt(1.0/n_out), size=filter_shape),
            dtype=theano.config.floatX), borrow=True)
        self.b = theano.shared(np.asarray(
            np.random.normal(loc=0, scale=1.0, size=(filter_shape[0],)),
            dtype=theano.config.floatX), borrow=True)
        # in set_inpt, after convolving and pooling the input:
        self.output = activation_fn(pooled_out + self.b.dimshuffle('x', 0, 'x', 'x'))

# FullyConnectedLayer: weight and bias initialization
self.w = theano.shared(
    np.asarray(np.random.normal(loc=0.0, scale=np.sqrt(1.0/n_out), size=(n_in, n_out)),
               dtype=theano.config.floatX), name='w', borrow=True)
self.b = theano.shared(
    np.asarray(np.random.normal(loc=0.0, scale=1.0, size=(n_out,)),
               dtype=theano.config.floatX), name='b', borrow=True)

Earlier in the book we discussed an automated way of selecting the number of epochs to train for, known as early stopping.

Hint: After working on this problem for a while, you may find it useful to see the discussion at this link.

Earlier in the chapter I described a technique for expanding the training data by applying (small) rotations, skewing, and translation.

Note: Unless you have a tremendous amount of memory, it is not practical to explicitly generate the entire expanded data set.
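One common workaround, sketched below, is to generate the transformed images on the fly with a Python generator rather than materialising the whole expanded set; images and labels are assumed to be numpy arrays, and transforms a list of functions such as small rotations or shifts.

import numpy as np

def expanded_batches(images, labels, batch_size, transforms):
    """Yield mini-batches in which each image is replaced by a randomly chosen
    transformation of itself, so the expanded data set never sits in memory."""
    n = len(images)
    while True:
        idx = np.random.choice(n, size=batch_size, replace=False)
        batch = np.array([transforms[np.random.randint(len(transforms))](images[i]) for i in idx])
        yield batch, labels[idx]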

Show that rescaling all the weights in the network by a constant factor $c > 0$ simply rescales the outputs by a factor $c^{L-1}$, where $L$ is the number of layers.

Still, considering the problem will help you better understand networks containing rectified linear units.

Note: The word good in the second part of this makes the problem a research problem.

In 1998, the year MNIST was introduced, it took weeks to train a state-of-the-art workstation to achieve accuracies substantially worse than those we can achieve using a GPU and less than an hour of training.

With that said, the past few years have seen extraordinary improvements using deep nets to attack extremely difficult image recognition tasks.

They will identify the years 2011 to 2015 (and probably a few years beyond) as a time of huge breakthroughs, driven by deep convolutional nets.

The 2012 LRMD paper: Let me start with a 2012 paper* *Building high-level features using large scale unsupervised learning, by Quoc Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Kai Chen, Greg Corrado, Jeff Dean, and Andrew Ng (2012).

Note that the detailed architecture of the network used in the paper differed in many details from the deep convolutional networks we've been studying.

Details about ImageNet are available in the original ImageNet paper, ImageNet: a large-scale hierarchical image database, by Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei (2009).

If you're looking for a challenge, I encourage you to visit ImageNet's list of hand tools, which distinguishes between beading planes, block planes, chamfer planes, and about a dozen other types of plane, amongst other categories.

The 2012 KSH paper: The work of LRMD was followed by a 2012 paper of Krizhevsky, Sutskever and Hinton (KSH)* *ImageNet classification with deep convolutional neural networks, by Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton (2012).

By this top-$5$ criterion, KSH's deep convolutional network achieved an accuracy of $84.7$ percent, vastly better than the next-best contest entry, which achieved an accuracy of $73.8$ percent.

The input layer contains $3 \times 224 \times 224$ neurons, representing the RGB values for a $224 \times 224$ image.

The feature maps are split into two groups of $48$ each, with the first $48$ feature maps residing on one GPU, and the second $48$ feature maps residing on the other GPU.

Their respective parameters are: (3) $384$ feature maps, with $3 \times 3$ local receptive fields, and $256$ input channels; (4) $384$ feature maps, with $3 \times 3$ local receptive fields, and $192$ input channels; and (5) $256$ feature maps, with $3 \times 3$ local receptive fields, and $192$ input channels.

A Theano-based implementation has also been developed* *Theano-based large-scale visual recognition with multiple GPUs, by Weiguang Ding, Ruoyan Wang, Fei Mao, and Graham Taylor (2014)., with the code available here.

As in 2012, it involved a training set of $1.2$ million images, in $1,000$ categories, and the figure of merit was whether the top $5$ predictions included the correct category.

The winning team, based primarily at Google* *Going deeper with convolutions, by Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich (2014)., used a deep convolutional network with $22$ layers of neurons.

GoogLeNet achieved a top-5 accuracy of $93.33$ percent, a giant improvement over the 2013 winner (Clarifai, with $88.3$ percent), and the 2012 winner (KSH, with $84.7$ percent).

In 2014 a team of researchers wrote a survey paper about the ILSVRC competition* *ImageNet large scale visual recognition challenge, by Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C. Berg, and Li Fei-Fei (2014).

...the task of labeling images with 5 out of 1000 categories quickly turned out to be extremely challenging, even for some friends in the lab who have been working on ILSVRC and its classes for a while.

In the end I realized that to get anywhere competitively close to GoogLeNet, it was most efficient if I sat down and went through the painfully long training process and the subsequent careful annotation process myself...

Some images are easily recognized, while some images (such as those of fine-grained breeds of dogs, birds, or monkeys) can require multiple minutes of concentrated effort.

In other words, an expert human, working painstakingly, was with great effort able to narrowly beat the deep neural network.

In fact, Karpathy reports that a second human expert, trained on a smaller sample of images, was only able to attain a $12.0$ percent top-5 error rate, i.e., performance significantly below GoogLeNet's.

One encouraging practical set of results comes from a team at Google, who applied deep convolutional networks to the problem of recognizing street numbers in Google's Street View imagery* *Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks, by Ian J. Goodfellow, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, and Vinay Shet (2013).

And they go on to make the broader claim: 'We believe with this model we have solved [optical character recognition] for short sequences [of characters] for many applications.'

For instance, a 2013 paper* *Intriguing properties of neural networks, by Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus (2013) showed that deep networks may suffer from what are effectively blind spots.

The existence of the adversarial negatives appears to be in contradiction with the network’s ability to achieve high generalization performance.

The explanation is that the set of adversarial negatives is of extremely low probability, and thus is never (or rarely) observed in the test set, yet it is dense (much like the rational numbers), and so it is found near virtually every test case.

For example, one recent paper* *Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images, by Anh Nguyen, Jason Yosinski, and Jeff Clune (2014).

shows that given a trained network it's possible to generate images which look to a human like white noise, but which the network classifies as being in a known category with a very high degree of confidence.

If you read the neural networks literature, you'll run into many ideas we haven't discussed: recurrent neural networks, Boltzmann machines, generative models, transfer learning, reinforcement learning, and so on, on and on $\ldots$ and on!

One way RNNs are currently being used is to connect neural networks more closely to traditional ways of thinking about algorithms, ways of thinking based on concepts such as Turing machines and (conventional) programming languages.

A 2014 paper developed an RNN which could take as input a character-by-character description of a (very, very simple!) Python program, and use that description to predict the output.

For example, an approach based on deep nets has achieved outstanding results on large vocabulary continuous speech recognition.

And another system based on deep nets has been deployed in Google's Android operating system (for related technical work, see Vincent Vanhoucke's 2012-2015 papers).

Many other ideas used in feedforward nets, ranging from regularization techniques to convolutions to the activation and cost functions used, are also useful in recurrent nets.

Deep belief nets, generative models, and Boltzmann machines: Modern interest in deep learning began in 2006, with papers explaining how to train a type of neural network known as a deep belief network (DBN)* *See A fast learning algorithm for deep belief nets, by Geoffrey Hinton, Simon Osindero, and Yee-Whye Teh (2006), as well as the related work in Reducing the dimensionality of data with neural networks, by Geoffrey Hinton and Ruslan Salakhutdinov (2006)..

A generative model like a DBN can be used in a similar way, but it's also possible to specify the values of some of the feature neurons and then 'run the network backward', generating values for the input activations.

And the ability to do unsupervised learning is extremely interesting both for fundamental scientific reasons, and - if it can be made to work well enough - for practical applications.

Active areas of research include using neural networks to do natural language processing (see also this informative review paper), machine translation, as well as perhaps more surprising applications such as music informatics.

In many cases, having read this book you should be able to begin following recent work, although (of course) you'll need to fill in gaps in presumed background knowledge.

It combines deep convolutional networks with a technique known as reinforcement learning in order to learn to play video games well (see also this followup).

The idea is to use the convolutional network to simplify the pixel data from the game screen, turning it into a simpler set of features, which can be used to decide which action to take: 'go left', 'go down', 'fire', and so on.

What is particularly interesting is that a single network learned to play seven different classic video games pretty well, outperforming human experts on three of the games.

But looking past the surface gloss, consider that this system is taking raw pixel data - it doesn't even know the game rules!

Google CEO Larry Page once described the perfect search engine as understanding exactly what [your queries] mean and giving you back exactly what you want.

In this vision, instead of responding to users' literal queries, search will use machine learning to take vague user input, discern precisely what was meant, and take action on the basis of those insights.

Over the next few decades, thousands of companies will build products which use machine learning to make user interfaces that can tolerate imprecision, while discerning and acting on the user's true intent.

Inspired user interface design is hard, and I expect many companies will take powerful machine learning technology and use it to build insipid user interfaces.

Machine learning, data science, and the virtuous circle of innovation: Of course, machine learning isn't just being used to build intention-driven interfaces.

But I do want to mention one consequence of this fashion that is not so often remarked: over the long run it's possible the biggest breakthrough in machine learning won't be any single conceptual breakthrough.

If a company can invest 1 dollar in machine learning research and get 1 dollar and 10 cents back reasonably rapidly, then a lot of money will end up in machine learning research.

So, for example, Conway's law suggests that the design of a Boeing 747 aircraft will mirror the extended organizational structure of Boeing and its contractors at the time the 747 was designed.

If the application's dashboard is supposed to be integrated with some machine learning algorithm, the person building the dashboard better be talking to the company's machine learning expert.

I won't define 'deep ideas' precisely, but loosely I mean the kind of idea which is the basis for a rich field of enquiry.

The backpropagation algorithm and the germ theory of disease are both good examples. Think of things like the germ theory of disease, for instance, or the understanding of how antibodies work, or the understanding that the heart, lungs, veins and arteries form a complete cardiovascular system.

Instead of a monolith, we have fields within fields within fields, a complex, recursive, self-referential social structure, whose organization mirrors the connections between our deepest insights.

Deep learning is the latest super-special weapon I've heard used in such arguments* *Interestingly, often not by leading experts in deep learning, who have been quite restrained.

And there is paper after paper leveraging the same basic set of ideas: using stochastic gradient descent (or a close variation) to optimize a cost function.

Part-2: Tensorflow tutorial-> Building a small Neural network based image classifier:

In this Tensorflow tutorial, we shall build a convolutional neural network based image classifier using Tensorflow.

To demonstrate how to build a convolutional neural network based image classifier, we shall build a 6-layer neural network that will identify and separate images of dogs from those of cats.

A neuron takes an input (say x), does some computation on it (say, multiplies it by a variable w and adds another variable b) to produce a value (say, z = wx + b).

Networks which have many hidden layers tend to be more accurate and are called deep networks; hence, machine learning algorithms which use these deep networks are called deep learning.

Typically, all the neurons in one layer do a similar kind of mathematical operation, and that's how a layer gets its name (except for the input and output layers, which do few mathematical operations).

In order to calculate the dot product, it's mandatory for the 3rd dimension of the filter to be the same as the number of channels in the input.

When we calculate the dot product, it's an element-wise multiplication of a 5*5*3 sized chunk of the input with the 5*5*3 sized filter, followed by a sum of all the products.

We shall slide the convolutional filter over the whole input image to calculate this output across the image, as shown by the schematic below. In this case, we slide our window by 1 pixel at a time.

If you concatenate all these outputs in 2D, we get an output activation map of size 28*28 (can you see why it is 28*28, starting from 32*32 with a 5*5 filter and a stride of 1? Each dimension shrinks to (32 - 5)/1 + 1 = 28).

So, it’s a standard practice to add zeros on the boundary of the input layer such that the output is the same size as input layer.

So, in this example, if we add a padding of size 2 on both sides of the input layer, the size of the output layer will be 32*32*6 which works great from the implementation purpose as well.

Then, the output size will be (N - F + 2P)/S + 1. A pooling layer is mostly used immediately after the convolutional layer to reduce the spatial size (only width and height, not depth).

The most common form of pooling is Max pooling where we take a filter of size F*F and apply the maximum operation over the F*F sized part of the image.

Then the output sizes w2*h2*d2 will be: w2 = (w1 - f)/S + 1, h2 = (h1 - f)/S + 1, d2 = d1. Most common pooling is done with a filter of size 2*2 with a stride of 2.
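These size formulas are easy to wrap in a small helper; a sketch (not part of the tutorial's code):

def output_size(n, f, p, s):
    """Output width/height for input size n, filter size f, padding p and stride s."""
    return (n - f + 2 * p) // s + 1

print(output_size(32, 5, 0, 1))   # 28: the unpadded convolution discussed above
print(output_size(32, 5, 2, 1))   # 32: a padding of 2 keeps the spatial size
print(output_size(32, 2, 0, 2))   # 16: a 2*2 max-pooling layer with stride 2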

For example, when we are trying to build the classifier between dog and cat, we are looking to find parameters such that the output layer gives out the probability of dog as 1 (or at least higher than cat) for all images of dogs, and the probability of cat as 1 (or at least higher than dog) for all images of cats.

So, let’s say, we start with some initial values of parameters and feed 1 training image(in reality multiple images are fed together) of dog and we calculate the output of the network as 0.1 for it being a dog and 0.9 of it being a cat.

There is a variable, called the learning rate, that governs how fast we change the parameters of the network during training. If you think about it, we want to maximise the total number of correct classifications by the network.

So, we keep an eye on the cost and we keep doing many iterations of forward and backward propagations(10s of thousands sometimes) till cost stops decreasing.

Let’s say \(y_{prediction}\) is the vector containing the output of the network for all the training images and \(y_{actual}\) is the vector containing actual values(also called ground truth) of these labeled images.

So, we define the cost as the average of these distances for all the images: $$ cost = \frac{1}{2n} \sum_{i=1}^{n} \left(y_{actual}^{(i)} - y_{prediction}^{(i)}\right)^2 $$ This is a very simple example of cost, but in actual training, we use much more complicated cost measures, like cross-entropy cost.

In production set-up when we get a new image of dog/cat to classify, we load this model in the same network architecture and calculate the probability of the new image being a cat/dog.

Rather, let’s say we have total 1600 images, we divide them in small batches say of size 16 or 32 called batch-size.

ii) Shape function: if you have a multi-dimensional Tensor in TF, you can get its shape; for the input batch above the output will be array([ 16, 128, 128, 3], dtype=int32). You can reshape this to a new 2D Tensor of shape [16, 128*128*3] = [16, 49152].
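For example, in TensorFlow 1.x the reshape might look like this sketch (the variable names are illustrative, not the tutorial's):

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[16, 128, 128, 3])   # a batch of 16 RGB images of 128x128
flat = tf.reshape(x, [16, 128 * 128 * 3])                  # new shape: [16, 49152]
print(flat.get_shape())                                     # (16, 49152)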

Softmax is a function that converts a vector containing real values to a same-shaped vector of real values in the range (0, 1), whose sum is 1.

We shall apply the softmax function to the output of our convolutional neural network in order to convert the output to the probability for each class.
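For instance (a sketch; layer_fc2 is a stand-in for the output of the last fully connected layer):

import tensorflow as tf

layer_fc2 = tf.placeholder(tf.float32, shape=[None, 2])   # stand-in for the last fully connected layer's output
y_pred = tf.nn.softmax(layer_fc2, name='y_pred')           # per-class probabilities in (0, 1), summing to 1
y_pred_cls = tf.argmax(y_pred, axis=1)                      # index of the most probable class (dog or cat)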

We have used 2000 images each of dogs and cats from the Kaggle dataset, but you could use any n image folders on your computer which contain different kinds of objects.

Typically, in the first convolutional layer, you pass n images of size width*height*num_channels, so the input has the shape [n width height num_channels].

filter = trainable variables defining the filter: if your filter is of size filter_size, the input fed has num_input_channels and you have num_filters filters in your current layer, then the filter will have the following shape: [filter_size filter_size num_input_channels num_filters]. strides = defines how much you move your filter when doing convolution.

Finally, we use ReLU as our activation function, which simply takes the output of max_pool and applies ReLU using tf.nn.relu. All these operations are done in a single convolution layer.
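Putting these pieces together, a convolution layer along the lines described here might look like the following sketch in TensorFlow 1.x; the helper name and the weight initialisation are illustrative, not necessarily the tutorial's exact code. (The parameter is called inputs rather than input to avoid shadowing Python's built-in, as noted earlier in this document.)

import tensorflow as tf

def create_conv_layer(inputs, num_input_channels, filter_size, num_filters):
    # trainable filter of shape [filter_size, filter_size, num_input_channels, num_filters]
    weights = tf.Variable(tf.truncated_normal(
        [filter_size, filter_size, num_input_channels, num_filters], stddev=0.05))
    biases = tf.Variable(tf.constant(0.05, shape=[num_filters]))

    layer = tf.nn.conv2d(input=inputs, filter=weights,
                         strides=[1, 1, 1, 1], padding='SAME')     # slide the filter one pixel at a time
    layer += biases
    layer = tf.nn.max_pool(value=layer, ksize=[1, 2, 2, 1],
                           strides=[1, 2, 2, 1], padding='SAME')    # 2*2 max-pooling with stride 2
    return tf.nn.relu(layer)                                        # ReLU activation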

We will use a simple cost that will be calculated using a Tensorflow function softmax_cross_entropy_with_logits, which takes the output of the last fully connected layer and the actual labels to calculate cross_entropy, whose average gives us the cost.

optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)
As you know, if we run the optimizer operation inside session.run(), then in order to calculate the value of cost, the whole network will have to be run, and we will pass the training images in a feed_dict (does that make sense?).

In order to calculate the cost, the whole network(3 convolution+1 flattening+2 fc layers) will have to be executed to produce layer_fc2(which is required to calculate cross_entropy, hence cost).
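Concretely, that cost-and-optimizer step might look like this sketch, where x_flat, layer_fc2 and y_true are stand-ins for the flattened conv output, the network's logits and the one-hot labels:

import tensorflow as tf

x_flat = tf.placeholder(tf.float32, shape=[None, 8 * 8 * 64])   # stand-in for the flattened conv output
y_true = tf.placeholder(tf.float32, shape=[None, 2])             # one-hot ground-truth labels (dog / cat)

weights = tf.Variable(tf.truncated_normal([8 * 8 * 64, 2], stddev=0.05))
biases = tf.Variable(tf.constant(0.05, shape=[2]))
layer_fc2 = tf.matmul(x_flat, weights) + biases                   # stand-in for the last fully connected layer

cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2, labels=y_true)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)

# One training step: running `optimizer` forces the whole graph above to be evaluated.
# session.run(optimizer, feed_dict={x_flat: x_batch, y_true: y_true_batch})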

acc = session.run(accuracy, feed_dict=feed_dict_train)
As training images along with labels are used for training, in general the training accuracy will be higher than the validation accuracy.

After you are done with training, you shall notice that there are many new files in the folder: File dogs-cats-model.meta contains the complete network graph and we can use this to recreate the graph later.

We pre-process the input image in the same way as in training, get hold of y_pred on the graph, and pass it the new image in a feed dict.
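A sketch of that restore-and-predict step in TensorFlow 1.x; the tensor names ('x:0', 'y_true:0', 'y_pred:0') and the 128x128 input size are assumptions about how the training graph was named and sized:

import numpy as np
import tensorflow as tf

session = tf.Session()
saver = tf.train.import_meta_graph('dogs-cats-model.meta')     # recreate the graph from the .meta file
saver.restore(session, tf.train.latest_checkpoint('./'))        # load the trained weights

graph = tf.get_default_graph()
x = graph.get_tensor_by_name('x:0')                             # input placeholder
y_true = graph.get_tensor_by_name('y_true:0')
y_pred = graph.get_tensor_by_name('y_pred:0')                   # probability output

image = np.zeros((1, 128, 128, 3), dtype=np.float32)            # a pre-processed test image would go here
probabilities = session.run(y_pred, feed_dict={x: image, y_true: np.zeros((1, 2))})
print(probabilities)                                             # e.g. [[P(dog), P(cat)]]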

Artificial neural network

Artificial neural networks (ANNs) or connectionist systems are computing systems vaguely inspired by the biological neural networks that constitute animal brains.[1] Such systems 'learn' to perform tasks by considering examples, generally without being programmed with any task-specific rules.

For example, in image recognition, they might learn to identify images that contain cats by analyzing example images that have been manually labeled as 'cat' or 'no cat' and using the results to identify cats in other images.

An ANN is based on a collection of connected units or nodes called artificial neurons which loosely model the neurons in a biological brain.

In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs.

Signals travel from the first layer (the input layer), to the last layer (the output layer), possibly after traversing the layers multiple times.

ANNs have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis.

With mathematical notation, Rosenblatt described circuitry not in the basic perceptron, such as the exclusive-or circuit that could not be processed by neural networks at the time.[8] In 1959, a biological model proposed by Nobel laureates Hubel and Wiesel was based on their discovery of two types of cells in the primary visual cortex: simple cells and complex cells.[9] The first functional networks with many layers were published by Ivakhnenko and Lapa in 1965, becoming the Group Method of Data Handling.[10][11][12] Neural network research stagnated after machine learning research by Minsky and Papert (1969),[13] who discovered two key issues with the computational machines that processed neural networks.

Much of artificial intelligence had focused on high-level (symbolic) models that are processed by using algorithms, characterized for example by expert systems with knowledge embodied in if-then rules, until in the late 1980s research expanded to low-level (sub-symbolic) machine learning, characterized by knowledge embodied in the parameters of a cognitive model.[citation needed] A

key trigger for renewed interest in neural networks and learning was Werbos's (1975) backpropagation algorithm that effectively solved the exclusive-or problem and more generally accelerated the training of multi-layer networks.

Backpropagation distributed the error term back up through the layers, by modifying the weights at each node.[8] In the mid-1980s, parallel distributed processing became popular under the name connectionism.

Rumelhart and McClelland (1986) described the use of connectionism to simulate neural processes.[14] Support vector machines and other, much simpler methods such as linear classifiers gradually overtook neural networks in machine learning popularity.

However, using neural networks transformed some domains, such as the prediction of protein structures.[15][16] In 1992, max-pooling was introduced to help with least shift invariance and tolerance to deformation to aid in 3D object recognition.[17][18][19] In 2010, Backpropagation training through max-pooling was accelerated by GPUs and shown to perform better than other pooling variants.[20] The vanishing gradient problem affects many-layered feedforward networks that used backpropagation and also recurrent neural networks (RNNs).[21][22] As errors propagate from layer to layer, they shrink exponentially with the number of layers, impeding the tuning of neuron weights that is based on those errors, particularly affecting deep networks.

To overcome this problem, Schmidhuber adopted a multi-level hierarchy of networks (1992) pre-trained one level at a time by unsupervised learning and fine-tuned by backpropagation.[23] Behnke (2003) relied only on the sign of the gradient (Rprop)[24] on problems such as image reconstruction and face localization.

Hinton et al. (2006) proposed learning a high-level representation using successive layers of binary or real-valued latent variables with a restricted Boltzmann machine[25] to model each layer.

Once sufficiently many layers have been learned, the deep architecture may be used as a generative model by reproducing the data when sampling down the model (an 'ancestral pass') from the top level feature activations.[26][27] In 2012, Ng and Dean created a network that learned to recognize higher-level concepts, such as cats, only from watching unlabeled images taken from YouTube videos.[28] Earlier challenges in training deep neural networks were successfully addressed with methods such as unsupervised pre-training, while available computing power increased through the use of GPUs and distributed computing.

Nanodevices[29] for very large scale principal components analyses and convolution may create a new class of neural computing because they are fundamentally analog rather than digital (even though the first implementations may use digital devices).[30] Ciresan and colleagues (2010)[31] in Schmidhuber's group showed that despite the vanishing gradient problem, GPUs make back-propagation feasible for many-layered feedforward neural networks.

Between 2009 and 2012, recurrent neural networks and deep feedforward neural networks developed in Schmidhuber's research group won eight international competitions in pattern recognition and machine learning.[32][33] For example, the bi-directional and multi-dimensional long short-term memory (LSTM)[34][35][36][37] of Graves et al.

won three competitions in connected handwriting recognition at the 2009 International Conference on Document Analysis and Recognition (ICDAR), without any prior knowledge about the three languages to be learned.[36][35] Ciresan and colleagues won pattern recognition contests, including the IJCNN 2011 Traffic Sign Recognition Competition,[38] the ISBI 2012 Segmentation of Neuronal Structures in Electron Microscopy Stacks challenge[39] and others.

Their neural networks were the first pattern recognizers to achieve human-competitive or even superhuman performance[40] on benchmarks such as traffic sign recognition (IJCNN 2012), or the MNIST handwritten digits problem.

Researchers demonstrated (2010) that deep neural networks interfaced to a hidden Markov model with context-dependent states that define the neural network output layer can drastically reduce errors in large-vocabulary speech recognition tasks such as voice search.

GPU-based implementations[41] of this approach won many pattern recognition contests, including the IJCNN 2011 Traffic Sign Recognition Competition,[38] the ISBI 2012 Segmentation of neuronal structures in EM stacks challenge,[39] the ImageNet Competition[42] and others.

Deep, highly nonlinear neural architectures similar to the neocognitron[43] and the 'standard architecture of vision',[44] inspired by simple and complex cells, were pre-trained by unsupervised methods by Hinton.[45][26] A team from his lab won a 2012 contest sponsored by Merck to design software to help find molecules that might identify new drugs.[46] As of 2011, the state of the art in deep learning feedforward networks alternated convolutional layers and max-pooling layers,[41][47] topped by several fully or sparsely connected layers followed by a final classification layer.

Such supervised deep learning methods were the first to achieve human-competitive performance on certain tasks.[40] ANNs were able to guarantee shift invariance to deal with small and large natural objects in large cluttered scenes, only when invariance extended beyond shift, to all ANN-learned concepts, such as location, type (object class label), scale, lighting and others.

This was realized in Developmental Networks (DNs)[48] whose embodiments are Where-What Networks, WWN-1 (2008)[49] through WWN-7 (2013).[50] An artificial neural network is a network of simple elements called artificial neurons, which receive input, change their internal state (activation) according to that input, and produce output depending on the input and activation.

The propagation function computes the input to a neuron from the outputs of predecessor neurons and typically has the form[51] $p_j(t) = \sum_i o_i(t) w_{ij}$. The learning rule is a rule or an algorithm which modifies the parameters of the neural network, in order for a given input to the network to produce a favored output.

This learning process typically amounts to modifying the weights and thresholds of the variables within the network.[51] Neural network models can be viewed as simple mathematical models defining a function

A common use of the phrase 'ANN model' is really the definition of a class of such functions (where members of the class are obtained by varying parameters, connection weights, or specifics of the architecture such as the number of neurons or their connectivity).

Mathematically, a neuron's network function $f(x)$ is defined as a composition of other functions $g_i(x)$, which can themselves be decomposed further. A widely used type of composition is the nonlinear weighted sum $f(x) = K\left(\sum_i w_i g_i(x)\right)$, where $K$

(commonly referred to as the activation function[52]) is some predefined function, such as the hyperbolic tangent or sigmoid function or softmax function or rectifier function.

It is convenient to refer to a collection of functions $g_i$ simply as a vector $g = (g_1, g_2, \ldots, g_n)$.

For applications where the solution is data dependent, the cost must necessarily be a function of the observations, otherwise the model would not relate to the data.

It is frequently defined as a statistic to which only approximations can be made. As a simple example, consider the problem of finding the model $f$ which minimizes $C = E\left[(f(x) - y)^2\right]$ for data pairs $(x, y)$ drawn from some distribution $\mathcal{D}$. In practical situations we would only have $N$ samples from $\mathcal{D}$, and thus, for the above example, we would only minimize the empirical cost $\hat{C} = \frac{1}{N}\sum_{i=1}^{N}(f(x_i) - y_i)^2$. The cost is therefore minimized over a sample of the data rather than the entire distribution $\mathcal{D}$.

While it is possible to define an ad hoc cost function, frequently a particular cost (function) is used, either because it has desirable properties (such as convexity) or because it arises naturally from a particular formulation of the problem (e.g., in a probabilistic formulation the posterior probability of the model can be used as an inverse cost).

The basics of continuous backpropagation[10][53][54][55] were derived in the context of control theory by Kelley[56] in 1960 and by Bryson in 1961,[57] using principles of dynamic programming.

In 1962, Dreyfus published a simpler derivation based only on the chain rule.[58] Bryson and Ho described it as a multi-stage dynamic system optimization method in 1969.[59][60] In 1970, Linnainmaa finally published the general method for automatic differentiation (AD) of discrete connected networks of nested differentiable functions.[61][62] This corresponds to the modern version of backpropagation which is efficient even when the networks are sparse.[10][53][63][64] In 1973, Dreyfus used backpropagation to adapt parameters of controllers in proportion to error gradients.[65] In 1974, Werbos mentioned the possibility of applying this principle to ANNs,[66] and in 1982, he applied Linnainmaa's AD method to neural networks in the way that is widely used today.[53][67] In 1986, Rumelhart, Hinton and Williams noted that this method can generate useful internal representations of incoming data in hidden layers of neural networks.[68] In 1993, Wan was the first[10] to win an international pattern recognition contest through backpropagation.[69] The weight updates of backpropagation can be done via stochastic gradient descent using the following equation: $w_{ij}(t+1) = w_{ij}(t) - \eta \frac{\partial C}{\partial w_{ij}}$, where $\eta$ is the learning rate and $C$ is the cost function.

The choice of the cost function depends on factors such as the learning type (supervised, unsupervised, reinforcement, etc.) and the activation function.

For example, when performing supervised learning on a multiclass classification problem, common choices for the activation function and cost function are the softmax function and cross entropy function, respectively.

The softmax function is defined as $p_j = \frac{\exp(x_j)}{\sum_k \exp(x_k)}$, where $p_j$ represents the class probability (the output of unit $j$) and $x_j$ and $x_k$ represent the total inputs to units $j$ and $k$ of the same level respectively. Cross entropy is defined as $C = -\sum_j d_j \log(p_j)$, where $d_j$ represents the target probability for output unit $j$ and $p_j$ is the probability output for $j$ after applying the activation function.
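A small numpy sketch of this softmax / cross-entropy pairing:

import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))        # subtract the max for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])   # total inputs x_j to the output units
p = softmax(logits)                  # class probabilities p_j, summing to 1
d = np.array([1.0, 0.0, 0.0])        # target probabilities (one-hot ground truth)
cross_entropy = -np.sum(d * np.log(p))
print(p, cross_entropy)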

The network is trained to minimize L2 error for predicting the mask ranging over the entire training set containing bounding boxes represented as masks.

A commonly used cost is the mean-squared error, which tries to minimize the average squared error between the network's output, $f(x)$, and the target value $y$ over all the example pairs.

Minimizing this cost using gradient descent for the class of neural networks called multilayer perceptrons (MLP), produces the backpropagation algorithm for training neural networks.

Tasks that fall within the paradigm of supervised learning are pattern recognition (also known as classification) and regression (also known as function approximation).

The cost function is dependent on the task (the model domain) and any a priori assumptions (the implicit properties of the model, its parameters and the observed variables).

In unsupervised learning, the cost function depends on the task and on a priori assumptions: in compression, for example, it could be related to the mutual information between the input and the network's output, whereas in statistical modeling, it could be related to the posterior probability of the model given the data (note that in both of those examples those quantities would be maximized rather than minimized).

In reinforcement learning, data $x_t$ are usually not given, but generated by an agent's interactions with the environment. At each point in time $t$, the agent performs an action $y_t$ and the environment generates an observation $x_t$ and an instantaneous cost $c_t$, according to some (usually unknown) dynamics.

The aim is to discover a policy for selecting actions that minimizes some measure of a long-term cost, e.g., the expected cumulative cost.

The environment's dynamics and the long-term cost for each policy are usually unknown, but can be estimated. Formally, the environment is modeled as a Markov decision process (MDP) with states $s_1, \ldots, s_n \in S$ and actions $a_1, \ldots, a_m \in A$, with the following probability distributions: the instantaneous cost distribution $P(c_t \mid s_t)$, the observation distribution $P(x_t \mid s_t)$ and the transition distribution $P(s_{t+1} \mid s_t, a_t)$; a policy is defined as the conditional distribution over actions given the observations. Taken together, the two define a Markov chain, and the aim is to discover the policy (i.e., the Markov chain) that minimizes the cost.

ANNs are frequently used in reinforcement learning as part of the overall algorithm.[77][78] Dynamic programming was coupled with ANNs (giving neurodynamic programming) by Bertsekas and Tsitsiklis[79] and applied to multi-dimensional nonlinear problems such as those involved in vehicle routing,[80] natural resources management[81][82] or medicine[83] because of the ability of ANNs to mitigate losses of accuracy even when reducing the discretization grid density for numerically approximating the solution of the original control problems.

In 2004 a recursive least squares algorithm was introduced to train CMAC neural network online.[84] This algorithm can converge in one step and update all weights in one step with any new input data.

Based on QR decomposition, this recursive learning algorithm was simplified to be O(N).[85] Training a neural network model essentially means selecting one model from the set of allowed models (or, in a Bayesian framework, determining a distribution over the set of allowed models) that minimizes the cost.

Backpropagation training algorithms fall into three categories: steepest descent (with variable learning rate and momentum, or resilient backpropagation); quasi-Newton methods; and Levenberg–Marquardt and conjugate gradient methods.[86] Evolutionary methods,[87] gene expression programming,[88] simulated annealing,[89] expectation-maximization, non-parametric methods and particle swarm optimization[90] are other methods for training neural networks.

The Group Method of Data Handling used a deep feedforward multilayer perceptron with eight layers.[92] It is a supervised learning network that grows layer by layer, where each layer is trained by regression analysis.

A convolutional neural network (CNN) is a class of deep, feed-forward networks, composed of one or more convolutional layers with fully connected layers (matching those in typical ANNs) on top.

In particular, max-pooling[18] is often structured via Fukushima's convolutional architecture.[94] This architecture allows CNNs to take advantage of the 2D structure of input data.

CNNs are easier to train than other regular, deep, feed-forward neural networks and have many fewer parameters to estimate.[97] Examples of applications in computer vision include DeepDream[98] and robot navigation.[99] Long short-term memory (LSTM) networks are RNNs that avoid the vanishing gradient problem.[100] LSTM is normally augmented by recurrent gates called forget gates.[101] LSTM networks prevent backpropagated errors from vanishing or exploding.[21] Instead errors can flow backwards through unlimited numbers of virtual layers in space-unfolded LSTM.

That is, LSTM can learn 'very deep learning' tasks[10] that require memories of events that happened thousands or even millions of discrete time steps ago.

Stacks of LSTM RNNs[103] trained by Connectionist Temporal Classification (CTC)[104] can find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences.

In 2003, LSTM started to become competitive with traditional speech recognizers.[105] In 2007, the combination with CTC achieved first good results on speech data.[106] In 2009, a CTC-trained LSTM was the first RNN to win pattern recognition contests, when it won several competitions in connected handwriting recognition.[10][36] In 2014, Baidu used CTC-trained RNNs to break the Switchboard Hub5'00 speech recognition benchmark, without traditional speech processing methods.[107] LSTM also improved large-vocabulary speech recognition,[108][109] text-to-speech synthesis,[110] for Google Android,[53][111] and photo-real talking heads.[112] In 2015, Google's speech recognition experienced a 49% improvement through CTC-trained LSTM.[113] LSTM became popular in Natural Language Processing.

Unlike previous models based on HMMs and similar concepts, LSTM can learn to recognise context-sensitive languages.[114] LSTM improved machine translation,[115][116] language modeling[117] and multilingual language processing.[118] LSTM combined with CNNs improved automatic image captioning.[119] Deep Reservoir Computing and Deep Echo State Networks (deepESNs)[120][121] provide a framework for efficiently trained models for hierarchical processing of temporal data, while enabling the investigation of the inherent role of RNN layered composition.[clarification needed] A

deep belief network (DBN) can be used to generatively pre-train a deep neural network, using the learned DBN weights as the initial weights, which are then fine-tuned by backpropagation or other discriminative algorithms. This allows for both improved modeling and faster convergence of the fine-tuning phase.[123] Large memory storage and retrieval neural networks (LAMSTAR)[124][125] are fast deep learning neural networks of many layers that can use many filters simultaneously.

Its speed is provided by Hebbian link-weights[126] that integrate the various and usually different filters (preprocessing functions) into its many layers and to dynamically rank the significance of the various layers and functions relative to a given learning task.

This grossly imitates biological learning which integrates various preprocessors (cochlea, retina, etc.) and cortexes (auditory, visual, etc.) and their various regions.

Its deep learning capability is further enhanced by using inhibition, correlation and its ability to cope with incomplete data, or 'lost' neurons or layers even amidst a task.

The link-weights allow dynamic determination of innovation and redundancy, and facilitate the ranking of layers, of filters or of individual neurons relative to a task.

LAMSTAR has been applied to many domains, including medical[127][128][129] and financial predictions,[130] adaptive filtering of noisy speech in unknown noise,[131] still-image recognition,[132] video image recognition,[133] software security[134] and adaptive control of non-linear systems.[135] LAMSTAR had a much faster learning speed and somewhat lower error rate than a CNN based on ReLU-function filters and max pooling, in 20 comparative studies.[136] These applications demonstrate delving into aspects of the data that are hidden from shallow learning networks and the human senses, such as in the cases of predicting onset of sleep apnea events,[128] of an electrocardiogram of a fetus as recorded from skin-surface electrodes placed on the mother's abdomen early in pregnancy,[129] of financial prediction[124] or in blind filtering of noisy speech.[131] LAMSTAR was proposed in 1996 (A U.S. Patent 5,920,852 A) and was further developed by Graupe and Kordylewski from 1997–2002.[137][138][139] A modified version, known as LAMSTAR 2, was developed by Schneider and Graupe in 2008.[140][141] The auto encoder idea is motivated by the concept of a good representation.

The whole process of auto encoding is to compare this reconstructed input to the original and try to minimize the error to make the reconstructed value as close as possible to the original.

This idea was introduced in 2010 by Vincent et al.[142] with a specific approach to good representation: a good representation is one that can be obtained robustly from a corrupted input and that will be useful for recovering the corresponding clean input.

In the denoising auto encoder, the original input $x$ is first corrupted into $\tilde{x}$ through a stochastic mapping. The corrupted input $\tilde{x}$ then passes through an encoding step that produces a hidden representation $y$, from which a decoding step reconstructs $z$; a minimization algorithm then tries to make $z$ as close as possible to the uncorrupted input $x$. The reconstruction error $L_H(x, z)$

might be either the cross-entropy loss with an affine-sigmoid decoder, or the squared error loss with an affine decoder.[142] In order to make a deep architecture, auto encoders stack.[143] Once the encoding function

of the first denoising auto encoder is learned and used to uncorrupt the input (corrupted input), the second level can be trained.[142] Once the stacked auto encoder is trained, its output can be used as the input to a supervised learning algorithm such as support vector machine classifier or a multi-class logistic regression.[142] A
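A minimal numpy sketch of a single denoising step (corrupt, encode, decode, measure the reconstruction error); the sizes, the corruption rate and the squared-error loss are arbitrary choices for illustration:

import numpy as np

rng = np.random.default_rng(0)
x = rng.random(20)                                   # original (clean) input
x_tilde = x * (rng.random(20) > 0.3)                 # corrupted input: roughly 30% of entries zeroed

W, b   = rng.normal(scale=0.1, size=(10, 20)), np.zeros(10)
W2, b2 = rng.normal(scale=0.1, size=(20, 10)), np.zeros(20)
s = lambda z: 1.0 / (1.0 + np.exp(-z))               # sigmoid non-linearity

y = s(W @ x_tilde + b)                               # hidden representation of the corrupted input
z = s(W2 @ y + b2)                                   # reconstruction
loss = np.mean((x - z) ** 2)                         # compared against the clean input x
print(loss)                                          # training would adjust W, b, W2, b2 to reduce this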

deep stacking network (DSN)[144] (deep convex network) is based on a hierarchy of blocks of simplified neural network modules.

It was introduced in 2011 by Deng and Dong.[145] It formulates the learning as a convex optimization problem with a closed-form solution, emphasizing the mechanism's similarity to stacked generalization.[146] Each DSN block is a simple module that is easy to train by itself in a supervised fashion without backpropagation for the entire blocks.[147] Each block consists of a simplified multi-layer perceptron (MLP) with a single hidden layer.

Each block estimates the same final label class y, and its estimate is concatenated with original input X to form the expanded input for the next block.

The tensor deep stacking network (TDSN) extends this architecture and offers two important improvements: it uses higher-order information from covariance statistics, and it transforms the non-convex problem of a lower-layer to a convex sub-problem of an upper-layer.[148] TDSNs use covariance statistics in a bilinear mapping from each of two distinct sets of hidden units in the same layer to predictions, via a third-order tensor.

While parallelization and scalability are not considered seriously in conventional DNNs,[149][150][151] all learning for DSNs and TDSNs is done in batch mode, to allow parallelization.[145][144] Parallelization allows scaling the design to larger (deeper) architectures and data sets.

The need for deep learning with real-valued inputs, as in Gaussian restricted Boltzmann machines, led to the spike-and-slab RBM (ssRBM), which models continuous-valued inputs with strictly binary latent variables.[152] Similar to basic RBMs and its variants, a spike-and-slab RBM is a bipartite graph, while like GRBMs, the visible units (input) are real-valued.

A spike is a discrete probability mass at zero, while a slab is a density over continuous domain;[153] their mixture forms a prior.[154] An extension of ssRBM called µ-ssRBM provides extra modeling capacity using additional terms in the energy function.

Features can be learned using deep architectures such as DBNs,[26] DBMs,[155] deep auto encoders,[156] convolutional variants,[157][158] ssRBMs,[153] deep coding networks,[159] DBNs with sparse feature learning,[160] RNNs,[161] conditional DBNs,[162] de-noising auto encoders.[163] This provides a better representation, allowing faster learning and more accurate classification with high-dimensional data.

However, these architectures are poor at learning novel classes with few examples, because all network units are involved in representing the input (a distributed representation) and must be adjusted together (high degree of freedom).

It is a full generative model, generalized from abstract concepts flowing through the layers of the model, which is able to synthesize new examples in novel classes that look 'reasonably' natural.

All the levels are learned jointly by maximizing a joint log-probability score.[169] In a DBM with three hidden layers, the probability of a visible input ν is obtained by marginalizing the joint Boltzmann distribution over all three sets of hidden units.

A deep predictive coding network (DPCN) is a predictive coding scheme that uses top-down information to empirically adjust the priors needed for a bottom-up inference procedure by means of a deep, locally connected, generative model.

DPCNs predict the representation of the layer, by using a top-down approach using the information in upper layer and temporal dependencies from previous states.[170] DPCNs can be extended to form a convolutional network.[170] Integrating external memory with ANNs dates to early research in distributed representations[171] and Kohonen's self-organizing maps.

For example, in sparse distributed memory or hierarchical temporal memory, the patterns encoded by neural networks are used as addresses for content-addressable memory, with 'neurons' essentially serving as address encoders and decoders.

Preliminary results demonstrate that neural Turing machines can infer simple algorithms such as copying, sorting and associative recall from input and output examples.

They out-performed neural Turing machines, long short-term memory systems and memory networks on sequence-processing tasks.[180][181][182][183][184] Approaches that represent previous experiences directly and use a similar experience to form a local model are often called nearest neighbour or k-nearest neighbors methods.[185] Deep learning is useful in semantic hashing,[186] where a deep graphical model models the word-count vectors[187] obtained from a large set of documents. Documents are mapped to memory addresses in such a way that semantically similar documents are located at nearby addresses.

Unlike sparse distributed memory that operates on 1000-bit addresses, semantic hashing works on 32 or 64-bit addresses found in a conventional computer architecture.

These models have been applied in the context of question answering (QA) where the long-term memory effectively acts as a (dynamic) knowledge base and the output is a textual response.[190] Deep neural networks can be potentially improved by deepening and parameter reduction, while maintaining trainability.

While training extremely deep (e.g., 1 million layers) neural networks might not be practical, CPU-like architectures such as pointer networks[191] and neural random-access machines[192] overcome this limitation by using external random-access memory and other components that typically belong to a computer architecture such as registers, ALU and pointers.

The key characteristic of these models is that their depth, the size of their short-term memory, and the number of parameters can be altered independently – unlike models like LSTM, whose number of parameters grows quadratically with memory size.

In that work, an LSTM RNN or CNN was used as an encoder to summarize a source sentence, and the summary was decoded using a conditional RNN language model to produce the translation.[196] These systems share building blocks: gated RNNs and CNNs and trained attention mechanisms.

For the sake of dimensionality reduction of the updated representation in each layer, a supervised strategy selects the best informative features among features extracted by KPCA.

A more straightforward way to use kernel machines for deep learning was developed for spoken language understanding.[199] The main idea is to use a kernel machine to approximate a shallow neural net with an infinite number of hidden units, then use stacking to splice the output of the kernel machine and the raw input in building the next, higher level of the kernel machine.

The basic search algorithm is to propose a candidate model, evaluate it against a dataset and use the results as feedback to teach the NAS network.[200] Using ANNs requires an understanding of their characteristics.

ANN capabilities fall within broad categories such as function approximation (including regression analysis and time-series prediction), classification (including pattern and sequence recognition and novelty detection), data processing (including filtering, clustering, blind source separation and compression), and robotics and control.[citation needed] Because of their ability to reproduce and model nonlinear processes, ANNs have found many applications in a wide range of disciplines.

Application areas include system identification and control (vehicle control, trajectory prediction,[201] process control, natural resource management), quantum chemistry,[202] game-playing and decision making (backgammon, chess, poker), pattern recognition (radar systems, face identification, signal classification,[203] object recognition and more), sequence recognition (gesture, speech, handwritten and printed text recognition), medical diagnosis, finance[204] (e.g.

automated trading systems), data mining, visualization, machine translation, social network filtering[205] and e-mail spam filtering.

ANNs have been used to diagnose cancers, including lung cancer,[206] prostate cancer, colorectal cancer[207] and to distinguish highly invasive cancer cell lines from less invasive lines using only cell shape information.[208][209] ANNs have been used to accelerate reliability analysis of infrastructures subject to natural disasters.[210][211] ANNs have also been used for building black-box models in geoscience: hydrology,[212][213] ocean modelling and coastal engineering,[214][215] and geomorphology,[216] are just few examples of this kind.

They range from models of the short-term behavior of individual neurons,[217] models of how the dynamics of neural circuitry arise from interactions between individual neurons and finally to models of how behavior can arise from abstract neural modules that represent complete subsystems.

These include models of the long-term, and short-term plasticity, of neural systems and their relations to learning and memory from the individual neuron to the system level.

A specific recurrent architecture with rational-valued weights (as opposed to full-precision real-valued weights) has the full power of a universal Turing machine,[218] using a finite number of neurons and standard linear connections.

Further, the use of irrational values for weights results in a machine with super-Turing power.[219] Models' 'capacity' property roughly corresponds to their ability to model any given function.

It is related to the amount of information that can be stored in the network and to the notion of complexity.[citation needed] Models may not consistently converge on a single solution, firstly because many local minima may exist, depending on the cost function and the model.

However, for CMAC neural network, a recursive least squares algorithm was introduced to train it, and this algorithm can be guaranteed to converge in one step.[84] Applications whose goal is to create a system that generalizes well to unseen examples, face the possibility of over-training.

Applications whose goal is to create a system that generalizes well to unseen examples face the possibility of over-training. Over-training can be countered with regularization, a notion that arises in a probabilistic (Bayesian) framework but also in statistical learning theory, where the goal is to minimize two quantities: the 'empirical risk' and the 'structural risk', which roughly correspond to the error over the training set and the predicted error on unseen data due to overfitting.
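In the common L2-regularized form, which is only one concrete choice among several, the two terms can be written directly in code; the coefficient lam trades off fitting the training set against keeping the model simple.

```python
# Hedged illustration: empirical risk (training error) plus a structural-risk
# penalty, here the common L2 weight penalty as one concrete choice.
import numpy as np

def regularized_loss(y_true, y_pred, weights, lam=0.01):
    empirical_risk = np.mean((y_true - y_pred) ** 2)   # error on the training set
    structural_risk = lam * np.sum(weights ** 2)       # penalty against overfitting
    return empirical_risk + structural_risk
```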

Supervised neural networks that use a mean squared error (MSE) cost function can use formal statistical methods to determine the confidence of the trained model.

A confidence analysis made this way is statistically valid as long as the output probability distribution stays the same and the network is not modified.

By using a softmax activation function (a generalization of the logistic function) on the output layer of the neural network (or a softmax component in a component-based neural network) for categorical target variables, the outputs can be interpreted as posterior probabilities.
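A numerically stable softmax makes the interpretation concrete: the outputs are non-negative and sum to one, so they can be read as class probabilities.

```python
# Numerically stable softmax over a vector of raw network outputs (logits).
import numpy as np

def softmax(logits):
    shifted = logits - np.max(logits)      # subtract the max for numerical stability
    exps = np.exp(shifted)
    return exps / np.sum(exps)

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs, probs.sum())  # e.g. [0.659 0.242 0.099], sums to 1.0
```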

A common criticism of neural networks, particularly in robotics, is that they require too much training for real-world operation.[citation needed] Potential solutions include randomly shuffling training examples, using a numerical optimization algorithm that does not take overly large steps when changing the network connections following an example, and grouping examples into so-called mini-batches.
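A minimal sketch of two of these remedies, per-epoch shuffling and mini-batch updates, is shown below on a toy linear model with a mean-squared-error loss; the sizes and learning rate are placeholders.

```python
# Hedged sketch: shuffle examples each epoch and update on mini-batches,
# on a toy linear model with an MSE loss.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ true_w + 0.1 * rng.normal(size=1000)

w = np.zeros(5)
batch_size, epochs, lr = 32, 20, 0.05

for epoch in range(epochs):
    order = rng.permutation(len(X))                  # shuffle examples each epoch
    for start in range(0, len(X), batch_size):       # iterate over mini-batches
        idx = order[start:start + batch_size]
        xb, yb = X[idx], y[idx]
        grad = 2 * xb.T @ (xb @ w - yb) / len(xb)    # MSE gradient on the batch
        w -= lr * grad                               # small, bounded update step

print(w)  # should approach true_w
```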

For example, by introducing a recursive least squares algorithm for the CMAC neural network, the training process takes only one step to converge.[84] No neural network has solved computationally difficult problems such as the n-Queens problem, the travelling salesman problem, or the problem of factoring large integers.

Backpropagation is a critical part of most artificial neural networks, although no such mechanism exists in biological neural networks.[220] How information is coded by real neurons is not known.

Sensory neurons fire action potentials more frequently with sensor activation, and muscle cells pull more strongly when their associated motor neurons receive action potentials more frequently.[221] Other than the case of relaying information from a sensory neuron to a motor neuron, almost nothing of the principles of how information is handled by biological neural networks is known.

Alexander Dewdney commented that, as a result, artificial neural networks have a 'something-for-nothing quality, one that imparts a peculiar aura of laziness and a distinct lack of curiosity about just how good these computing systems are'.

Weng[224] argued that the brain self-wires largely according to signal statistics and that, therefore, a serial cascade cannot catch all major statistical dependencies.

Large and effective neural networks require considerable computing resources.[225] While the brain has hardware tailored to the task of processing signals through a graph of neurons, simulating even a simplified neuron on von Neumann architecture may compel a neural network designer to fill many millions of database rows for its connections – which can consume vast amounts of memory and storage.

Schmidhuber notes that the resurgence of neural networks in the twenty-first century is largely attributable to advances in hardware: from 1991 to 2015, computing power, especially as delivered by GPGPUs (on GPUs), has increased around a million-fold, making the standard backpropagation algorithm feasible for training networks that are several layers deeper than before.[226] The use of parallel GPUs can reduce training times from months to days.[225] Neuromorphic engineering addresses the hardware difficulty directly, by constructing non-von-Neumann chips to directly implement neural networks in circuitry.

Another chip optimized for neural network processing is called a Tensor Processing Unit, or TPU.[227] Arguments against Dewdney's position are that neural networks have been successfully used to solve many complex and diverse tasks, ranging from autonomously flying aircraft[228] to detecting credit card fraud to mastering the game of Go.

Technology writer Roger Bridgman commented: Neural networks, for instance, are in the dock not only because they have been hyped to high heaven (what hasn't?), but also because you could create a successful net without understanding how it worked: the bunch of numbers that captures its behaviour would in all probability be 'an opaque, unreadable table...valueless as a scientific resource'.

In spite of his emphatic declaration that science is not technology, Dewdney seems here to pillory neural nets as bad science when most of those devising them are just trying to be good engineers.

An unreadable table that a useful machine could read would still be well worth having.[229] Although it is true that analyzing what has been learned by an artificial neural network is difficult, it is much easier to do so than to analyze what has been learned by a biological neural network.

Furthermore, researchers involved in exploring learning algorithms for neural networks are gradually uncovering general principles that allow a learning machine to be successful.

Examples include local versus non-local learning and shallow versus deep architecture.[230] Advocates of hybrid models (combining neural networks and symbolic approaches) claim that such a mixture can better capture the mechanisms of the human mind.[231][232] Artificial neural networks have many variations.

How to Make an Image Classifier - Intro to Deep Learning #6

We're going to make our own Image Classifier for cats & dogs in 40 lines of Python! First we'll go over the history of image classification, then we'll dive into the ...

Convolutional Neural Network (CNN) | Convolutional Neural Networks With TensorFlow | Edureka

TensorFlow Training: This Edureka "Convolutional Neural Network Tutorial" video ...

Backpropagation Neural Network - How it Works e.g. Counting

Here's a small backpropagation neural network that counts, with an example and an explanation of how it works and how it learns. A neural network is a tool in ...

Neural Network train in MATLAB

This video explains how to design and train a neural network in MATLAB.

TensorFlow Tutorial #07 Inception Model

How to use the pre-trained Inception Model to classify images.

Neural Network train and form recognition

This video discusses the application of a neural network to a simple form-recognition task using binary images. The different steps from dataset preparation, ...

Lecture 9 | CNN Architectures

In Lecture 9 we discuss some common architectures for convolutional neural networks. We discuss architectures which performed well in the ImageNet ...

Build a TensorFlow Image Classifier in 5 Min

In this episode we're going to train our own image classifier to detect Darth Vader images. The code for this repository is here: ...

Deep Bilateral Learning for Real-Time Image Enhancement

Performance is a critical challenge in mobile image processing. Given a reference imaging pipeline, or even human-adjusted pairs of images, we seek to ...

Neural Net in C++ Tutorial on Vimeo