
A recent paper with an innocent-sounding title is probably the biggest news in neural networks since the invention of the backpropagation algorithm.

The recent paper 'Intriguing properties of neural networks' by Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow and Rob Fergus, a team that includes authors from Google's deep learning research project, outlines two findings about the way neural networks behave that run counter to what we believed - and one of them is frankly astonishing.

For example, in a face recognizer a neuron might respond strongly to an image that has an eye or a nose - but notice there is no reason that the features should correspond to the neat labels that humans use.

That is, if you pick a random set of neurons and find the images that produce the maximum output on the set then these images are just as semantically similar as in the single neuron case.

That is, if you train a network to recognize a cat using a particular set of cat photos the network will, as long as it has been trained properly, have the ability to recognize a cat photo it hasn't seen before.

To create the slightly perturbed version you would simply modify each pixel value by a small amount; as long as the change was small enough, the cat photo would look exactly the same to a human - and, presumably, to a neural network.

What the researchers did was to invent an optimization algorithm that starts from a correctly classified example and tries to find a small perturbation in the pixel values that drives the output of the network to another classification.
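The idea can be sketched in a few lines of code. The following is a minimal illustration only, not the paper's method: the paper uses box-constrained L-BFGS on deep networks, while this sketch substitutes a toy linear logistic classifier (with made-up random weights) and a simple signed-gradient step, so that the effect of a tiny per-pixel perturbation flipping the classification is easy to see.

```python
import numpy as np

# Toy stand-in for a trained classifier: a fixed linear logistic model.
# The weights are hypothetical; the paper studies deep networks instead.
rng = np.random.default_rng(0)
w = rng.normal(size=100)

def predict(x):
    """Return class 1 if the linear score is positive, else class 0."""
    return int(x @ w > 0)

# Start from an input the model classifies as class 1.
x = rng.normal(size=100)
if predict(x) == 0:
    x = -x  # flip the example so it starts correctly classified as class 1

# For a linear model the gradient of the score w.r.t. the input is just w,
# so repeatedly nudge every "pixel" a tiny amount against that gradient.
eps = 0.01
x_adv = x.copy()
while predict(x_adv) == 1:
    x_adv -= eps * np.sign(w)

print(predict(x), predict(x_adv))        # classification has flipped: 1 0
print(np.max(np.abs(x_adv - x)))         # yet each pixel moved only slightly
```

The perturbation accumulated per pixel stays far smaller than the typical pixel magnitude, which is the toy analogue of the paper's "visually indistinguishable" adversarial examples.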

To quote the paper: 'For all the networks we studied, for each sample, we always manage to generate very close, visually indistinguishable, adversarial examples that are misclassified by the original network.'

In the left-hand panel, the odd columns are correctly classified and the even columns are misclassified. In the right-hand panel everything is correctly classified; there the even columns are random distortions of the originals.

'The above observations suggest that adversarial examples are somewhat universal and not just the results of overfitting to a particular model or to the specific selection of the training set.' This is perhaps the most remarkable part of the result.

Change the situation just a little, though, and ask: what does it matter if a self-driving car that uses a deep neural network misclassifies a view of a pedestrian standing in front of the car as a clear road?

(The volume that is not near the surface drops exponentially with increasing dimension.) Given that the decision boundaries of a deep neural network sit in a very high dimensional space, it seems reasonable that most correctly classified examples are going to be close to a decision boundary - hence the ability to find a misclassified example close to the correct one: you simply have to work out the direction to the closest boundary.
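The concentration of volume near the surface is easy to check numerically. For a unit ball in d dimensions, the fraction of volume within distance eps of the surface is 1 - (1 - eps)^d, which races toward 1 as d grows - a small worked illustration of the geometric intuition above:

```python
# Fraction of a d-dimensional unit ball's volume lying within
# distance eps of its surface: 1 - (1 - eps)^d.
eps = 0.01
for d in (2, 100, 1000):
    shell = 1 - (1 - eps) ** d
    print(f"d={d:5d}: {shell:.4f}")
# d=    2: 0.0199
# d=  100: 0.6340
# d= 1000: 1.0000
```

Even with a shell only 1% of the radius thick, essentially all of the volume is "near the surface" by d = 1000 - which is small by deep-network standards.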
