Does Deep Learning Have Deep Flaws?
- On Sunday, June 3, 2018
It is the space of activations, rather than the individual units, that contains the semantic information in the higher layers of neural networks.
The figures below compare the natural basis to a random basis for a convolutional neural network trained on MNIST, using the ImageNet dataset as the validation set.
For the natural basis (upper images), the method treats the activation of a single hidden unit as a feature and searches for input images that maximize the activation value of that single feature.
However, it turns out that if we instead pick a random basis (a random direction in activation space), the images that maximize the response can be semantically interpreted in much the same way.
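The natural-basis versus random-basis comparison can be sketched in a few lines. This is an illustrative toy, not the paper's setup: the "network" below is a random linear layer with ReLU standing in for a trained representation, and the images are random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "hidden layer": a random linear map plus ReLU, standing in
# for the high-level representation of a trained network.
W = rng.normal(size=(64, 784))          # hidden_dim x input_dim
images = rng.normal(size=(1000, 784))   # stand-in dataset of flattened images

def activations(x):
    """Hidden-layer activations for a batch of inputs."""
    return np.maximum(0.0, x @ W.T)     # shape (n, 64)

acts = activations(images)

# Natural basis: rank images by the activation of one hidden unit.
unit = 7
top_natural = np.argsort(acts[:, unit])[-5:]

# Random basis: rank images by the projection of the whole
# activation vector onto a random unit direction v.
v = rng.normal(size=64)
v /= np.linalg.norm(v)
top_random = np.argsort(acts @ v)[-5:]

print("top images, natural basis:", top_natural)
print("top images, random basis:", top_random)
```

The paper's observation is that, on real networks, the top-responding images for a random direction `v` look as semantically coherent as those for a single unit, which is what suggests the information lives in the space rather than in individual coordinates.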
For all the networks studied (on MNIST, QuocNet, AlexNet), the authors report that for each sample they could always generate very close, visually indistinguishable adversarial examples that are misclassified by the original network.
The examples below show (left) correctly predicted samples, (right) adversarial examples, and (center) the differences between them magnified 10×.
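The paper generates these examples with a box-constrained optimization over the input; a simpler later technique, the fast gradient sign method, illustrates the same idea and fits in a few lines. The sketch below is a hedged toy: the "classifier" is a logistic regression with random weights, not one of the paper's networks.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy classifier: logistic regression with fixed random weights.
w = rng.normal(size=784)
x = rng.normal(size=784)   # a stand-in "image"
y = 1.0                    # its true label

def loss(x):
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Gradient of the loss with respect to the *input*:
# for logistic regression, d loss / dx = (p - y) * w.
p = sigmoid(w @ x)
grad_x = (p - y) * w

# Gradient-sign perturbation: the per-pixel step of size eps that
# most increases the loss under an L-infinity budget.
eps = 0.05
x_adv = x + eps * np.sign(grad_x)

print("clean loss:", loss(x))
print("adversarial loss:", loss(x_adv))  # strictly larger for this linear model
```

Each pixel moves by at most `eps`, so the perturbed input is visually almost identical to the original, yet the loss (and, for a real network, often the predicted class) changes sharply.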
Experiments support the claim that if we keep a pool of adversarial examples and mix them into the original training set, generalization improves.
The authors claim that although the set of adversarial negatives is dense, it has extremely low probability, so it is rarely observed in the test set.
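The mixing step above can be sketched directly. This is a minimal toy, assuming a logistic-regression model on synthetic data and gradient-sign perturbations; the paper's actual procedure works on deep networks.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic 2-class data with labels from a random linear rule.
n, d = 200, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)

def train(X, y, steps=500, lr=0.1):
    """Fit logistic regression by full-batch gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

w = train(X, y)

# Build a pool of adversarial examples (one gradient-sign step per
# training point) and mix it into the original training set.
eps = 0.1
p = sigmoid(X @ w)
X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])

X_mix = np.concatenate([X, X_adv])
y_mix = np.concatenate([y, y])   # adversarial copies keep the true labels
w_robust = train(X_mix, y_mix)
```

Retraining on the mixed pool pushes the decision boundary away from the training points along the perturbation directions, which is the mechanism behind the reported generalization improvement.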
- On Monday, September 23, 2019
Lecture 16 | Adversarial Examples and Adversarial Training
In Lecture 16, guest lecturer Ian Goodfellow discusses adversarial examples in deep learning. We discuss why deep networks and other machine learning ...
Breaking Deep Learning Systems With Adversarial Examples | Two Minute Papers #43
Artificial neural networks are computer programs that try to approximate what the human brain does to solve problems like recognizing objects in images. In this ...
TensorFlow Tutorial #11 Adversarial Examples
How to fool a neural network into misclassifying images by adding a little 'specialized' noise, demonstrated on the Inception model.
Generative Adversarial Nets - Fresh Machine Learning #2
This episode of Fresh Machine Learning is all about a relatively new concept called a Generative Adversarial Network, in which one model continuously tries to fool another ...
Generative Adversarial Networks (LIVE)
We're going to build a GAN to generate some images using TensorFlow. This will help you grasp the architecture and intuition behind adversarial approaches to ...
NDSS 2018 - Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
Session 3A: Deep Learning and Adversarial ML - 04 Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. Summary: Although deep ...
Progressive Growing of GANs for Improved Quality, Stability, and Variation
Submission video of our paper, published at ICLR 2018. Please see the final version at ... Authors: Tero Karras (NVIDIA), Timo Aila ..
Generative Adversarial Networks (GANs) - Computerphile
Artificial intelligence where neural nets play against each other and improve enough to generate something new. Rob Miles explains GANs. One of the papers ...
'How neural networks learn' - Part II: Adversarial Examples
In this episode we dive into the world of adversarial examples: images specifically engineered to fool neural networks into making completely wrong decisions!
NIPS 2016 Workshop on Adversarial Training - Ian Goodfellow - Introduction to GANs