
Machine Learning FAQ

In essence, deep learning offers a set of techniques and algorithms that help us to parameterize deep neural network structures – artificial neural networks with many hidden layers and parameters.

Now, to introduce deep learning, let us take a look at a more concrete example involving multi-layer perceptrons (MLPs). (On a tangent: the term “perceptron” in MLPs may be a bit confusing, since we don’t really want only linear neurons in our network.)

If we add multiple hidden layers to this MLP, we’d also call the network “deep.” The problem with such “deep” networks is that it becomes tougher and tougher to learn “good” weights for them.

During training, we then use the popular backpropagation algorithm (think of it as reverse-mode auto-differentiation) to propagate the “errors” from right to left and calculate the partial derivative of the cost with respect to each weight, so that we can take a step in the direction opposite to the cost (or “error”) gradient.
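To make this concrete, here is a minimal NumPy sketch of one such training step for a one-hidden-layer MLP with sigmoid units; the data, layer sizes, and learning rate are made up for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 4 samples, 3 features, binary targets (values are made up).
X = np.array([[0., 1., 0.], [1., 0., 1.], [1., 1., 0.], [0., 0., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.5, size=(3, 5)), np.zeros(5)  # input -> hidden
W2, b2 = rng.normal(scale=0.5, size=(5, 1)), np.zeros(1)  # hidden -> output
lr = 0.1

# Forward pass: propagate the inputs from left to right.
h = sigmoid(X @ W1 + b1)
out = sigmoid(h @ W2 + b2)
cost = np.mean((out - y) ** 2)

# Backward pass: propagate the "errors" from right to left (chain rule)
# to get the partial derivative of the cost w.r.t. every weight.
d_out = 2 * (out - y) / len(X) * out * (1 - out)
dW2, db2 = h.T @ d_out, d_out.sum(axis=0)
d_h = (d_out @ W2.T) * h * (1 - h)
dW1, db1 = X.T @ d_h, d_h.sum(axis=0)

# Take a step in the direction opposite to the gradient.
W1 -= lr * dW1; b1 -= lr * db1
W2 -= lr * dW2; b2 -= lr * db2
```

Repeating this forward/backward cycle over many passes through the data is, in essence, all that “training” means here.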

Now, the problem with deep neural networks is the so-called “vanishing gradient” – the more layers we add, the harder it becomes to “update” our weights because the signal becomes weaker and weaker.
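A quick way to see this weakening is to note that each sigmoid layer multiplies the backpropagated signal by its derivative, which is at most 0.25; the following sketch tracks that best-case factor across ten layers:

```python
import numpy as np

def sigmoid_prime(z):
    s = 1.0 / (1.0 + np.exp(-z))
    return s * (1.0 - s)  # peaks at 0.25 when z = 0

# The gradient reaching a layer picks up roughly one such factor
# for every layer it passes through on the way back.
signal = 1.0
for depth in range(1, 11):
    signal *= sigmoid_prime(0.0)  # best case: exactly 0.25
    print(f"after {depth} layers: {signal:.1e}")
# After 10 layers the signal is at most 0.25**10, i.e. about 1e-06.
```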

We can think of deep learning as algorithms for automatic “feature engineering,” or we could simply call them “feature detectors,” which help us to overcome the vanishing gradient challenge and facilitate the learning in neural networks with many layers.

We then connect each of those “receptive fields” (for example, of size 5x5 pixels) to one unit in the next layer; the resulting grid of units is the so-called “feature map.” After this mapping, we have constructed a so-called convolutional layer.
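As a rough sketch of this mapping, assuming a single shared 5x5 filter, stride 1, and no padding (all names and values here are illustrative):

```python
import numpy as np

def feature_map(image, kernel):
    """Slide one shared kernel over the image (stride 1, no padding)."""
    kh, kw = kernel.shape
    out_h, out_w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    fmap = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i:i + kh, j:j + kw]      # one 5x5 receptive field
            fmap[i, j] = np.sum(patch * kernel)    # one unit in the next layer
    return fmap

image = np.random.rand(28, 28)   # e.g. a grayscale digit
kernel = np.random.rand(5, 5)    # the shared 5x5 weights
print(feature_map(image, kernel).shape)  # (24, 24)
```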

Via the convolutional layers we aim to extract the useful features from the images, and via the pooling layers, we aim to make the features somewhat invariant to scale and translation.

Deep learning

Deep learning is a class of machine learning algorithms.[10](pp199–200) Learning can be supervised, semi-supervised or unsupervised.[1][2][3] Deep learning architectures such as deep neural networks, deep belief networks and recurrent neural networks have been applied to fields including computer vision, speech recognition, natural language processing, audio recognition, social network filtering, machine translation, bioinformatics, drug design and board game programs, where they have produced results comparable to, and in some cases superior to, human experts.[4][5][6] Deep learning models are vaguely inspired by information processing and communication patterns in biological nervous systems, yet differ in various ways from the structural and functional properties of biological brains, which makes them incompatible with neuroscience evidence.[7][8][9] Most modern deep learning models are based on an artificial neural network, although they can also include propositional formulas[11] or latent variables organized layer-wise in deep generative models, such as the nodes in deep belief networks and deep Boltzmann machines.

Examples of deep structures that can be trained in an unsupervised manner are neural history compressors[13] and deep belief networks.[1][14] Deep neural networks are generally interpreted in terms of the universal approximation theorem[15][16][17][18][19] or probabilistic inference.[10][11][1][2][14][20][21] The universal approximation theorem concerns the capacity of feedforward neural networks with a single hidden layer of finite size to approximate continuous functions.[15][16][17][18][19] In 1989, the first proof was published by George Cybenko for sigmoid activation functions[16] and was generalised to feed-forward multi-layer architectures in 1991 by Kurt Hornik.[17] The probabilistic interpretation[20] derives from the field of machine learning.

More specifically, the probabilistic interpretation considers the activation nonlinearity as a cumulative distribution function.[20] The probabilistic interpretation led to the introduction of dropout as a regularizer in neural networks.[22] The probabilistic interpretation was introduced by researchers including Hopfield, Widrow and Narendra and popularized in surveys such as the one by Bishop.[23] The term Deep Learning was introduced to the machine learning community by Rina Dechter in 1986,[24][13] and to Artificial Neural Networks by Igor Aizenberg and colleagues in 2000, in the context of Boolean threshold neurons.[25][26] The first general, working learning algorithm for supervised, deep, feedforward, multilayer perceptrons was published by Alexey Ivakhnenko and Lapa in 1965.[27] A 1971 paper described a deep network with 8 layers trained by the group method of data handling algorithm.[28] Other deep learning working architectures, specifically those built for computer vision, began with the Neocognitron introduced by Kunihiko Fukushima in 1980.[29] In 1989, Yann LeCun et al. applied the standard backpropagation algorithm to a deep neural network with the purpose of recognizing handwritten ZIP codes on mail.

Each layer in the feature extraction module extracted features of growing complexity relative to the previous layer.[38] In 1995, Brendan Frey demonstrated that it was possible to train (over two days) a network containing six fully connected layers and several hundred hidden units using the wake-sleep algorithm, co-developed with Peter Dayan and Hinton.[39] Many factors contributed to the slow training speed, including the vanishing gradient problem analyzed in 1991 by Sepp Hochreiter.[40][41] Simpler models that use task-specific handcrafted features such as Gabor filters and support vector machines (SVMs) were a popular choice in the 1990s and 2000s, because of ANNs' computational cost and a lack of understanding of how the brain wires its biological networks.

In 2003, LSTM started to become competitive with traditional speech recognizers on certain tasks.[51] Later it was combined with connectionist temporal classification (CTC)[52] in stacks of LSTM RNNs.[53] In 2015, Google's speech recognition reportedly experienced a dramatic performance jump of 49% through CTC-trained LSTM, which they made available through Google Voice Search.[54] In 2006, publications by Geoff Hinton, Ruslan Salakhutdinov, Osindero and Teh[55][56][57] showed how a many-layered feedforward neural network could be effectively pre-trained one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine, then fine-tuning it using supervised backpropagation.[58] The papers referred to learning for deep belief nets.

It was believed that pre-training DNNs using generative models of deep belief nets (DBN) would overcome the main difficulties of neural nets.[69] However, it was discovered that replacing pre-training with large amounts of training data for straightforward backpropagation when using DNNs with large, context-dependent output layers produced error rates dramatically lower than then-state-of-the-art Gaussian mixture model (GMM)/Hidden Markov Model (HMM) and also than more-advanced generative model-based systems.[59][70] The nature of the recognition errors produced by the two types of systems was characteristically different,[71][68] offering technical insights into how to integrate deep learning into the existing highly efficient, run-time speech decoding system deployed by all major speech recognition systems.[10][72][73] Analysis around 2009–2010 contrasted the GMM (and other generative speech models) with DNN models.

Andrew Ng determined that GPUs could increase the speed of deep-learning systems by about 100 times.[79] In particular, GPUs are well-suited for the matrix/vector math involved in machine learning.[80][81] GPUs speed up training algorithms by orders of magnitude, reducing running times from weeks to days.[82][83] Specialized hardware and algorithm optimizations can be used for efficient processing.[84] In 2012, a team led by Dahl won the 'Merck Molecular Activity Challenge' using multi-task deep neural networks to predict the biomolecular target of one drug.[85][86] In 2014, Hochreiter's group used deep learning to detect off-target and toxic effects of environmental chemicals in nutrients, household products and drugs and won the 'Tox21 Data Challenge' of NIH, FDA and NCATS.[87][88][89] Significant additional impacts in image or object recognition were felt from 2011 to 2012.

DNNs have proven themselves capable, for example, of a) identifying the style period of a given painting, b) 'capturing' the style of a given painting and applying it in a visually pleasing manner to an arbitrary photograph, and c) generating striking imagery based on random visual input fields.[127][128] Neural networks have been used for implementing language models since the early 2000s.[101][129] LSTM helped to improve machine translation and language modeling.[102][103][104] Other key techniques in this field are negative sampling[130] and word embedding.

A compositional vector grammar can be thought of as a probabilistic context free grammar (PCFG) implemented by an RNN.[131] Recursive auto-encoders built atop word embeddings can assess sentence similarity and detect paraphrasing.[131] Deep neural architectures provide the best results for constituency parsing,[132] sentiment analysis,[133] information retrieval,[134][135] spoken language understanding,[136] machine translation,[102][137] contextual entity linking,[137] writing style recognition,[138] text classification and others.[139] Google Translate (GT) uses a large end-to-end long short-term memory network.[140][141][142][143][144][145] GNMT uses an example-based machine translation method in which the system 'learns from millions of examples.'[141] It translates 'whole sentences at a time, rather than pieces.'

These failures are caused by insufficient efficacy (on-target effect), undesired interactions (off-target effects), or unanticipated toxic effects.[147][148] Research has explored use of deep learning to predict biomolecular target,[85][86] off-target and toxic effects of environmental chemicals in nutrients, household products and drugs.[87][88][89] AtomNet is a deep learning system for structure-based rational drug design.[149] AtomNet was used to predict novel candidate biomolecules for disease targets such as the Ebola virus[150] and multiple sclerosis.[151][152] Deep reinforcement learning has been used to approximate the value of possible direct marketing actions, defined in terms of RFM variables.

An autoencoder ANN was used in bioinformatics, to predict gene ontology annotations and gene-function relationships.[156] In medical informatics, deep learning was used to predict sleep quality based on data from wearables[157][158] and predictions of health complications from electronic health record data.[159] Deep learning has also shown efficacy in healthcare.[160][161] Finding the appropriate mobile audience for mobile advertising is always challenging, since many data points must be considered and assimilated before a target segment can be created and used in ad serving by any ad server.[162][163] Deep learning has been used to interpret large, many-dimensioned advertising datasets.

On the one hand, several variants of the backpropagation algorithm have been proposed in order to increase its processing realism.[170][171] Other researchers have argued that unsupervised forms of deep learning, such as those based on hierarchical generative models and deep belief networks, may be closer to biological reality.[172][173] In this respect, generative neural network models have been related to neurobiological evidence about sampling-based processing in the cerebral cortex.[174] Although a systematic comparison between the human brain organization and the neuronal encoding in deep networks has not yet been established, several analogies have been reported.

The most powerful A.I. systems, like Watson, '(...) use techniques like deep learning as just one element in a very complicated ensemble of techniques, ranging from the statistical technique of Bayesian inference to deductive reasoning.'[187] As an alternative to this emphasis on the limits of deep learning, one author speculated that it might be possible to train a machine vision stack to perform the sophisticated task of discriminating between 'old master' and amateur figure drawings, and hypothesized that such a sensitivity might represent the rudiments of a non-trivial machine empathy.[188] This same author proposed that this would be in line with anthropology, which identifies a concern with aesthetics as a key element of behavioral modernity.[189] In further reference to the idea that artistic sensitivity might inhere within relatively low levels of the cognitive hierarchy, a published series of graphic representations of the internal states of deep (20-30 layers) neural networks attempting to discern within essentially random data the images on which they were trained[190] demonstrated a visual appeal: the original research notice received well over 1,000 comments and was, for a time, the most frequently accessed article on The Guardian's website.[191]

Some deep learning architectures display problematic behaviors,[192] such as confidently classifying unrecognizable images as belonging to a familiar category of ordinary images[193] and misclassifying minuscule perturbations of correctly classified images.[194] Goertzel hypothesized that these behaviors are due to limitations in their internal representations and that these limitations would inhibit integration into heterogeneous multi-component AGI architectures.[192] These issues may possibly be addressed by deep learning architectures that internally form states homologous to image-grammar[195] decompositions of observed entities and events.[192] Learning a grammar (visual or linguistic) from training data would be equivalent to restricting the system to commonsense reasoning that operates on concepts in terms of grammatical production rules and is a basic goal of both human language acquisition[196] and AI.[197] As deep learning moves from the lab into the world, research and experience shows that artificial neural networks are vulnerable to hacks and deception.
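The 'minuscule perturbations' above are usually adversarial examples; a canonical construction, not necessarily the one used in the cited studies, is the fast gradient sign method, sketched here for a hypothetical linear classifier:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A hypothetical trained linear classifier: score = w . x + b.
rng = np.random.default_rng(0)
w, b = rng.normal(size=64), 0.0
x = rng.normal(size=64)   # a correctly classified input
y = 1.0                   # its true label

# Gradient of the logistic loss with respect to the input x.
grad_x = (sigmoid(w @ x + b) - y) * w

# Fast-gradient-sign perturbation: a tiny step per pixel that can
# nonetheless shift the prediction, even though x barely changes.
eps = 0.05
x_adv = x + eps * np.sign(grad_x)
print(sigmoid(w @ x + b), sigmoid(w @ x_adv + b))
```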

ANNs have been trained to defeat ANN-based anti-malware software by repeatedly attacking a defense with malware that was continually altered by a genetic algorithm until it tricked the anti-malware while retaining its ability to damage the target.[198] Another group demonstrated that certain sounds could make the Google Now voice command system open a particular web address that would download malware.[198] In “data poisoning”, false data is continually smuggled into a machine learning system’s training set to prevent it from achieving mastery.[198]

Convolutional neural network

In machine learning, a convolutional neural network (CNN, or ConvNet) is a class of deep, feed-forward artificial neural networks, most commonly applied to analyzing visual imagery.

CNNs use a variation of multilayer perceptrons designed to require minimal preprocessing.[1] They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics.[2][3] Convolutional networks were inspired by biological processes[4] in that the connectivity pattern between neurons resembles the organization of the animal visual cortex.

A very high number of neurons would be necessary, even in a shallow (opposite of deep) architecture, due to the very large input sizes associated with images, where each pixel is a relevant variable.

The convolution operation brings a solution to this problem as it reduces the number of free parameters, allowing the network to be deeper with fewer parameters.[8] For instance, regardless of image size, tiling regions of size 5 x 5, each with the same shared weights, requires only 25 learnable parameters.

In this way, it alleviates the vanishing and exploding gradient problems seen when training traditional multi-layer neural networks with many layers by backpropagation.

Convolutional networks may include local or global pooling layers, which combine the outputs of neuron clusters at one layer into a single neuron in the next layer.[9][10] For example, max pooling uses the maximum value from each of a cluster of neurons at the prior layer.[11] Another example is average pooling, which uses the average value from each of a cluster of neurons at the prior layer.

Work by Hubel and Wiesel in the 1950s and 1960s showed that cat and monkey visual cortexes contain neurons that individually respond to small regions of the visual field.

Provided the eyes are not moving, the region of visual space within which visual stimuli affect the firing of a single neuron is known as its receptive field.

Their 1968 paper[12] identified two basic visual cell types in the brain: simple cells, which respond maximally to specific edge-like patterns within their receptive field, and complex cells, which have larger receptive fields and are locally invariant to the exact position of the pattern. The neocognitron[13] was introduced in 1980.[11][14] The neocognitron does not require units located at multiple network positions to have the same trainable weights.

This idea appears in 1986 in the book version of the original backpropagation paper.[15]:Figure 14 Neocognitrons were developed in 1988 for temporal signals.[16] Their design was improved in 1998,[17] generalized in 2003[18] and simplified in the same year.[19] LeNet-5, a pioneering 7-level convolutional network by LeCun et al. in 1998 that classifies digits, was applied by several banks to recognize hand-written numbers on checks.

Similarly, a shift invariant neural network was proposed for image character recognition in 1988.[2][3] The architecture and training algorithm were modified in 1991[20] and applied for medical image processing[21] and automatic detection of breast cancer in mammograms.[22] A different convolution-based design was proposed in 1988[23] for application to decomposition of one-dimensional electromyography convolved signals via de-convolution.

This design was modified in 1989 to other de-convolution-based designs.[24][25] The feed-forward architecture of convolutional neural networks was extended in the neural abstraction pyramid[26] by lateral and feedback connections.

Following the 2005 paper that established the value of GPGPU for machine learning,[27] several publications described more efficient ways to train convolutional neural networks using GPUs.[28][29][30][31] In 2011, they were refined and implemented on a GPU, with impressive results.[9] In 2012, Ciresan et al. significantly improved on the best performance in the literature for multiple image databases, including the MNIST database, the NORB database, the HWDB1.0 dataset (Chinese characters), the CIFAR10 dataset (60,000 32x32 labeled RGB images),[11] and the ImageNet dataset.[32]

While traditional multilayer perceptron (MLP) models were successfully used for image recognition, due to the full connectivity between nodes they suffer from the curse of dimensionality, and thus do not scale well to higher-resolution images.

For example, in CIFAR-10, images are only of size 32x32x3 (32 wide, 32 high, 3 color channels), so a single fully connected neuron in a first hidden layer of a regular neural network would have 32*32*3 = 3,072 weights.

Also, such network architecture does not take into account the spatial structure of data, treating input pixels which are far apart in the same way as pixels that are close together.

Weight sharing dramatically reduces the number of free parameters learned, thus lowering the memory requirements for running the network and allowing the training of larger, more powerful networks.
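A back-of-the-envelope comparison of the counts mentioned above (one fully connected neuron on a CIFAR-10-sized input versus one shared convolutional filter):

```python
# Parameter counts for a CIFAR-10-sized input (32x32 pixels, 3 channels).
height, width, channels = 32, 32, 3

fc_weights = height * width * channels   # one fully connected neuron
print(fc_weights)                        # 3072

conv_weights = 5 * 5 * channels          # one shared 5x5 filter across depth
print(conv_weights)                      # 75 -- independent of image size

# With a single input channel, the same filter needs only 5 * 5 = 25 weights.
```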

The layer's parameters consist of a set of learnable filters (or kernels), which have a small receptive field, but extend through the full depth of the input volume.

During the forward pass, each filter is convolved across the width and height of the input volume, computing the dot product between the entries of the filter and the input and producing a 2-dimensional activation map of that filter.

Every entry in the output volume can thus also be interpreted as an output of a neuron that looks at a small region in the input and shares parameters with neurons in the same activation map.

When dealing with high-dimensional inputs such as images, it is impractical to connect neurons to all neurons in the previous volume because such a network architecture does not take the spatial structure of the data into account.

Convolutional networks exploit spatially local correlation by enforcing a local connectivity pattern between neurons of adjacent layers: each neuron is connected to only a small region of the input volume.

Since all neurons in a single depth slice share the same parameters, the forward pass in each depth slice of the CONV layer can be computed as a convolution of the neuron's weights with the input volume (hence the name: convolutional layer).

The result of this convolution is an activation map, and the set of activation maps for each different filter are stacked together along the depth dimension to produce the output volume.
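Tying the last few paragraphs together, a naive sketch of this forward pass might look as follows (stride 1, no padding; shapes and names are illustrative):

```python
import numpy as np

def conv_layer_forward(volume, filters):
    """volume: (H, W, D_in); filters: (K, F, F, D_in). Stride 1, no padding."""
    H, W, D_in = volume.shape
    K, F, _, _ = filters.shape
    out = np.zeros((H - F + 1, W - F + 1, K))
    for k in range(K):                       # one activation map per filter
        for i in range(H - F + 1):
            for j in range(W - F + 1):
                patch = volume[i:i + F, j:j + F, :]        # full input depth
                out[i, j, k] = np.sum(patch * filters[k])  # dot product
    return out  # activation maps stacked along the depth dimension

x = np.random.rand(32, 32, 3)            # e.g. a CIFAR-10 image
w = np.random.rand(8, 5, 5, 3)           # 8 filters of size 5x5x3
print(conv_layer_forward(x, w).shape)    # (28, 28, 8)
```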

The pooling layer serves to progressively reduce the spatial size of the representation, to reduce the number of parameters and amount of computation in the network, and hence to also control overfitting.

The most common form is a pooling layer with filters of size 2x2 applied with a stride of 2, which downsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations.
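A minimal sketch of that common case, assuming an (H, W, D) activation volume (odd edges are simply trimmed):

```python
import numpy as np

def max_pool_2x2(volume):
    """2x2 max pooling with stride 2 over an (H, W, D) volume."""
    H, W, D = volume.shape
    v = volume[:H - H % 2, :W - W % 2, :]    # trim odd edges if any
    v = v.reshape(H // 2, 2, W // 2, 2, D)
    return v.max(axis=(1, 3))                # keep 1 of every 4 activations

maps = np.random.rand(28, 28, 8)
print(max_pool_2x2(maps).shape)  # (14, 14, 8): 75% of activations discarded
```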

Average pooling was often used historically but has recently fallen out of favor compared to max pooling, which works better in practice.[34] Due to the aggressive reduction in the size of the representation, the trend is towards using smaller filters[35] or discarding the pooling layer altogether.[36] Region of Interest pooling (also known as RoI pooling) is a variant of max pooling, in which output size is fixed and input rectangle is a parameter.[37] Pooling is an important component of convolutional neural networks for object detection based on Fast R-CNN[38] architecture.

Preserving more information about the input would require keeping the total number of activations (number of feature maps times number of pixel positions) non-decreasing from one layer to the next.

In stochastic pooling,[43] the conventional deterministic pooling operations are replaced with a stochastic procedure, where the activation within each pooling region is picked randomly according to a multinomial distribution, given by the activities within the pooling region.
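A sketch of the sampling step for a single pooling region, assuming non-negative activations (as after a ReLU):

```python
import numpy as np

def stochastic_pool_region(region, rng):
    """Pick one activation from a pooling region with probability
    proportional to its (non-negative) value, per the multinomial scheme."""
    flat = region.ravel()
    total = flat.sum()
    probs = flat / total if total > 0 else np.full(flat.size, 1 / flat.size)
    return rng.choice(flat, p=probs)

rng = np.random.default_rng(0)
region = np.array([[0.1, 0.4],
                   [0.2, 0.3]])   # one 2x2 pooling region of activations
print(stochastic_pool_region(region, rng))
```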

Since the degree of model overfitting is determined by both its power and the amount of training it receives, providing a convolutional network with more training examples can reduce overfitting.

Since these networks are usually trained with all available data, one approach is to either generate new data from scratch (if possible) or perturb the existing data to create new examples.

For example, input images could be asymmetrically cropped by a few percent to create new examples with the same label as the original.[45] One of the simplest methods to prevent overfitting of a network is to simply stop the training before overfitting has had a chance to occur.
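For instance, a hypothetical cropping routine that shifts each image by a few pixels while keeping its label:

```python
import numpy as np

def random_crop(image, max_shift=2, rng=None):
    """Create a new training example by cropping a few pixels off
    randomly chosen sides; the label stays the same."""
    rng = rng or np.random.default_rng()
    top = rng.integers(0, max_shift + 1)
    left = rng.integers(0, max_shift + 1)
    h, w = image.shape[:2]
    return image[top:h - (max_shift - top), left:w - (max_shift - left)]

img = np.random.rand(32, 32, 3)
print(random_crop(img).shape)   # (30, 30, 3): same content, slightly shifted
```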

Another simple way to prevent overfitting is to limit the number of parameters, typically by limiting the number of hidden units in each layer or limiting network depth.

Limiting the number of parameters restricts the predictive power of the network directly, reducing the complexity of the function that it can perform on the data, and thus limits the amount of overfitting.

A simple form of added regularizer is weight decay, which simply adds an additional error, proportional to the sum of weights (L1 norm) or squared magnitude (L2 norm) of the weight vector, to the error at each node.
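A sketch of such a penalty added to a hypothetical data loss (the coefficients here are arbitrary):

```python
import numpy as np

def regularized_loss(data_loss, weights, l1=0.0, l2=1e-4):
    """Weight decay: add l1 * sum(|w|) and l2 * sum(w**2) to the loss."""
    w = np.concatenate([wi.ravel() for wi in weights])
    return data_loss + l1 * np.abs(w).sum() + l2 * np.square(w).sum()

W1, W2 = np.random.randn(3, 5), np.random.randn(5, 1)
print(regularized_loss(0.42, [W1, W2]))  # 0.42 is a hypothetical data loss
```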

Humans, after seeing a new shape once, can recognize it from a different viewpoint.[47] Currently, the common way to deal with this problem is to train the network on transformed data in different orientations, scales, lighting, etc.

The pose relative to retina is the relationship between the coordinate frame of the retina and the intrinsic features' coordinate frame.[48] Thus, one way of representing something is to embed the coordinate frame within it.

The vectors of neuronal activity that represent pose ('pose vectors') allow spatial transformations to be modeled as linear operations, which makes it easier for the network to learn the hierarchy of visual entities and generalize across viewpoints.

In 2012 an error rate of 0.23 percent on the MNIST database was reported.[11] Another paper on using CNN for image classification reported that the learning process was 'surprisingly fast'; in the same paper, the best published results as of 2011 were achieved in the MNIST database and the NORB database.[9] When applied to facial recognition, CNNs achieved a large decrease in error rate.[50] Another paper reported a 97.6 percent recognition rate on '5,600 still images of more than 10 subjects'.[4] CNNs were used to assess video quality in an objective way after manual training; the resulting system had a very low root mean square error.[51]

The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object classification and detection, with millions of images and hundreds of object classes.

The winner GoogLeNet[53] (the foundation of DeepDream) increased the mean average precision of object detection to 0.439329, and reduced classification error to 0.06656, the best result to date.

The performance of convolutional neural networks on the ImageNet tests was close to that of humans.[54] The best algorithms still struggle with objects that are small or thin, such as a small ant on a stem of a flower or a person holding a quill in their hand.

Humans, for example, are not good at classifying objects into fine-grained categories such as the particular breed of dog or species of bird, whereas convolutional neural networks handle this well.

One approach is to treat space and time as equivalent dimensions of the input and perform convolutions in both time and space.[56][57] Another way is to fuse the features of two convolutional neural networks, one for the spatial and one for the temporal stream.[58][59][60]

CNN models are effective for various NLP problems and achieved excellent results in semantic parsing,[63] search query retrieval,[64] sentence modeling,[65] classification,[66] prediction[67] and other traditional NLP tasks.[68] CNNs have been used in drug discovery.

In 2015, Atomwise introduced AtomNet, the first deep learning neural network for structure-based rational drug design.[69] The system trains directly on 3-dimensional representations of chemical interactions.

Similar to how image recognition networks learn to compose smaller, spatially proximate features into larger, complex structures,[70] AtomNet discovers chemical features, such as aromaticity, sp3 carbons and hydrogen bonding.

Subsequently, AtomNet was used to predict novel candidate biomolecules for multiple disease targets, most notably treatments for the Ebola virus[71] and multiple sclerosis.[72] CNNs have been used in the game of checkers.

The learning process did not use prior human professional games, but rather focused on a minimal set of information contained in the checkerboard: the location and type of pieces, and the piece differential.

Ultimately, the program (Blondie24) was tested on 165 games against players and ranked in the highest 0.4%.[73][74] It also earned a win against the program Chinook at its 'expert' level of play.[75] CNNs have been used in computer Go.

In December 2014, Clark and Storkey published a paper showing that a CNN trained by supervised learning from a database of human professional games could outperform GNU Go and win some games against Monte Carlo tree search Fuego 1.1 in a fraction of the time it took Fuego to play.[76] Later it was announced that a large 12-layer convolutional neural network had correctly predicted the professional move in 55% of positions, equalling the accuracy of a 6 dan human player.

When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional search program GNU Go in 97% of games, and matched the performance of the Monte Carlo tree search program Fuego simulating ten thousand playouts (about a million positions) per move.[77]

A couple of CNNs for choosing moves to try ('policy network') and evaluating positions ('value network') driving MCTS were used by AlphaGo, the first program to beat the best human player at the time.[78] For many applications, little training data is available.

Other deep reinforcement learning models preceded it.[81] Convolutional deep belief networks (CDBN) have structure very similar to convolutional neural networks and are trained similarly to deep belief networks.

A time delay neural network allows speech signals to be processed time-invariantly, analogous to the translation invariance offered by CNNs.[84] They were introduced in the early 1980s.

This is a function of several factors, but the two most important ones are the size of the new dataset (small or big) and its similarity to the original dataset.

Keeping in mind that ConvNet features are more generic in early layers and more original-dataset-specific in later layers, here are some common rules of thumb for navigating the four major scenarios: if the new dataset is small and similar to the original, train a linear classifier on the top-layer features; if it is small but very different, train a classifier on activations from somewhere earlier in the network; if it is large and similar, fine-tune the full network; and if it is large and very different, fine-tuning from pretrained weights is still usually worthwhile.

Neural Networks 8: hidden units = features

What is a Neural Network - Ep. 2 (Deep Learning SIMPLIFIED)

With plenty of machine learning tools currently available, why would you ever choose an artificial neural network over all the rest?

Convolutional Neural Networks - Ep. 8 (Deep Learning SIMPLIFIED)

Out of all the current Deep Learning applications, machine vision remains one of the most popular.

Layers in a Neural Network explained

In this video, we explain the concept of layers in a neural network and show how to create and specify layers in code with Keras.

Convolutional Networks

This video is part of the Udacity course "Deep Learning".

Visualizing and Understanding Deep Neural Networks by Matt Zeiler

Matthew Zeiler, PhD, Founder and CEO of Clarifai Inc, speaks about large convolutional neural networks.

How Convolutional Neural Networks work

A gentle guided tour of Convolutional Neural Networks. Come lift the curtain and see how the magic is done.

Introduction to Deep Learning: What Are Convolutional Neural Networks?

Explore the basics behind convolutional neural networks (CNNs) in this MATLAB® Tech Talk.

Neural networks [7.3] : Deep learning - unsupervised pre-training

Recurrent Neural Networks - Ep. 9 (Deep Learning SIMPLIFIED)

Our previous discussions of deep net applications were limited to static patterns, but how can a net decipher and label patterns that change with time?