What is the Difference Between Deep Learning and “Regular” Machine Learning?

The tl;dr version of this is: Deep learning is essentially a set of techniques that help us parameterize deep neural network structures -- neural networks with many, many layers and parameters.

For example, think of a log-sigmoid unit in our network as a logistic regression unit that returns continuous output values in the range 0-1.
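
To make the analogy concrete, here is a minimal NumPy sketch of such a unit; the input, weights, and bias are made-up values for illustration:

```python
import numpy as np

def sigmoid(z):
    # Logistic function: squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def log_sigmoid_unit(x, w, b):
    # A single log-sigmoid unit: a weighted sum followed by the logistic
    # activation -- exactly the form of a logistic regression model.
    return sigmoid(np.dot(w, x) + b)

# Toy example (hypothetical values, chosen only for illustration)
x = np.array([0.5, -1.2, 3.0])   # input features
w = np.array([0.4, 0.1, -0.7])   # weights
b = 0.1                          # bias
print(log_sigmoid_unit(x, w, b))  # a value strictly between 0 and 1
```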

During training, we then use the popular backpropagation algorithm (think of it as reverse-mode auto-differentiation) to propagate the 'errors' from right to left and compute the partial derivative of the cost with respect to each weight, so that we can take a step in the opposite direction of the cost (or 'error') gradient. Now, the problem with deep neural networks is the so-called 'vanishing gradient' -- the more layers we add, the harder it becomes to update our weights, because the error signal becomes weaker and weaker as it travels backward through the network.
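
A small sketch can make this effect visible. The following toy backward pass (a sketch assuming random weights and sigmoid activations; the depth and layer width are arbitrary) shows the gradient norm shrinking layer by layer as the signal propagates from right to left:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_layers, width = 10, 8

# Forward pass: a stack of sigmoid layers with random weights
Ws, activations = [], [rng.normal(size=width)]
for _ in range(n_layers):
    W = rng.normal(scale=0.5, size=(width, width))
    Ws.append(W)
    activations.append(sigmoid(W @ activations[-1]))

# Backward pass (reverse-mode): at each layer the gradient picks up a
# factor sigma'(z) = a * (1 - a) <= 0.25, so its norm tends to shrink
# as the error signal travels from the output back toward the input.
grad = np.ones(width)
for W, a in zip(reversed(Ws), reversed(activations[1:])):
    grad = W.T @ (grad * a * (1.0 - a))
    print(f"gradient norm: {np.linalg.norm(grad):.2e}")
```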

Of course, there must be sufficient discriminatory information in our dataset; however, the performance of machine learning algorithms can suffer substantially when that information is buried in meaningless features.

Deep learning is about algorithms that do the feature engineering for us, providing the deep neural network with meaningful information so that it can learn more effectively. We can think of deep learning as algorithms for automatic 'feature engineering,' or we could simply call them 'feature detectors,' which help us overcome the vanishing gradient challenge and facilitate learning in neural networks with many layers.

The idea is that if a feature detector is useful in one part of the image, it is likely to be useful somewhere else as well; at the same time, this allows each patch of the image to be represented in several ways.

Via the convolutional layers we aim to extract the useful features from the images, and via the pooling layers we aim to make the features somewhat invariant to scale and translation.
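
As a rough sketch of this architecture, here is a minimal convolutional network using TensorFlow's Keras API (which one of the videos below also uses); the input shape and layer sizes are placeholders, not recommendations:

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),                       # e.g. grayscale images
    layers.Conv2D(16, kernel_size=3, activation="relu"),   # feature detectors
    layers.MaxPooling2D(pool_size=2),                      # downsample: small shifts matter less
    layers.Conv2D(32, kernel_size=3, activation="relu"),
    layers.MaxPooling2D(pool_size=2),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),                # e.g. a 10-class output
])
model.summary()
```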

What is a Neural Network - Ep. 2 (Deep Learning SIMPLIFIED)

With plenty of machine learning tools currently available, why would you ever choose an artificial neural network over all the rest? This clip and the next could ...

Recurrent Neural Networks - Ep. 9 (Deep Learning SIMPLIFIED)

Our previous discussions of deep net applications were limited to static patterns, but how can a net decipher and label patterns that change with time?

Artificial Neural Networks explained

In this video, we explain the concept of artificial neural networks and show how to create one (specifically, a multilayer perceptron or MLP) in code with Keras.
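
The video builds its own model, but a minimal MLP in Keras generally looks something like this sketch (the four-feature input and the layer sizes are placeholders):

```python
from tensorflow.keras import layers, models

# A small multilayer perceptron: one hidden layer between input and output
mlp = models.Sequential([
    layers.Input(shape=(4,)),                # four input features (placeholder)
    layers.Dense(8, activation="relu"),      # hidden layer
    layers.Dense(3, activation="softmax"),   # e.g. three output classes
])
mlp.compile(optimizer="adam", loss="categorical_crossentropy",
            metrics=["accuracy"])
mlp.summary()
```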

Neural Networks 8: hidden units = features

Neural Networks 6: solving XOR with a hidden layer
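
For reference, a small sketch of the idea in Keras: XOR is not linearly separable, so at least one hidden layer is required. The hidden-layer width and epoch count here are arbitrary, and a net this small can occasionally fail to converge on a given run:

```python
import numpy as np
from tensorflow.keras import layers, models

# XOR truth table
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

xor_net = models.Sequential([
    layers.Input(shape=(2,)),
    layers.Dense(4, activation="tanh"),      # the hidden layer that makes XOR solvable
    layers.Dense(1, activation="sigmoid"),
])
xor_net.compile(optimizer="adam", loss="binary_crossentropy")
xor_net.fit(X, y, epochs=2000, verbose=0)    # tiny net, many epochs
print(xor_net.predict(X).round())            # ideally [[0], [1], [1], [0]]
```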

Lecture 1 | Building a Linear Classifier (MLP) With Deeplearning4j

Lecture by Instructor Tom Hanlon on Machine Learning. Tom provides an overview of how to build a simple neural net in this introductory tutorial.

How to Predict Stock Prices Easily - Intro to Deep Learning #7

We're going to predict the closing price of the S&P 500 using a special type of recurrent neural network called an LSTM network. I'll explain why we use ...
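
The video builds its own pipeline; as a hedged sketch of the general shape of such a model in Keras, here a sine wave stands in for real price data, and the window length and layer sizes are placeholders:

```python
import numpy as np
from tensorflow.keras import layers, models

# Given a window of past (normalized) values, predict the next one.
window = 50
series = np.sin(np.linspace(0, 100, 1000))   # stand-in for closing prices
X = np.array([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]
X = X[..., np.newaxis]                       # shape: (samples, timesteps, features)

model = models.Sequential([
    layers.Input(shape=(window, 1)),
    layers.LSTM(32),                         # remembers patterns across time steps
    layers.Dense(1),                         # next-step regression output
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
print(model.predict(X[:1]))                  # predicted next value
```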

Your choice of Deep Net - Ep. 4 (Deep Learning SIMPLIFIED)

Deep Nets come in a large variety of structures and sizes, so how do you decide which kind to use? The answer depends on whether you are classifying objects ...

Neural Networks 7: universal approximation

Deep Learning Decal Fall 2017 Day 6: Autoencoders and Representation Learning

Day 6 of the Deep Learning Decal, hosted by Machine Learning at Berkeley. This lecture covers autoencoders and representation learning.
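
To illustrate what an autoencoder looks like in code, here is a minimal sketch in Keras; the 784-dimensional input (e.g. a flattened 28x28 image) and the 32-unit bottleneck are placeholder choices:

```python
from tensorflow.keras import layers, models

# The encoder compresses the input into a small code (the learned
# representation); the decoder reconstructs the input from that code.
inputs = layers.Input(shape=(784,))                      # flattened image
code = layers.Dense(32, activation="relu")(inputs)       # bottleneck representation
outputs = layers.Dense(784, activation="sigmoid")(code)  # reconstruction

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")        # reconstruction error
autoencoder.summary()
```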