# AI News: Intuition of Gradient Descent for Machine Learning

There are many powerful ML algorithms that use gradient descent, such as linear regression, logistic regression, support vector machines (SVMs), and neural networks.

When we think about gradient descent, we tend to remember a terrible mathematical formula and picture some horrible 3D surface plots like the following. The pictures are as terrifying as the formulas!

Suppose we have a function f(x) = x² + 5, along with some values of x: x = −6, −5, −4, −3, −2, −1, 0, 1, 2, 3, 4, 5, 6. Our task is to find the minimum value of the function.

We can write the above function and generate its output in Python as below. Then let's see what the function looks like graphically for the given values.
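The original code block is not shown here, but a minimal sketch of evaluating f(x) = x² + 5 over the given values might look like this (a plotting call, e.g. with matplotlib, would follow to draw the graph):

```python
# f(x) = x**2 + 5 evaluated over the given values of x.
def f(x):
    return x**2 + 5

xs = [-6, -5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5, 6]
ys = [f(x) for x in xs]
print(ys)  # [41, 30, 21, 14, 9, 6, 5, 6, 9, 14, 21, 30, 41]
```

Plotting ys against xs gives the familiar upward-opening parabola, with its lowest point at x = 0.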

🤘 Now, imagine we put a ball on the upper edge of the plot and give it a little push, as we did before in the pond picture.

It'll tell us the direction of the downward slope and the number of units to roll. You'll find this formula written with different notations; one common form is xₙ₊₁ = xₙ − γ·f′(xₙ). Now, let's learn the meaning of the notations used in the formula. As we said before, we can set the value of γ (the learning rate), and we can also set the current position x.

So, let's differentiate our function f(x) = x² + 5 and see what we get. If we've forgotten calculus, there's nothing to worry about.

In the following Python code, the sympy package finds the derivative of our function f(x) = x² + 5.
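The original snippet is not shown here; a minimal sketch of using sympy for this differentiation could be:

```python
import sympy as sp

# Define the symbolic variable and the function f(x) = x**2 + 5.
x = sp.symbols('x')
f = x**2 + 5

# Differentiate f with respect to x.
f_prime = sp.diff(f, x)
print(f_prime)  # 2*x
```

So the derivative is f′(x) = 2x, which we will plug into the gradient descent update rule.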

Reasons to take the first derivative: 👉 The first derivative tells us the direction of the function, i.e., whether the function is increasing or decreasing at a point. For example, f′(−3) = −6 < 0, so our function is decreasing at x = −3, and the ball should roll to the right.

The formula for gradient descent we discussed above tells us the direction and number of units to roll for the next position.

Let's write the formula of the algorithm in a slightly nicer way. Finally, we know all the concepts we need to write the code for gradient descent.
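The article's own implementation is not shown here, so below is a minimal sketch of gradient descent for f(x) = x² + 5, using the derivative f′(x) = 2x; the function name, starting point, learning rate, and stopping tolerance are all illustrative choices:

```python
# Minimal gradient descent for f(x) = x**2 + 5, whose derivative is f'(x) = 2x.
def gradient_descent(start_x, learning_rate=0.1, n_iterations=100, tolerance=1e-6):
    x = start_x
    for _ in range(n_iterations):
        gradient = 2 * x                # f'(x) = 2x
        step = learning_rate * gradient # how far to roll this iteration
        if abs(step) < tolerance:       # stop once the steps become tiny
            break
        x = x - step                    # move against the gradient (downhill)
    return x

x_min = gradient_descent(start_x=6.0)
print(round(x_min, 4))          # 0.0
print(round(x_min**2 + 5, 4))   # 5.0
```

Starting from x = 6, each update shrinks x toward 0, where the slope 2x vanishes and the function reaches its minimum value of 5.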

So, we've got our expected minimum at x = 0, which makes the function output its minimum value, 5.
