In my previous article, I covered the intuition behind gradient descent, along with an implementation of the mathematics that drives the cost function toward its minimum value. It seemed as easy as walking down a hill (yeah, that's what gradient descent is).

The last article wound up with this line: sometimes, instead of reaching the global minimum, the value of the cost function gets stuck at a local minimum or a saddle point.
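To see what "getting stuck" looks like, here is a minimal sketch (not from the article): `f` below is a hypothetical double-well cost with one global and one local minimum, and where plain gradient descent ends up depends entirely on where it starts.

```python
# Hypothetical double-well cost: global minimum near x = -1,
# shallower local minimum near x = +1 (shifted by the 0.3*x term).
def f(x):
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):
    return 4 * x * (x**2 - 1) + 0.3   # derivative of f

def descend(x, lr=0.05, steps=500):
    for _ in range(steps):
        x -= lr * grad(x)             # standard gradient-descent update
    return x

x_global = descend(-1.0)   # settles in the global-minimum basin
x_local = descend(1.0)     # gets trapped in the local-minimum basin
```

Both runs converge, but the run started on the right ends at a point with a strictly higher cost: gradient descent has no way of knowing a better valley exists elsewhere.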

I would also like to show you something that may dampen the enthusiasm you have built up.

In Deep Learning, there are different types of cost…

Gradient Descent is the basic parameter-optimization technique used in the field of machine learning. It updates each parameter based on the slope of the cost function with respect to that parameter. Let's consider an example:

You are an adventurous person, and you get the chance to climb Mount Everest, Earth's highest peak above sea level. Your team starts climbing, and after a lot of effort you all reach the top. You decide to celebrate this victory, so one of your friends pops a champagne bottle. …
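The slope-based update described above can be sketched in a few lines (the function and variable names here are my own illustration, not the article's): each step moves the parameter against the slope, scaled by a learning rate, like taking small steps downhill.

```python
# Minimal sketch of the gradient-descent update rule:
#   w := w - learning_rate * slope
def cost(w):
    return (w - 3.0)**2        # simple quadratic cost, minimum at w = 3

def slope(w):
    return 2.0 * (w - 3.0)     # d(cost)/dw

w = 0.0                        # arbitrary starting point
for _ in range(200):
    w -= 0.1 * slope(w)        # step downhill against the slope
# w has now converged very close to 3
```

On a bowl-shaped cost like this one there is only one valley, so the walk downhill always reaches the minimum; the trouble described earlier only appears once the landscape has multiple valleys or flat saddle regions.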

Pradyumna Yadav
