Stochastic Gradient Descent

As we have seen, in the GD algorithm we calculate the gradient of the cost function on the complete set of data at our disposal; this is why it is also called batch GD. If the dataset is very large, GD can be quite expensive, as we take only a single step per pass over the whole dataset. So the bigger the dataset, the slower the algorithm is at updating the weights, and the longer it takes to converge to the global minimum of the cost function.
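To make the cost of a single batch GD step concrete, the following minimal sketch fits a simple linear regression with batch gradient descent; the simulated data, the variable names (x, y, w), the learning rate eta, and the number of epochs are all illustrative assumptions, not taken from the book. Note that every update uses every observation.

# Minimal sketch of batch gradient descent for simple linear regression
# (illustrative only; data, names, and learning rate are assumptions)
set.seed(1)
x <- runif(100)
y <- 2 + 3 * x + rnorm(100, sd = 0.1)

w   <- c(0, 0)   # w[1] = intercept, w[2] = slope
eta <- 0.1       # learning rate

for (epoch in 1:500) {
  y_hat <- w[1] + w[2] * x
  err   <- y_hat - y
  # gradient of the mean squared error over the WHOLE dataset
  grad  <- c(mean(err), mean(err * x))
  w     <- w - eta * grad
}
w   # should end up close to c(2, 3)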

The SGD algorithm is a simplification of the GD algorithm. Instead of calculating the gradient exactly over the whole dataset, at each iteration the gradient is computed on a single randomly selected observation.

The term stochastic derives from the fact that the gradient based on a single training example is a noisy (stochastic) approximation of the true gradient computed over the entire dataset.
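The sketch below adapts the previous example to SGD: each update uses one randomly drawn observation, so the weights are refreshed after every single data point rather than after a full pass. Again, the data, names, learning rate, and iteration count are assumptions for illustration.

# Minimal sketch of stochastic gradient descent:
# one randomly chosen observation per update (illustrative assumptions)
set.seed(1)
x <- runif(100)
y <- 2 + 3 * x + rnorm(100, sd = 0.1)

w   <- c(0, 0)
eta <- 0.05
n   <- length(y)

for (iter in 1:5000) {
  i     <- sample(n, 1)                 # pick a single observation at random
  err_i <- (w[1] + w[2] * x[i]) - y[i]
  grad  <- c(err_i, err_i * x[i])       # noisy gradient from one point
  w     <- w - eta * grad
}
w   # noisier path than batch GD, but still close to c(2, 3)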
