Optimizing the learning rate

Recall from Chapter 2, Machine Learning Definitions and Concepts, in the section Regularization on linear models, that the Stochastic Gradient Descent (SGD) algorithm has a parameter called the learning rate.

SGD is based on the idea of using each new data sample (or block of samples) to make small corrections to the coefficients of the linear regression model. At each iteration, the incoming samples are processed either one at a time or block by block to estimate the best correction (the so-called gradient) to apply to the regression coefficients so as to further reduce the estimation error. It has been shown that, provided the learning rate is chosen appropriately (for instance, decreased over the iterations), the SGD algorithm converges to the optimal linear regression weights. These ...
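The sample-by-sample update described above can be sketched in a few lines of NumPy. This is a minimal illustration, not Amazon ML's implementation: the synthetic data (a line with slope 3 and intercept 2 plus noise), the fixed learning rate of 0.05, and the epoch count are all assumptions made for the example.

```python
import numpy as np

# Synthetic data for illustration: y = 3*x + 2 plus Gaussian noise (assumed)
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(0, 0.1, size=200)

def sgd_linear_regression(X, y, learning_rate=0.05, epochs=50):
    """Sample-by-sample SGD for linear regression with squared error."""
    n_samples, n_features = X.shape
    w = np.zeros(n_features)
    b = 0.0
    for _ in range(epochs):
        # Visit the samples in a random order each epoch
        for i in rng.permutation(n_samples):
            error = (X[i] @ w + b) - y[i]
            # Gradient of the squared error for this single sample
            w -= learning_rate * error * X[i]
            b -= learning_rate * error
    return w, b

w, b = sgd_linear_regression(X, y)
print(w, b)  # approaches the true slope 3 and intercept 2
```

With a learning rate that is too large the updates overshoot and the weights oscillate or diverge; with one that is too small, many more epochs are needed to approach the optimum, which is exactly the trade-off this section is about.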
