Gradient Boosting Regressor with LAD

More than a new technique, this is an ensemble of methods already covered in this book, combined with a new loss function: Least Absolute Deviations (LAD). Whereas the least squares function, seen in the previous chapter, penalizes the squared error, LAD computes the L1 norm of the error.
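Concretely, where ordinary least squares minimizes the sum of squared residuals, LAD minimizes the sum of absolute residuals; this is the standard formulation, shown here only for reference:

$$ L_{LAD}(y, \hat{y}) = \sum_{i=1}^{n} \lvert y_i - \hat{y}_i \rvert \qquad \text{versus} \qquad L_{OLS}(y, \hat{y}) = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 $$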

Regressors trained with LAD are typically robust to outliers but unstable, because the loss function can have multiple minima (and therefore multiple equally good solutions). On its own, this loss function seems to bear little value, but paired with gradient boosting it produces a very stable regressor, since boosting compensates for the limitations of LAD regression. In code, this is simple to achieve:

In: from sklearn.ensemble import GradientBoostingRegressor ...
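Since the listing above is truncated, here is a minimal, self-contained sketch of what a LAD-based gradient boosting fit can look like. The synthetic dataset, variable names, and hyperparameters are illustrative assumptions rather than the book's own, and note that recent scikit-learn releases name the LAD loss 'absolute_error' (older releases used loss='lad'):

In: from sklearn.datasets import make_regression
    from sklearn.model_selection import train_test_split
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_error

    # Synthetic data, only to make the snippet self-contained
    X, y = make_regression(n_samples=500, n_features=10,
                           noise=10.0, random_state=101)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=101)

    # LAD loss: 'absolute_error' in scikit-learn >= 1.0
    # (older releases use loss='lad')
    GBR = GradientBoostingRegressor(loss='absolute_error',
                                    n_estimators=500,
                                    learning_rate=0.01,
                                    random_state=101)
    GBR.fit(X_train, y_train)

    # Evaluate on the held-out set with the matching L1 metric
    print(mean_absolute_error(y_test, GBR.predict(X_test)))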
