Chapter 9. Kernel Ridge Regression

Regression is probably one of the most ubiquitous tools in any machine learning toolkit. The idea is simple: fit a line to data that maps X to Y. You have probably seen plenty of regressions already; in many ways, regression is the most common case and our naive baseline. As you will see in this chapter, linear regression is a good starting point for prediction, but it breaks down quickly when you have only a few data points or when the relationship between X and Y isn't linear.
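To make that baseline concrete, here is a minimal sketch of an ordinary least-squares fit in plain Ruby. The data points are invented purely for illustration, and the closed-form slope and intercept formulas are the standard ones for a single feature:

# Fit y = slope * x + intercept by ordinary least squares.
# The data points below are made up for illustration.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 4.1, 6.2, 8.1, 9.9]

n      = xs.length.to_f
mean_x = xs.sum / n
mean_y = ys.sum / n

# Closed-form estimate: slope = cov(x, y) / var(x).
cov = xs.zip(ys).sum { |x, y| (x - mean_x) * (y - mean_y) }
var = xs.sum { |x| (x - mean_x)**2 }

slope     = cov / var
intercept = mean_y - slope * mean_x

puts format("y = %.3f * x + %.3f", slope, intercept)

Run against these sample points, this prints a line close to y = 2x, which is exactly the kind of fit that falls apart once the underlying relationship stops being linear.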

We will first introduce the problem of collaborative filtering and recommendation algorithms, then refine our approach until we arrive at kernel ridge regression. At the end of the chapter, we will code up our solution and test whether our assumptions hold.

Note

Regression, and by extension the Kernel Ridge Regression algorithm, is a supervised learning method. It places few restrictions on the problems it can solve, but it works best with continuous variables. It also has the benefit of smoothing the data and damping the influence of outliers.
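That smoothing comes from a penalty term. In its standard formulation (the one this chapter works its way up to), ridge regression minimizes the usual squared error plus a penalty on the magnitude of the weights:

\min_{w} \; \lVert Xw - y \rVert^{2} + \lambda \lVert w \rVert^{2}

The regularization parameter \lambda controls the trade-off: larger values shrink the weights, which smooths the fit and keeps any single outlier from pulling it too far.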

Collaborative Filtering

If you use Amazon to buy things, then you have seen collaborative filtering in action. In Amazon's case, the goal is to recommend products of interest to you so that you end up buying more. So, for instance, if you buy lots of beer, a good recommendation would be more beer.
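As a toy illustration of the idea, here is a user-based collaborative filter in plain Ruby. The purchase counts and user names are invented, and cosine similarity is just one common choice of similarity measure:

# A toy user-based collaborative filter. Purchase counts are invented.
purchases = {
  "alice" => { "ipa" => 5, "stout" => 2, "pretzels" => 1 },
  "bob"   => { "ipa" => 4, "stout" => 3 },
  "carol" => { "wine" => 5, "cheese" => 2 }
}

# Cosine similarity over the items two users share.
def similarity(a, b)
  shared = a.keys & b.keys
  return 0.0 if shared.empty?
  dot    = shared.sum { |item| a[item] * b[item] }
  norm_a = Math.sqrt(a.values.sum { |v| v * v })
  norm_b = Math.sqrt(b.values.sum { |v| v * v })
  dot / (norm_a * norm_b)
end

# Recommend items the most similar user bought that this user has not.
def recommend(user, purchases)
  mine = purchases[user]
  neighbor, _ = purchases
    .reject { |name, _| name == user }
    .max_by { |_, theirs| similarity(mine, theirs) }
  purchases[neighbor].keys - mine.keys
end

p recommend("bob", purchases)  # => ["pretzels"]

For bob, the filter finds that alice has the most similar purchase history and recommends pretzels, the one item she bought that he has not.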

But where collaborative filtering becomes more interesting is how it relates to other users. Given the ...
