Introducing polynomial regression

In two dimensions, with one predictor and one outcome, linear modeling is all about finding the best line that approximates your data. In three dimensions (two predictors and one outcome), the goal becomes finding the best plane, the best flat surface, that approximates your data. In N dimensions the surface becomes a hyperplane, but the goal is always the same: to find the hyperplane of dimension N-1 that best approximates the data for regression, or that best separates the classes for classification. That hyperplane is always flat.
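As a minimal sketch of the two-predictor case, the following fits a flat plane by ordinary least squares with NumPy. The synthetic data here is an assumption for illustration (it is not the book's dataset): the outcome is generated exactly from a known plane, so the fit recovers the true coefficients.

```python
import numpy as np

# Two predictors, one outcome: find the best flat plane by least squares.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))          # predictors x1, x2 (illustrative data)
y = 3 * X[:, 0] - 2 * X[:, 1] + 5      # outcome lies on the plane y = 3*x1 - 2*x2 + 5

# Design matrix with a column of ones for the intercept.
A = np.column_stack([X, np.ones(len(X))])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)

print(np.round(coefs, 3))  # recovers the slope and intercept: 3, -2, 5
```

With no noise in the outcome, the least-squares solution reproduces the generating plane exactly; with noisy data it would return the closest flat plane in the squared-error sense.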

Coming back to the highly non-linear two-dimensional dataset we created, it is obvious that no straight line can properly approximate the relation between the predictor ...
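To make the limitation concrete, here is a small sketch comparing a straight-line fit against a polynomial fit on an assumed non-linear dataset (a noisy parabola; the book's actual dataset is not reproduced here). The quadratic fit leaves a far smaller sum of squared errors than the line can achieve.

```python
import numpy as np

# Hypothetical non-linear dataset: a parabola with a little noise.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 50)
y = x**2 + rng.normal(scale=0.1, size=x.size)

# Least-squares fits: degree 1 (a straight line) vs degree 2 (a parabola).
line_coefs = np.polyfit(x, y, deg=1)
quad_coefs = np.polyfit(x, y, deg=2)

# Sum of squared errors for each model.
line_sse = np.sum((np.polyval(line_coefs, x) - y) ** 2)
quad_sse = np.sum((np.polyval(quad_coefs, x) - y) ** 2)

print(f"line SSE: {line_sse:.2f}, quadratic SSE: {quad_sse:.2f}")
```

This is the core idea behind polynomial regression: the model is still linear in its coefficients, so the same least-squares machinery applies, but the curve it produces can bend with the data.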
