Chapter 19

Increasing Complexity with Linear and Nonlinear Tricks

In This Chapter

arrow Expanding your features using polynomials

arrow Regularizing regression

arrow Learning from big data

arrow Using support vector machines

Previous chapters introduced you to some of the simplest yet most effective machine-learning algorithms, such as linear and logistic regression, Naïve Bayes, and K-Nearest Neighbors (KNN). At this point, you can successfully complete a regression or classification project in data science. This chapter explores even more complex and powerful machine-learning techniques: enhancing your data with new features, controlling the variance of estimates through regularization, and learning from big data by breaking it into manageable chunks.
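
To give these ideas concrete shape before the chapter works through them, here is a minimal sketch in Scikit-learn. The diabetes dataset, the degree-2 expansion, alpha=1.0, and the ten simulated chunks are illustrative choices for this sketch, not the chapter's own examples:

import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge, SGDRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

X, y = load_diabetes(return_X_y=True)

# Trick 1: expand the features with polynomial terms, then rein in the
# extra variance with an L2 (ridge) penalty.
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      Ridge(alpha=1.0))
print(cross_val_score(model, X, y, cv=5, scoring='r2').mean())

# Trick 2: learn from data too large for memory by feeding an estimator
# that supports partial_fit one chunk at a time (out-of-core learning).
sgd = SGDRegressor(random_state=0)
for chunk in np.array_split(np.arange(len(X)), 10):  # simulated chunks
    sgd.partial_fit(X[chunk], y[chunk])
print(sgd.score(X, y))

The pipeline keeps the polynomial expansion and the regularized fit together so that cross-validation treats them as a single model, and the chunked loop mimics learning from data that never sits in memory all at once.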

This chapter also introduces you to the support vector machine (SVM), a powerful family of algorithms for classification and regression. SVMs can tackle the most difficult data problems and serve as an excellent substitute for neural networks such as the multilayer perceptron, which isn’t currently present in the Scikit-learn package ...
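
As a small preview, the following sketch trains an SVM classifier in Scikit-learn, assuming the bundled Iris dataset and an illustrative RBF kernel configuration (C=1.0, gamma='scale') rather than the settings the chapter tunes later:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# SVMs are sensitive to feature scale, so standardize before fitting.
svm = make_pipeline(StandardScaler(),
                    SVC(kernel='rbf', C=1.0, gamma='scale'))
svm.fit(X_train, y_train)
print(svm.score(X_test, y_test))

Standardizing first matters because the RBF kernel measures distances between observations, and features on larger scales would otherwise dominate those distances.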
