CHAPTER 6
Ensemble Methods

Ensemble methods stem from the observation that multiple models give better performance than a single model if the models are somewhat independent of one another. A classifier that gives you the correct result 55% of the time is only mediocre, but if you've got 100 classifiers, each independently correct 55% of the time, the probability that the majority of them are right jumps to roughly 82%. (Google "cumulative binomial probability" and try some numbers yourself.)
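To make that arithmetic concrete, here is a quick check of the claim (a sketch using scipy, not code from this book):

```python
# Probability that a majority of 100 classifiers is correct, assuming
# each one is independently correct with probability 0.55.
from scipy.stats import binom

n, p = 100, 0.55
# A majority means 51 or more of the 100 are right; sf(50) = P(X > 50).
p_majority = binom.sf(50, n, p)
print(f"P(majority correct) = {p_majority:.2f}")  # prints roughly 0.82
```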

One way to get a variety of models that are somewhat independent from one another is to use different machine learning algorithms. For example, you can build models with support vector machines, linear regression, k-nearest neighbors, binary decision trees, and so on. But it's difficult to get a very long list of models that way. Besides, it's tedious: the different algorithms all have different parameters that need to be tuned separately and may have different requirements on the input data, so the models need to be coded separately. That's not going to work for generating hundreds or thousands of models (which you'll be doing soon).
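Here is a minimal sketch of that mix-and-match approach, assuming scikit-learn, a hard-voting combiner, and an illustrative dataset (none of which come from this book; logistic regression stands in for linear regression since this is a classification example). Even three models need three separate configurations:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Illustrative data; any binary classification set would do.
X, y = make_classification(n_samples=500, random_state=0)

# Each model has its own knobs to turn, and each would need its own
# preprocessing in a real problem: exactly the tedium described above.
ensemble = VotingClassifier(estimators=[
    ("logit", LogisticRegression(max_iter=1000)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("tree", DecisionTreeClassifier(max_depth=3)),
])
ensemble.fit(X, y)
print("training accuracy:", ensemble.score(X, y))
```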

Therefore, the key to ensemble methods is to develop an algorithmic approach that generates numerous somewhat-independent models, which are then combined into an ensemble. In this chapter you learn how the most popular ensemble methods accomplish this. The chapter teaches you their mechanics and outlines the basic structure of the algorithms ...
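As a preview of what such an algorithmic approach can look like, here is a hand-rolled sketch of bootstrap aggregation (bagging), one of the standard ensemble methods: it generates 100 models from a single algorithm by training each on a different random resampling of the data, then combines them by majority vote. The dataset and settings are illustrative assumptions, not this book's code.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)
rng = np.random.default_rng(0)

# Generate 100 somewhat-independent models from one algorithm by
# training each tree on a different bootstrap sample of the rows.
models = []
for _ in range(100):
    rows = rng.integers(0, len(X), size=len(X))  # sample with replacement
    models.append(DecisionTreeClassifier(max_depth=3).fit(X[rows], y[rows]))

# Combine the models by majority vote.
votes = np.mean([m.predict(X) for m in models], axis=0)
print("training accuracy:", np.mean((votes > 0.5) == y))
```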
