Summary

Ensemble learning is a method for building highly accurate classifiers by combining weak or less accurate ones. In this chapter, we discussed several methods for constructing ensembles and reviewed the three fundamental reasons why an ensemble can outperform any single classifier it contains.

We discussed bagging and boosting in detail. Bagging, also known as bootstrap aggregation, generates additional training sets by sampling the original dataset with replacement; each base learner is trained on its own bootstrap sample, and their predictions are aggregated, typically by voting. We also learned why AdaBoost performs so well and examined random forests in detail. Random forests are accurate and efficient, and they resist overfitting as more trees are added. We also studied how and why they are considered ...
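
To make the bootstrap-and-vote idea concrete, here is a minimal Julia sketch of bagging. The names bootstrap_indices, bag, vote, and the fit_stub/predict_stub placeholders are illustrative assumptions, not functions from the chapter; any base learner exposing fit and predict functions of these shapes could be plugged in.

    using Random

    # Draw one bootstrap sample: n row indices sampled with replacement.
    bootstrap_indices(rng, n) = rand(rng, 1:n, n)

    # Majority vote over a collection of labels (Base-only helper).
    function majority(votes)
        counts = Dict{eltype(votes),Int}()
        for v in votes
            counts[v] = get(counts, v, 0) + 1
        end
        argmax(counts)   # key with the highest count (Julia >= 1.7)
    end

    # Fit `models` base learners, each on its own bootstrap sample of (X, y).
    # `fit` is any (X, y) -> model function; a placeholder, not an API
    # from the chapter.
    function bag(fit, X, y; models = 25, rng = Random.default_rng())
        n = size(X, 1)
        map(1:models) do _
            idx = bootstrap_indices(rng, n)
            fit(X[idx, :], y[idx])
        end
    end

    # Aggregate the ensemble's predictions for X by majority vote.
    function vote(predict, ensemble, X)
        preds = [predict(m, X) for m in ensemble]   # one label vector per model
        [majority(getindex.(preds, i)) for i in 1:size(X, 1)]
    end

    # Toy usage with a deliberately weak base learner that simply
    # predicts the majority class of its bootstrap sample.
    fit_stub(X, y) = majority(y)
    predict_stub(model, X) = fill(model, size(X, 1))

    X = randn(100, 3)
    y = rand(Bool, 100)
    ensemble = bag(fit_stub, X, y; models = 11)
    ŷ = vote(predict_stub, ensemble, X)

With a real base learner such as a decision tree in place of the stub, adding more bootstrap-trained models mainly reduces variance rather than bias, which is the core reason bagging improves on a single unstable classifier.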
