Ensemble Methods in Data Mining

Book Description

Ensemble methods have been called the most influential development in Data Mining and Machine Learning in the past decade. They combine multiple models into one that is usually more accurate than the best of its components. Ensembles can provide a critical boost to industrial challenges -- from investment timing to drug discovery, and from fraud detection to recommendation systems -- where predictive accuracy is more vital than model interpretability.

Ensembles are useful with all modeling algorithms, but this book focuses on decision trees to explain them most clearly. After describing trees and their strengths and weaknesses, the authors provide an overview of regularization -- today understood to be a key reason for the superior performance of modern ensembling algorithms. The book continues with a clear description of two recent developments: Importance Sampling (IS) and Rule Ensembles (RE). IS reveals classic ensemble methods -- bagging, random forests, and boosting -- to be special cases of a single algorithm, thereby showing how to improve their accuracy and speed. REs are linear rule models derived from decision tree ensembles. They are the most interpretable version of ensembles, which is essential to applications such as credit scoring and fault diagnosis. Lastly, the authors explain the paradox of how ensembles achieve greater accuracy on new data despite their (apparently much greater) complexity.

This book is aimed at novice and advanced analytic researchers and practitioners -- especially in Engineering, Statistics, and Computer Science. Those with little exposure to ensembles will learn why and how to employ this breakthrough method, and advanced practitioners will gain insight into building even more powerful models. Throughout, snippets of code in R are provided to illustrate the algorithms described and to encourage the reader to try the techniques.
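The central idea -- averaging many models to get one that is more accurate than any single component -- can be sketched in a few lines of R, the language the book's own snippets use. The following is an illustrative bagging example using the standard `rpart` tree package, not code from the book itself:

```r
# Minimal bagging sketch: average the predictions of B trees, each fit
# to a bootstrap resample of the data. Assumes the recommended rpart
# package (shipped with R) is installed.
library(rpart)

set.seed(42)
n <- 200
x <- runif(n, -3, 3)
y <- sin(x) + rnorm(n, sd = 0.3)   # noisy nonlinear target
dat <- data.frame(x = x, y = y)

B <- 25  # number of bootstrap replicates
preds <- sapply(seq_len(B), function(b) {
  boot <- dat[sample(n, replace = TRUE), ]  # bootstrap sample
  fit  <- rpart(y ~ x, data = boot)         # fit one tree
  predict(fit, newdata = dat)               # predict on the original data
})

bagged <- rowMeans(preds)  # the ensemble: average of the B tree predictions
single <- predict(rpart(y ~ x, data = dat), newdata = dat)

# The bagged fit is typically smoother and closer to sin(x) than a
# single tree's piecewise-constant fit.
mean((bagged - sin(x))^2)
mean((single - sin(x))^2)
```

The averaging step is what tames the high variance of individual trees; the book's chapters on bagging, random forests, and boosting develop this idea far more carefully.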
The authors are industry experts in data mining and machine learning who are also adjunct professors and popular speakers. Early pioneers in discovering and using ensembles themselves, they here distill and clarify the recent groundbreaking work of leading academics (such as Jerome Friedman) to bring the benefits of ensembles to practitioners.

Table of Contents

  1. Cover
  2. Synthesis Lectures on Data Mining and Knowledge Discovery
  3. Copyright
  4. Title Page
  5. Dedication
  6. Contents
  7. Acknowledgments
  8. Foreword by Jaffray Woodriff
  9. Foreword by Tin Kam Ho
  10. 1 Ensembles Discovered
    1. 1.1 Building Ensembles
    2. 1.2 Regularization
    3. 1.3 Real-World Examples: Credit Scoring + the Netflix Challenge
    4. 1.4 Organization of This Book
  11. 2 Predictive Learning and Decision Trees
    1. 2.1 Decision Tree Induction Overview
    2. 2.2 Decision Tree Properties
    3. 2.3 Decision Tree Limitations
  12. 3 Model Complexity, Model Selection and Regularization
    1. 3.1 What is the “Right” Size of a Tree?
    2. 3.2 Bias-Variance Decomposition
    3. 3.3 Regularization
      1. 3.3.1 Regularization and Cost-Complexity Tree Pruning
      2. 3.3.2 Cross-Validation
      3. 3.3.3 Regularization via Shrinkage
      4. 3.3.4 Regularization via Incremental Model Building
      5. 3.3.5 Example
      6. 3.3.6 Regularization Summary
  13. 4 Importance Sampling and the Classic Ensemble Methods
    1. 4.1 Importance Sampling
      1. 4.1.1 Parameter Importance Measure
      2. 4.1.2 Perturbation Sampling
    2. 4.2 Generic Ensemble Generation
    3. 4.3 Bagging
      1. 4.3.1 Example
    2. 4.3.2 Why It Helps?
    4. 4.4 Random Forest
    5. 4.5 AdaBoost
      1. 4.5.1 Example
      2. 4.5.2 Why the Exponential Loss?
      3. 4.5.3 AdaBoost’s Population Minimizer
    6. 4.6 Gradient Boosting
    7. 4.7 MART
    8. 4.8 Parallel vs. Sequential Ensembles
  14. 5 Rule Ensembles and Interpretation Statistics
    1. 5.1 Rule Ensembles
    2. 5.2 Interpretation
      1. 5.2.1 Simulated Data Example
      2. 5.2.2 Variable Importance
      3. 5.2.3 Partial Dependences
      4. 5.2.4 Interaction Statistic
    3. 5.3 Manufacturing Data Example
    4. 5.4 Summary
  15. 6 Ensemble Complexity
    1. 6.1 Complexity
    2. 6.2 Generalized Degrees of Freedom
    3. 6.3 Examples: Decision Tree Surface with Noise
    4. 6.4 R Code for GDF and Example
    5. 6.5 Summary and Discussion
  16. A AdaBoost Equivalence to FSF Procedure
  17. B Gradient Boosting and Robust Loss Functions
  18. Bibliography
  19. Authors’ Biographies