Machine Learning for Hackers

Chapter 6. Regularization: Text Regression

Nonlinear Relationships Between Columns: Beyond Straight Lines

While we told you the truth in Chapter 5 when we said that linear regression assumes that the relationship between two variables is a straight line, it turns out you can also use linear regression to capture relationships that aren’t well-described by a straight line. To show you what we mean, imagine that you have the data shown in panel A of Figure 6-1.

Figure 6-1. Modeling nonlinear data: (A) visualizing nonlinear relationships; (B) nonlinear relationships and linear regression; (C) structured residuals; (D) results from a generalized additive model
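To make this concrete, here is a minimal sketch of how you might simulate data with this kind of shape. The parabolic curve and noise level are our own invented example for illustration, not necessarily the data behind Figure 6-1:

    library(ggplot2)

    # Hypothetical example: a parabolic relationship plus Gaussian noise.
    set.seed(1)
    x <- seq(-10, 10, by = 0.01)
    y <- 1 - x ^ 2 + rnorm(length(x), 0, 5)
    df <- data.frame(X = x, Y = y)

    # Panel A analogue: the raw scatterplot, with no model overlaid.
    ggplot(df, aes(x = X, y = Y)) +
      geom_point(alpha = 0.25)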

It’s obvious from looking at this scatterplot that the relationship between X and Y isn’t well-described by a straight line. Indeed, plotting the regression line shows us exactly what will go wrong if we try to use a line to capture the pattern in this data; panel B of Figure 6-1 shows the result.
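Continuing with the simulated data frame from the sketch above, we can overlay the straight-line fit to produce something like panel B:

    # Panel B analogue: overlay the least-squares line on the scatterplot.
    ggplot(df, aes(x = X, y = Y)) +
      geom_point(alpha = 0.25) +
      geom_smooth(method = 'lm', se = FALSE)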

We can see that we make systematic errors in our predictions if we use a straight line: at small and large values of x, we overpredict y, and we underpredict y for medium values of x. This is easiest to see in a residuals plot, as shown in panel C of Figure 6-1. In this plot, you can see all of the structure of the original data set, as none of the structure is captured by the default linear regression model.
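Again assuming the simulated df from the earlier sketch, a residuals plot along the lines of panel C takes only a few lines:

    # Panel C analogue: residuals from the linear fit, plotted against X.
    # The leftover curvature is the structure the straight line missed.
    df$Residuals <- residuals(lm(Y ~ X, data = df))

    ggplot(df, aes(x = X, y = Residuals)) +
      geom_point(alpha = 0.25) +
      geom_hline(yintercept = 0, linetype = 'dashed')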

Using ggplot2’s geom_smooth function without any method argument, we can fit a smoother, nonlinear model to this data; panel D of Figure 6-1 shows the result, a fit from a generalized additive model (GAM).
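Here is a sketch of that with our simulated data. Left without a method argument, geom_smooth chooses its own smoother; for a data set this large, it fits a generalized additive model via the mgcv package:

    # Panel D analogue: let geom_smooth pick its default smoother.
    # With 2,001 observations it fits a generalized additive model.
    ggplot(df, aes(x = X, y = Y)) +
      geom_point(alpha = 0.25) +
      geom_smooth(se = FALSE)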
