Machine Learning for Hackers

  1. Machine Learning for Hackers
  2. Preface
    1. Machine Learning for Hackers
    2. How This Book Is Organized
    3. Conventions Used in This Book
    4. Using Code Examples
    5. Safari® Books Online
    6. How to Contact Us
    7. Acknowledgments
  3. 1. Using R
    1. R for Machine Learning
      1. Downloading and Installing R
      2. IDEs and Text Editors
      3. Loading and Installing R Packages
      4. R Basics for Machine Learning
      5. Further Reading on R
  4. 2. Data Exploration
    1. Exploration versus Confirmation
    2. What Is Data?
    3. Inferring the Types of Columns in Your Data
    4. Inferring Meaning
    5. Numeric Summaries
    6. Means, Medians, and Modes
    7. Quantiles
    8. Standard Deviations and Variances
    9. Exploratory Data Visualization
    10. Visualizing the Relationships Between Columns
  5. 3. Classification: Spam Filtering
    1. This or That: Binary Classification
    2. Moving Gently into Conditional Probability
    3. Writing Our First Bayesian Spam Classifier
      1. Defining the Classifier and Testing It with Hard Ham
      2. Testing the Classifier Against All Email Types
      3. Improving the Results
  6. 4. Ranking: Priority Inbox
    1. How Do You Sort Something When You Don’t Know the Order?
    2. Ordering Email Messages by Priority
      1. Priority Features of Email
    3. Writing a Priority Inbox
      1. Functions for Extracting the Feature Set
      2. Creating a Weighting Scheme for Ranking
      3. Weighting from Email Thread Activity
      4. Training and Testing the Ranker
  7. 5. Regression: Predicting Page Views
    1. Introducing Regression
      1. The Baseline Model
      2. Regression Using Dummy Variables
      3. Linear Regression in a Nutshell
    2. Predicting Web Traffic
    3. Defining Correlation
  8. 6. Regularization: Text Regression
    1. Nonlinear Relationships Between Columns: Beyond Straight Lines
      1. Introducing Polynomial Regression
    2. Methods for Preventing Overfitting
      1. Preventing Overfitting with Regularization
    3. Text Regression
      1. Logistic Regression to the Rescue
  9. 7. Optimization: Breaking Codes
    1. Introduction to Optimization
    2. Ridge Regression
    3. Code Breaking as Optimization
  10. 8. PCA: Building a Market Index
    1. Unsupervised Learning
  11. 9. MDS: Visually Exploring US Senator Similarity
    1. Clustering Based on Similarity
      1. A Brief Introduction to Distance Metrics and Multidimensional Scaling
    2. How Do US Senators Cluster?
      1. Analyzing US Senator Roll Call Data (101st–111th Congresses)
  12. 10. kNN: Recommendation Systems
    1. The k-Nearest Neighbors Algorithm
    2. R Package Installation Data
  13. 11. Analyzing Social Graphs
    1. Social Network Analysis
      1. Thinking Graphically
    2. Hacking Twitter Social Graph Data
      1. Working with the Google SocialGraph API
    3. Analyzing Twitter Networks
      1. Local Community Structure
      2. Visualizing the Clustered Twitter Network with Gephi
      3. Building Your Own “Who to Follow” Engine
  14. 12. Model Comparison
    1. SVMs: The Support Vector Machine
    2. Comparing Algorithms
  15. Works Cited
    1. Books
    2. Articles
  16. Index
  17. About the Authors
  18. Colophon
  19. Copyright

Chapter 10. kNN: Recommendation Systems

The k-Nearest Neighbors Algorithm

In the last chapter, we saw how we could use simple correlational techniques to create a measure of similarity between the members of Congress based on their voting records. In this chapter, we're going to talk about how you can use that same sort of similarity metric to recommend items to a website's users.

The algorithm we’ll use is called k-nearest neighbors. It’s arguably the most intuitive of all the machine learning algorithms we present in this book. Indeed, the simplest form of k-nearest neighbors is the sort of algorithm most people would spontaneously invent if asked to make recommendations using similarity data: they’d recommend the song that’s closest to the songs a user already likes but isn’t yet in that list. That intuition is essentially a 1-nearest neighbor algorithm. The full k-nearest neighbors algorithm generalizes this intuition by drawing on more than one data point before making a recommendation.
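
To make that intuition concrete, here is a minimal sketch of a 1-nearest neighbor recommendation in R. The songs, their two features, and the liked list are all invented for illustration; the point is simply to recommend the unliked song whose feature vector sits closest to something the user already likes.

    # Hypothetical data: each row is a song, with two invented audio features.
    songs <- data.frame(tempo  = c(120, 125, 90, 140),
                        energy = c(0.8, 0.7, 0.3, 0.9),
                        row.names = c("A", "B", "C", "D"))
    liked <- c("A")  # songs the user already likes (also invented)

    # Pairwise Euclidean distances between all songs.
    d <- as.matrix(dist(songs))

    # Among songs not yet in the liked list, find each one's distance to
    # its nearest liked song, then recommend the closest candidate.
    candidates <- setdiff(rownames(songs), liked)
    nearest <- sapply(candidates, function(s) min(d[s, liked]))
    names(which.min(nearest))  # "B": the unliked song nearest to "A"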

The full k-nearest neighbors algorithm works much the way some of us ask for recommendations from our friends: first we find people whose taste we feel we share, and then we ask several of them to recommend something to us. If many of them recommend the same thing, we infer that we’ll like it as well.
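
Here is a minimal sketch of that full procedure in R, again with invented data: a hypothetical 0/1 user-by-item matrix stands in for taste, Euclidean distance between rows stands in for any similarity metric (the correlations from the last chapter would work just as well), and the k most similar users vote on what to recommend.

    # Hypothetical 0/1 user-by-item matrix: 1 means the user likes the item.
    ratings <- matrix(c(1, 1, 0, 0,
                        1, 1, 1, 0,
                        1, 0, 1, 1,
                        0, 0, 1, 1),
                      nrow = 4, byrow = TRUE,
                      dimnames = list(paste0("user", 1:4),
                                      paste0("item", 1:4)))

    k <- 2
    me <- "user1"
    others <- setdiff(rownames(ratings), me)

    # Find the k users whose taste is closest to ours.
    d <- as.matrix(dist(ratings))[me, others]
    neighbors <- names(sort(d))[1:k]

    # Tally the neighbors' likes over items we haven't seen, and
    # recommend the item most of them like.
    unseen <- colnames(ratings)[ratings[me, ] == 0]
    votes <- colSums(ratings[neighbors, unseen, drop = FALSE])
    names(which.max(votes))

With k = 2 here, user1's two closest neighbors both like item3, so item3 is the recommendation.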

How can we take that intuition and transform it into something algorithmic? Before we work on making recommendations based on real data, let’s start with something ...
