Introduction

Nearest-neighbor methods are rooted in a simple distance-based idea: we treat the training set itself as the model, and make predictions on new points based on how close they are to points in the training set. The most naive approach is to assign a new point the class of the single closest training point. But since most datasets contain some degree of noise, a more robust and common method is to take a (possibly distance-weighted) average over a set of k nearest neighbors. This method is called k-nearest neighbors (k-NN).

Given a training dataset (x1, x2, ..., xn) with corresponding targets (y1, y2, ..., yn), we can make a prediction on a new point, z, by looking at its set of nearest neighbors. The exact method of prediction depends on whether we are performing regression (continuous targets, where we average the neighbors' values) or classification (categorical targets, where the neighbors vote on a class).
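The prediction rule described above can be sketched in a few lines. The following is a minimal illustrative example in plain NumPy (not the book's TensorFlow implementation); the function name `knn_predict`, the unweighted Euclidean distance, and the equal weighting of neighbors are all simplifying assumptions for clarity:

```python
import numpy as np

def knn_predict(train_x, train_y, z, k=3, mode="classification"):
    """Predict the target for point z from its k nearest training points.

    A simplified sketch: plain Euclidean distance, neighbors weighted
    equally (a distance-weighted variant would scale each neighbor's
    contribution by 1/distance).
    """
    # Euclidean distance from z to every training point.
    dists = np.linalg.norm(train_x - z, axis=1)
    # Indices of the k closest training points.
    nearest = np.argsort(dists)[:k]
    if mode == "regression":
        # Continuous targets: average the neighbors' values.
        return np.mean(train_y[nearest])
    # Categorical targets: majority vote among the neighbors.
    values, counts = np.unique(train_y[nearest], return_counts=True)
    return values[np.argmax(counts)]
```

For example, with three training points clustered near the origin labeled 0 and one far point labeled 1, a query near the origin is predicted as class 0, since all three of its nearest neighbors carry that label.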
