Introduction

The curse of dimensionality is not a new term or concept. The term was originally coined by R. Bellman while tackling problems in dynamic programming (the Bellman equation). In machine learning, the core concept refers to the problem that as we increase the number of dimensions (axes or features), the number of training samples remains the same (or grows relatively slowly), which reduces the accuracy of our predictions. This phenomenon is also referred to as the Hughes Effect, after G. Hughes, and describes the problem caused by the rapid (exponential) growth of the search space as we introduce more and more dimensions to the problem space. It is a bit counterintuitive, but if the number of samples does not expand at the same rate as the number of dimensions, the data becomes increasingly sparse relative to the space it must cover, and the model's predictive accuracy degrades.
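A quick back-of-the-envelope calculation makes the exponential growth concrete. The following is a minimal sketch in plain Scala (no Spark required); the choice of 10 bins per feature and a fixed training set of 10,000 samples are illustrative assumptions, not values from the text:

```scala
object CurseOfDimensionality {
  def main(args: Array[String]): Unit = {
    val binsPerDim = 10       // hypothetical number of intervals per feature
    val samples    = 10000L   // fixed training-set size (illustrative)

    // The discretized feature space has binsPerDim^d cells, so the average
    // number of samples per cell shrinks exponentially as d grows.
    for (d <- 1 to 8) {
      val cells   = math.pow(binsPerDim, d)
      val density = samples / cells
      println(f"dimensions=$d%d  cells=$cells%.0f  samples per cell=$density%.4f")
    }
  }
}
```

With 10 bins per dimension, eight features already produce 10^8 cells, so 10,000 samples average only 0.0001 samples per cell; this sparsity is exactly why adding dimensions without adding data hurts prediction accuracy.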
