The curse of dimensionality applies to our machine learning algorithms because, as the number of input dimensions grows, we need more data for the algorithm to generalise sufficiently well. Our algorithms try to separate data into classes based on the features; therefore, as the number of features increases, so does the number of datapoints we need. For this reason, we often have to be careful about what information we give to the algorithm, which means we need to understand something about the data in advance.
Moreover, the number of data points has to grow much faster than the number of features: to maintain the same sampling density of the input space, it must grow exponentially with the number of features.
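As a minimal sketch of this exponential growth (assuming we sample the unit hypercube on a regular grid; the helper name `samples_for_density` is illustrative, not from the text), the number of points needed to keep a fixed spacing along each axis is the per-axis count raised to the power of the dimension:

```python
def samples_for_density(d, bins_per_axis=10):
    """Points needed to cover [0, 1]^d with a grid of spacing
    1 / bins_per_axis along every axis: bins_per_axis ** d."""
    return bins_per_axis ** d

# The required sample count explodes as the dimension d grows.
for d in (1, 2, 3, 10):
    print(f"d = {d:2d}: {samples_for_density(d):,} samples")
```

With 10 bins per axis, one feature needs only 10 samples, three features need 1,000, and ten features already need 10 billion, which is why adding features without adding data quickly leaves the input space almost empty.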