Principal Component Analysis (PCA) to pick the most effective latent factor for machine learning in Spark

In this recipe, we use PCA (Principal Component Analysis) to map higher-dimensional data (the apparent dimensions) to a lower-dimensional space (the actual dimensions). It is hard to believe, but PCA has its roots as early as 1901 (see K. Pearson's writings) and was developed again independently in the 1930s by H. Hotelling.

PCA picks new components so that the variance along mutually perpendicular axes is maximized, effectively transforming the high-dimensional original features into a lower-dimensional space of derived components that explain the variation (and discriminate classes) in a more concise form.

The intuition behind PCA is depicted in ...
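As a minimal sketch of the idea under discussion (not the recipe's own code), the following Scala snippet uses Spark ML's PCA transformer to project a few hypothetical 5-dimensional toy vectors onto the top 2 principal components; the feature values, column names, and the choice of k = 2 are illustrative assumptions only.

import org.apache.spark.ml.feature.PCA
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.SparkSession

object PCASketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("PCASketch")
      .master("local[*]")
      .getOrCreate()

    // Hypothetical 5-dimensional feature vectors (the "apparent" dimensions)
    val data = Seq(
      Vectors.dense(2.0, 0.0, 3.0, 4.0, 5.0),
      Vectors.dense(4.0, 0.0, 0.0, 6.0, 7.0),
      Vectors.dense(6.0, 1.0, 9.0, 8.0, 9.0)
    )
    val df = spark.createDataFrame(data.map(Tuple1.apply)).toDF("features")

    // Fit PCA and project onto the top 2 derived components (the "actual" dimensions)
    val pcaModel = new PCA()
      .setInputCol("features")
      .setOutputCol("pcaFeatures")
      .setK(2)
      .fit(df)

    // How much variance each principal component explains
    println(pcaModel.explainedVariance)

    pcaModel.transform(df).select("pcaFeatures").show(truncate = false)

    spark.stop()
  }
}

In practice, inspecting explainedVariance helps decide how many components (the value of k) are worth keeping before feeding the reduced features into a downstream model.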
