How it works...

We began by loading the downloaded book and tokenizing it via a regular expression. The next step was to convert all tokens to lowercase and exclude stop words from our token list, followed by filtering out any words less than two characters long.
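The following is a minimal sketch of those preprocessing steps using Spark's RDD API, assuming a local text file and a small illustrative stop-word set; the file path, the stop-word list, and the exact tokenizing regular expression are placeholders rather than the ones used in the recipe:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .appName("BookWordCountSketch")
  .master("local[*]")
  .getOrCreate()

// Hypothetical path and stop-word list, for illustration only
val bookPath = "data/book.txt"
val stopWords = Set("the", "and", "a", "an", "of", "to", "in", "is", "it", "that")

val tokens = spark.sparkContext.textFile(bookPath)
  .flatMap(line => line.split("""\W+"""))     // tokenize on non-word characters
  .map(_.toLowerCase)                          // normalize to lowercase
  .filter(word => !stopWords.contains(word))   // exclude stop words
  .filter(_.length >= 2)                       // drop words shorter than two characters
```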

The removal of stop words and very short words reduces the number of features we have to process. It may not be obvious, but filtering out particular words based on such criteria reduces the number of dimensions our machine learning algorithms will later have to process.

Finally, we sorted the resulting word counts in descending order, took the top 25, and displayed them as a bar chart.
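A sketch of that final aggregation step, continuing from the `tokens` RDD above; the bar-chart call itself is omitted, since the plotting library depends on your environment:

```scala
// Count each word, sort by count in descending order, and take the top 25
val top25 = tokens
  .map(word => (word, 1))
  .reduceByKey(_ + _)
  .sortBy({ case (_, count) => count }, ascending = false)
  .take(25)

// Print the ranked word counts; in the recipe these values feed the bar chart
top25.foreach { case (word, count) => println(f"$word%-15s $count") }
```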
