How it works...

In this section, we walked through constructing a simple machine learning pipeline with Spark. We began by creating a DataFrame composed of two groups of text documents and then proceeded to set up a pipeline.
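A minimal sketch of such a DataFrame in Scala might look as follows. The column names (`id`, `text`, `label`) and the sample rows are assumptions for illustration; the label distinguishes the two document groups:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .appName("PipelineExample")
  .master("local[*]")
  .getOrCreate()

// Hypothetical training data: label 1.0 vs. 0.0 marks the two groups of documents.
val training = spark.createDataFrame(Seq(
  (0L, "spark rdd dataframe dataset", 1.0),
  (1L, "mapreduce hadoop yarn cluster", 0.0),
  (2L, "spark mllib pipeline stages", 1.0),
  (3L, "hive pig hadoop jobs", 0.0)
)).toDF("id", "text", "label")
```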

First, we created a tokenizer to split each text document into terms, and then a HashingTF to convert those terms into feature vectors. Then, we created a logistic regression object to predict which group a new text document belongs to.
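Under the assumptions above, the three stages could be declared as shown below. The column names and the hyperparameter values (`setNumFeatures`, `setMaxIter`, `setRegParam`) are illustrative choices, not values taken from the recipe:

```scala
import org.apache.spark.ml.feature.{HashingTF, Tokenizer}
import org.apache.spark.ml.classification.LogisticRegression

// Tokenizer splits the "text" column into an array of terms in "words".
val tokenizer = new Tokenizer()
  .setInputCol("text")
  .setOutputCol("words")

// HashingTF hashes the terms into a fixed-length term-frequency feature vector.
val hashingTF = new HashingTF()
  .setNumFeatures(1000)
  .setInputCol(tokenizer.getOutputCol)
  .setOutputCol("features")

// Logistic regression predicts the group label from the feature vector.
val lr = new LogisticRegression()
  .setMaxIter(10)
  .setRegParam(0.001)
```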

Second, we constructed the pipeline by passing it an array specifying the three stages of execution. You will notice that each stage writes its result to a specified output column, while using the previous stage's output column as its input.
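Wiring the stages together amounts to passing them, in order, to a `Pipeline`. Continuing the sketch above (variable names assumed):

```scala
import org.apache.spark.ml.Pipeline

// Stages run in order: tokenizer -> hashingTF -> lr, each consuming
// the output column of the stage before it.
val pipeline = new Pipeline()
  .setStages(Array(tokenizer, hashingTF, lr))
```

Chaining the stages this way means the column wiring (`text` → `words` → `features`) is declared once, and the whole sequence can be fit or reused as a single estimator.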

Finally, we trained the ...
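The training step referred to here is, in the standard Spark ML API, a call to `fit` on the pipeline, which returns a fitted `PipelineModel` that can then transform new documents. A sketch, assuming the `pipeline` and `training` names from above:

```scala
// Fit the whole pipeline to the training DataFrame; this tokenizes,
// hashes features, and trains the logistic regression in one pass.
val model = pipeline.fit(training)

// The fitted model can then score previously unseen documents.
val predictions = model.transform(training)
predictions.select("id", "text", "prediction").show()
```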
