Developing an ML pipeline

The following example walks through the steps required to create the machine learning pipeline used in the training process. After the model is trained, it is used for predictions: the fitted pipeline is reapplied so that feature extraction and prediction run automatically on the input data using Spark.

Create a Spark DataFrame for the input data by doing the following:

from pyspark.ml import Pipeline
from pyspark.ml.classification import LogisticRegression
from pyspark.ml.feature import HashingTF, Tokenizer
from pyspark.sql import SparkSession

# Create (or reuse) the SparkSession referenced below.
spark = SparkSession.builder.appName("iiot-pipeline").getOrCreate()

# Training data: an id, a text field, and a binary label.
training = spark.createDataFrame([
        (0, "test iiot", 1.0),
        (1, "validate", 0.0),
        (2, "train iiot validate", 1.0),
        (3, "gartner test", 0.0)
    ], ["id", "data", "label"])
training.show()

The training ...
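Building on the imports above, the following is a minimal sketch of how the Tokenizer, HashingTF, and LogisticRegression stages are typically chained into a Pipeline, fit on the training DataFrame, and then applied to new data. The stage parameters and test rows are illustrative assumptions, not values from the excerpt.

# Assemble the pipeline: tokenize the text, hash the terms into feature vectors,
# then classify with logistic regression.
tokenizer = Tokenizer(inputCol="data", outputCol="words")
hashingTF = HashingTF(inputCol=tokenizer.getOutputCol(), outputCol="features")
lr = LogisticRegression(maxIter=10, regParam=0.001)
pipeline = Pipeline(stages=[tokenizer, hashingTF, lr])

# Fit the entire pipeline on the training DataFrame.
model = pipeline.fit(training)

# Score new input data; the fitted pipeline re-applies feature extraction automatically.
test = spark.createDataFrame([
        (4, "iiot sensor test"),
        (5, "gartner report")
    ], ["id", "data"])
prediction = model.transform(test)
prediction.select("id", "data", "probability", "prediction").show()

Because the fitted PipelineModel bundles every stage, new data only needs to arrive in the same "data" column; tokenization and hashing are rerun on it without any additional code.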
