Creating a Spark-based Notebook and creating the Spark session

Zeppelin comes with a Spark interpreter out of the box that automatically creates a Spark session for the notebook. You can access the session through the spark variable, as shown in the following lines:
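For example, a minimal Zeppelin paragraph (assuming the default %spark interpreter) can confirm that the session is available by printing the Spark version:

%spark

// The Zeppelin Spark interpreter exposes the session as the variable "spark"
println(spark.version)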

Load the time-series data from a CSV file and create a Spark DataFrame:

val tempDF = spark.read.option("header", "true").csv("/Users/k/samples/testdata/temperature.csv")
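To verify that the file loaded as expected (the path above comes from the author's environment), you can inspect the schema and preview a few rows; a quick sanity-check sketch:

// Without the inferSchema option, every CSV column is read as a string
tempDF.printSchema()

// Preview the first five rows
tempDF.show(5)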

After the data is loaded, Spark creates a DataFrame. A DataFrame is an in-memory, columnar representation of the data that gives you the flexibility to run SQL statements against it.
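For instance, registering the DataFrame as a temporary view lets you query it with SQL; a minimal sketch, assuming the Date and temperature columns from the CSV above:

// Register the DataFrame so it can be referenced from SQL
tempDF.createOrReplaceTempView("temperature")

// Run a SQL query against the registered view
spark.sql("SELECT Date, temperature FROM temperature LIMIT 10").show()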

Select the data from the Spark DataFrame:

val data = tempDF.select(tempDF("Date").as("timestamp"), tempDF("temperature").as("temperature"), ...
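The select statement above is truncated; a complete sketch, under the assumption that only the Date and temperature columns are needed (the cast to double is an addition, since CSV columns load as strings), might look like this:

// Rename Date to timestamp and cast temperature to a numeric type
val data = tempDF.select(
  tempDF("Date").as("timestamp"),
  tempDF("temperature").cast("double").as("temperature"))

data.show(5)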
