Basic statistics

Let's read the car mileage data and compute some basic statistics. In Spark 2.0.0, the DataFrameReader can read CSV files directly and create Datasets, and the Dataset API provides a describe() function that calculates the count, mean, standard deviation, minimum, and maximum for each numeric column. For correlation and covariance, we use the stat.corr() and stat.cov() methods. Spark 2.0.0 Datasets have made this kind of statistics work a lot easier.
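To make this concrete, here is a minimal sketch of these calls. The file name car-mileage.csv and the column names mpg and hp are placeholders for illustration; substitute the actual file and columns from the fdps-v3/data directory.

// A minimal sketch, assuming Spark 2.0.0 and a hypothetical data file
// fdps-v3/data/car-mileage.csv with numeric columns "mpg" and "hp";
// adjust the path and column names to match the actual data.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("BasicStatistics")
  .master("local[*]")
  .getOrCreate()

// DataFrameReader reads the CSV, using the header row and inferring types
val cars = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("fdps-v3/data/car-mileage.csv")

// count, mean, stddev, min, and max for the selected numeric columns
cars.describe("mpg", "hp").show()

// Pearson correlation and covariance between two columns
println("corr(hp, mpg) = " + cars.stat.corr("hp", "mpg"))
println("cov(hp, mpg)  = " + cars.stat.cov("hp", "mpg"))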

Now let's run the program, walk through the code, and compare the results.

The code files are in fdps-v3/code and the data files in fdps-v3/data. You can run the code either from a Scala IDE or directly from the Spark shell.

Start the Spark shell from the bin directory of your Spark installation:

/Volumes/sdxc-01/spark-2.0.0/bin/spark-shell ...
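Once the shell comes up, a SparkSession is already available as spark, so the same statements can be typed at the prompt without building a session. Again, the file path and column names below are placeholders; use the actual ones from fdps-v3/data.

scala> val cars = spark.read.option("header", "true").option("inferSchema", "true").csv("fdps-v3/data/car-mileage.csv")
scala> cars.describe("mpg", "hp").show()
scala> cars.stat.corr("hp", "mpg")
scala> cars.stat.cov("hp", "mpg")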
