In the previous recipes, we walked through the various steps of performing data analysis. In this recipe, let's download the Uber dataset and answer some of the analytical questions that arise from such data.
To step through this recipe, you will need a running Spark cluster in any one of the modes, that is, local, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Also, include the Spark MLlib package in the
build.sbt file so that sbt downloads the related libraries and the API can be used. Install Scala and Java, and optionally Hadoop.
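The build.sbt dependency stanza can be sketched as follows; the project name and the Spark and Scala versions shown here are illustrative assumptions and should match your installed cluster:

```scala
// build.sbt -- illustrative sketch; the version numbers are assumptions,
// align them with the Spark version running on your cluster.
name := "uber-analysis"

scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "2.2.0",
  "org.apache.spark" %% "spark-mllib" % "2.2.0"
)
```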
In this section, let's see how to analyse the Uber dataset.
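Before distributing the work on a cluster, the shape of the analysis can be sketched in plain Scala. The snippet below assumes the common Uber-Jan-Feb-FOIL.csv layout (columns: dispatching_base_number, date, active_vehicles, trips); the object and method names are hypothetical. It computes total trips per dispatching base, the same group-and-aggregate that Spark would perform with groupBy/reduceByKey over the full dataset:

```scala
// A minimal, Spark-free sketch of the aggregation this recipe performs.
// Assumed CSV layout: dispatching_base_number,date,active_vehicles,trips
object UberSketch {
  case class Record(base: String, date: String, activeVehicles: Int, trips: Int)

  // Parse one CSV line into a Record (no header handling, for brevity).
  def parse(line: String): Record = {
    val Array(base, date, vehicles, trips) = line.split(",")
    Record(base, date, vehicles.toInt, trips.toInt)
  }

  // Total trips per dispatching base -- the same group-by that Spark
  // distributes across partitions on a real cluster.
  def tripsPerBase(lines: Seq[String]): Map[String, Int] =
    lines.map(parse).groupBy(_.base).map {
      case (base, records) => base -> records.map(_.trips).sum
    }

  def main(args: Array[String]): Unit = {
    val sample = Seq(
      "B02512,1/1/2015,190,1132",
      "B02765,1/1/2015,225,1765",
      "B02512,1/2/2015,175,875"
    )
    // Aggregates the sample rows: B02512 -> 1132 + 875, B02765 -> 1765
    println(UberSketch.tripsPerBase(sample))
  }
}
```

On a cluster, the same question is answered by loading the CSV into an RDD or DataFrame and letting Spark shuffle the per-base partial sums, but the logic of the aggregation is exactly what this local sketch shows.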