Loading and saving data

This recipe shows how Spark supports a wide range of input and output sources. Spark makes it simple to load and save data in many file formats, ranging from unstructured (plain text), to semi-structured (JSON), to structured (SequenceFiles).

Getting ready

To step through this recipe, you will need a running Spark cluster, either in pseudo-distributed mode or in one of the distributed modes, that is, standalone, YARN, or Mesos. The reader is also expected to have a basic understanding of text files, JSON, CSV, SequenceFiles, and object files.

How to do it…

  1. Load and save a text file as follows:
     val input = sc.textFile("hdfs://namenodeHostName:8020/repos/spark/README.md")
     val wholeInput = sc.wholeTextFiles("file:///home/padma/salesFiles")
     ...
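The snippet above shows the loading half of the step; the saving half can be sketched as follows, assuming a Spark shell where `sc` is the `SparkContext` and with placeholder HDFS paths:

```scala
// textFile returns an RDD[String], one element per line of the input.
val input = sc.textFile("hdfs://namenodeHostName:8020/repos/spark/README.md")

// wholeTextFiles returns an RDD[(String, String)] of (filePath, fileContents),
// useful when each file should be treated as a single record.
val wholeInput = sc.wholeTextFiles("file:///home/padma/salesFiles")

// A simple transformation before writing back out.
val upper = input.map(_.toUpperCase)

// saveAsTextFile writes the RDD as a directory of part-NNNNN files;
// the output directory must not already exist.
upper.saveAsTextFile("hdfs://namenodeHostName:8020/output/readmeUpper")
```

Note that `saveAsTextFile` takes a directory, not a single file: each partition of the RDD is written as its own part file inside that directory.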
