Optimizing the level of parallelism

Optimizing the level of parallelism is essential to fully utilize the cluster's capacity. When reading from HDFS, Spark creates one partition per InputSplit by default, and the number of InputSplits is, in most cases, the same as the number of HDFS blocks.
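As a quick check of this default, here is a minimal sketch (using the HDFS path from this recipe) that loads a file without specifying a partition count and inspects the result:

    scala> val words = sc.textFile("hdfs://localhost:9000/user/hduser/words")
    scala> // one partition per InputSplit, typically one per HDFS block
    scala> words.partitions.length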

In this recipe, we will cover different ways to optimize the number of partitions.

How to do it…

Specify the number of partitions when loading a file into an RDD with the following steps:

  1. Start the Spark shell:
    $ spark-shell
    
  2. Load the RDD with a custom number of partitions as the second parameter (a verification sketch follows these steps):
    scala> sc.textFile("hdfs://localhost:9000/user/hduser/words", 10)
    
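To verify the result, here is a minimal sketch continuing in the same shell session; note that the second parameter of textFile is minPartitions, a lower bound, so Hadoop may produce more splits than requested:

    scala> val words = sc.textFile("hdfs://localhost:9000/user/hduser/words", 10)
    scala> // minPartitions = 10 is a minimum, not an exact count
    scala> words.partitions.length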

Another approach is to change the default parallelism by performing the following steps:

  1. Start the Spark shell with a new value for the default parallelism (here, 10, to match the earlier example):
    $ spark-shell --conf spark.default.parallelism=10
    

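Operations that do not take an explicit partition count pick this value up. A minimal sketch, assuming the shell was started with spark.default.parallelism=10 as shown above:

    scala> sc.defaultParallelism
    scala> // parallelize defaults its numSlices to sc.defaultParallelism
    scala> sc.parallelize(1 to 100).partitions.length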