The RDD API is a critical toolkit for Spark developers because it offers low-level control over data within a functional programming paradigm. What makes RDDs powerful, however, also makes them harder for new programmers to work with. While the RDD API and manual optimization techniques (for example, calling filter() before a groupBy() operation) may be easy to understand, writing advanced code requires consistent practice and fluency.
When data files, blocks, or data structures are converted to RDDs, the data is broken into smaller units called partitions (similar to splits in Hadoop) and distributed across the nodes so they can be operated on in parallel. Spark provides this functionality right out ...