Chunking is shallow parsing: instead of recovering the deep structure of a sentence, we group together chunks of the sentence that carry some meaning. A chunk is defined as the minimal unit that can be processed. The conventional chunking pipeline is to tokenize and POS-tag the input string before handing it to the chunker.
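As an illustration of this pipeline, here is a minimal sketch using NLTK's `RegexpParser` (an assumption; the text does not name a specific chunker). The tokenize and POS-tag steps are represented by a hand-tagged sentence so the example stays self-contained, and a simple regular-expression grammar then groups determiner–adjective–noun sequences into noun-phrase chunks:

```python
import nltk

# (token, POS-tag) pairs; in practice these would come from running a
# tokenizer and a POS tagger over the raw input string first
tagged = [("The", "DT"), ("little", "JJ"), ("yellow", "JJ"),
          ("dog", "NN"), ("barked", "VBD"), ("at", "IN"),
          ("the", "DT"), ("cat", "NN")]

# Chunk grammar: a noun phrase (NP) is an optional determiner,
# any number of adjectives, then a noun
grammar = "NP: {<DT>?<JJ>*<NN>}"
chunker = nltk.RegexpParser(grammar)
tree = chunker.parse(tagged)

# Collect the flat NP chunks found by the chunker
nps = [" ".join(word for word, tag in subtree.leaves())
       for subtree in tree.subtrees() if subtree.label() == "NP"]
print(nps)  # ['The little yellow dog', 'the cat']
```

Each chunk here is a flat, non-overlapping span; the parser makes no attempt to nest phrases inside one another, which is exactly what distinguishes shallow parsing from full parsing.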
To step through this recipe, you will need a running Spark cluster, either in pseudo-distributed mode or in one of the distributed modes, that is, standalone, YARN, or Mesos. For installing Spark on a standalone cluster, please refer to http://spark.apache.org/docs/latest/spark-standalone.html. Install Scala and Java, and optionally Hadoop.