MapReduce

MapReduce (MR) is a programming paradigm at the core of Hadoop. It scales data processing to massive volumes by distributing both the data and the computation across hundreds or thousands of nodes, giving the system horizontal scalability. As the name suggests, an MR job consists of two phases:

  • The map phase
  • The reduce phase

In the map phase, the dataset is divided into chunks (input splits), and each chunk is handed to an independent mapper process that produces intermediate results. These parallel mapper processes run independently on the available nodes in the cluster. Once the map tasks complete, their intermediate results are shuffled and sorted before the reduce tasks start. The reduce tasks again run independently on the available nodes, and the output of the entire computation is written out once all reduce tasks finish.
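
To make the two phases concrete, here is a minimal word-count sketch written against the classic Hadoop MapReduce Java API. The word-count task, class names, and input/output paths are illustrative placeholders rather than an example taken from this chapter.

// Minimal word-count sketch for Hadoop MapReduce (illustrative only).
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: each mapper processes its input split independently
  // and emits (word, 1) pairs.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: after the shuffle and sort, each reducer receives
  // all counts for a given word and sums them.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // optional local aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Reusing the reducer as a combiner is optional here; because summing counts is associative and commutative, it simply pre-aggregates results on the mapper side and reduces the volume of data moved during the shuffle.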
