Highlight from "Implementing a custom Hadoop key type", Hadoop MapReduce v2 Cookbook - Second Edition by Thilina Gunarathne (Packt Publishing, February 2015):

Hadoop uses HashPartitioner as the default partitioner implementation to calculate the distribution of the intermediate data to the Reducers. HashPartitioner…

Note: …means both the objects must be mapped to the same memory address.

Highlight link: http://www.safaribooksonline.com/a/hadoop-mapreduce-v2/2512434/
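The highlighted sentence is cut off, but the behavior it describes can be illustrated. Hadoop's actual class is org.apache.hadoop.mapreduce.lib.partition.HashPartitioner; the standalone sketch below only mirrors its getPartition arithmetic (mask the key's hashCode() to a non-negative value, then take it modulo the number of reducers) and is not the real Hadoop class.

```java
// Standalone sketch of the partitioning arithmetic used by Hadoop's
// default HashPartitioner (the real class is in
// org.apache.hadoop.mapreduce.lib.partition and takes Writable key/value
// types plus the reducer count).
public class HashPartitionerSketch {

    // Masking with Integer.MAX_VALUE clears the sign bit, so a key whose
    // hashCode() is negative still maps to a valid partition index in
    // [0, numReduceTasks).
    public static int getPartition(Object key, int numReduceTasks) {
        return (key.hashCode() & Integer.MAX_VALUE) % numReduceTasks;
    }

    public static void main(String[] args) {
        // The same key always lands in the same partition, which is what
        // guarantees that all values for a key reach a single reducer.
        System.out.println(getPartition("abc", 4));
        System.out.println(getPartition("abc", 4));
    }
}
```

Because the partition is derived from hashCode(), a custom Hadoop key type must implement hashCode() consistently with equals(): two keys that compare equal must produce the same hash code, or their records can be routed to different reducers.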