
Professional NoSQL

Professional NoSQL by Shashank Tiwari, published by Wrox
  1. Cover
  2. Contents
  3. Introduction
  4. Part I: Getting Started
    1. Chapter 1: NoSQL: What It Is and Why You Need It
      1. Definition and Introduction
      2. Sorted Ordered Column-Oriented Stores
      3. Key/Value Stores
      4. Document Databases
      5. Graph Databases
      6. Summary
    2. Chapter 2: Hello NoSQL: Getting Initial Hands-on Experience
      1. First Impressions — Examining Two Simple Examples
      2. Working with Language Bindings
      3. Summary
    3. Chapter 3: Interfacing and Interacting with NoSQL
      1. If No SQL, Then What?
      2. Language Bindings for NoSQL Data Stores
      3. Summary
  5. Part II: Learning the NoSQL Basics
    1. Chapter 4: Understanding the Storage Architecture
      1. Working with Column-Oriented Databases
      2. HBase Distributed Storage Architecture
      3. Document Store Internals
      4. Understanding Key/Value Stores in Memcached and Redis
      5. Eventually Consistent Non-relational Databases
      6. Summary
    2. Chapter 5: Performing CRUD Operations
      1. Creating Records
      2. Accessing Data
      3. Updating and Deleting Data
      4. Summary
    3. Chapter 6: Querying NoSQL Stores
      1. Similarities Between SQL and MongoDB Query Features
      2. Accessing Data from Column-Oriented Databases Like HBase
      3. Querying Redis Data Stores
      4. Summary
    4. Chapter 7: Modifying Data Stores and Managing Evolution
      1. Changing Document Databases
      2. Schema Evolution in Column-Oriented Databases
      3. HBase Data Import and Export
      4. Data Evolution in Key/Value Stores
      5. Summary
    5. Chapter 8: Indexing and Ordering Data Sets
      1. Essential Concepts Behind a Database Index
      2. Indexing and Ordering in MongoDB
      3. Creating and Using Indexes in MongoDB
      4. Indexing and Ordering in CouchDB
      5. Indexing in Apache Cassandra
      6. Summary
    6. Chapter 9: Managing Transactions and Data Integrity
      2. Distributed ACID Systems
      3. Upholding CAP
      4. Consistency Implementations in a Few NoSQL Products
      5. Summary
  6. Part III: Gaining Proficiency with NoSQL
    1. Chapter 10: Using NoSQL in the Cloud
      1. Google App Engine Data Store
      2. Amazon SimpleDB
      3. Summary
    2. Chapter 11: Scalable Parallel Processing with MapReduce
      1. Understanding MapReduce
      2. MapReduce with HBase
      3. MapReduce Possibilities and Apache Mahout
      4. Summary
    3. Chapter 12: Analyzing Big Data with Hive
      1. Hive Basics
      2. Back to Movie Ratings
      3. Good Old SQL
      4. JOIN(s) in Hive QL
      5. Summary
    4. Chapter 13: Surveying Database Internals
      1. MongoDB Internals
      2. Membase Architecture
      3. Hypertable Under the Hood
      4. Apache Cassandra
      5. Berkeley DB
      6. Summary
  7. Part IV: Mastering NoSQL
    1. Chapter 14: Choosing Among NoSQL Flavors
      1. Comparing NoSQL Products
      2. Benchmarking Performance
      3. Contextual Comparison
      4. Summary
    2. Chapter 15: Coexistence
      1. Using MySQL as a NoSQL Solution
      2. Mostly Immutable Data Stores
      3. Web Frameworks and NoSQL
      4. Migrating from RDBMS to NoSQL
      5. Summary
    3. Chapter 16: Performance Tuning
      1. Goals of Parallel Algorithms
      2. Influencing Equations
      3. Partitioning
      4. Scheduling in Heterogeneous Environments
      5. Additional MapReduce Tuning
      6. HBase Coprocessors
      7. Leveraging Bloom Filters
      8. Summary
    4. Chapter 17: Tools and Utilities
      1. RRDTool
      2. Nagios
      3. Scribe
      4. Flume
      5. Chukwa
      6. Pig
      7. Nodetool
      8. OpenTSDB
      9. Solandra
      10. Hummingbird and C5t
      11. GeoCouch
      12. Alchemy Database
      13. Webdis
      14. Summary
  8. Appendix: Installation and Setup Instructions

Chapter 11

Scalable Parallel Processing with MapReduce


  • Understanding the challenges of scalable parallel processing
  • Leveraging MapReduce for large-scale parallel processing
  • Exploring the concepts and nuances of the MapReduce computational model
  • Getting hands-on MapReduce experience using MongoDB, CouchDB, and HBase
  • Introducing Mahout, a MapReduce-based machine learning infrastructure

Manipulating large amounts of data requires tools and methods that can run operations in parallel with as few points of intersection among them as possible. Fewer points of intersection lead to fewer potential conflicts and less coordination overhead. Such parallel processing tools also need to keep data transfer to a minimum. I/O and bandwidth can often become bottlenecks that impede fast and efficient processing, and with large amounts of data these bottlenecks are amplified, potentially slowing a system down to the point where it becomes impractical to use. Therefore, for large-scale computations, keeping data local to the computation is of immense importance. Given these considerations, manipulating large data sets spread across multiple machines is neither trivial nor easy.
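
To make these ideas concrete, here is a minimal sketch in Python (not one of the chapter's MongoDB, CouchDB, or HBase examples) that counts words across several independently held chunks of text. Each map task touches only its own chunk, the intermediate pairs are grouped by key, and a single reduce step combines the counts; the chunk contents and function names are purely illustrative.

    # Illustrative sketch only: a word-count job expressed as independent map
    # tasks followed by one reduce phase. Each map task works solely on its own
    # chunk, so tasks share no state until the final grouping step.
    from collections import defaultdict
    from multiprocessing import Pool

    def map_task(chunk):
        """Emit (word, 1) pairs for one locally held chunk of text."""
        return [(word, 1) for word in chunk.split()]

    def reduce_task(grouped):
        """Sum the counts collected for each word."""
        return {word: sum(counts) for word, counts in grouped.items()}

    if __name__ == "__main__":
        # Hypothetical input chunks; in practice these would be blocks of a
        # much larger data set spread across machines.
        chunks = [
            "nosql stores scale out",
            "mapreduce keeps data local",
            "mapreduce stores intermediate pairs",
        ]
        with Pool() as pool:
            mapped = pool.map(map_task, chunks)   # map tasks run in parallel

        # Shuffle: group intermediate pairs by key before reducing.
        grouped = defaultdict(list)
        for pairs in mapped:
            for word, count in pairs:
                grouped[word].append(count)

        print(reduce_task(grouped))               # e.g. {'mapreduce': 2, 'stores': 2, ...}

In a real MapReduce deployment the chunks live on different machines and the shuffle moves only the comparatively small intermediate pairs rather than the raw data, which is exactly the data-locality property described above.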

Over the years, many methods have been developed to process large data sets. Initially, innovation focused on building supercomputers: extraordinarily powerful machines with greater-than-normal processing capabilities. These machines work well for specific and complicated ...
