
Hadoop Operations

By Eric Sammer. Published by O'Reilly Media, Inc.
  1. Hadoop Operations
  2. Dedication
  3. Preface
    1. Conventions Used in This Book
    2. Using Code Examples
    3. Safari® Books Online
    4. How to Contact Us
    5. Acknowledgments
  4. 1. Introduction
  5. 2. HDFS
    1. Goals and Motivation
    2. Design
    3. Daemons
    4. Reading and Writing Data
      1. The Read Path
      2. The Write Path
    5. Managing Filesystem Metadata
    6. Namenode High Availability
    7. Namenode Federation
    8. Access and Integration
      1. Command-Line Tools
      2. FUSE
      3. REST Support
  6. 3. MapReduce
    1. The Stages of MapReduce
    2. Introducing Hadoop MapReduce
      1. Daemons
      2. When It All Goes Wrong
    3. YARN
  7. 4. Planning a Hadoop Cluster
    1. Picking a Distribution and Version of Hadoop
      1. Apache Hadoop
      2. Cloudera’s Distribution Including Apache Hadoop
      3. Versions and Features
      4. What Should I Use?
    2. Hardware Selection
      1. Master Hardware Selection
      2. Worker Hardware Selection
      3. Cluster Sizing
      4. Blades, SANs, and Virtualization
    3. Operating System Selection and Preparation
      1. Deployment Layout
      2. Software
      3. Hostnames, DNS, and Identification
      4. Users, Groups, and Privileges
    4. Kernel Tuning
      1. vm.swappiness
      2. vm.overcommit_memory
    5. Disk Configuration
      1. Choosing a Filesystem
      2. Mount Options
    6. Network Design
      1. Network Usage in Hadoop: A Review
      2. 1 Gb versus 10 Gb Networks
      3. Typical Network Topologies
  8. 5. Installation and Configuration
    1. Installing Hadoop
      1. Apache Hadoop
      2. CDH
    2. Configuration: An Overview
      1. The Hadoop XML Configuration Files
    3. Environment Variables and Shell Scripts
    4. Logging Configuration
    5. HDFS
      1. Identification and Location
      2. Optimization and Tuning
      3. Formatting the Namenode
      4. Creating a /tmp Directory
    6. Namenode High Availability
      1. Fencing Options
      2. Basic Configuration
      3. Automatic Failover Configuration
      4. Format and Bootstrap the Namenodes
    7. Namenode Federation
    8. MapReduce
      1. Identification and Location
      2. Optimization and Tuning
    9. Rack Topology
    10. Security
  9. 6. Identity, Authentication, and Authorization
    1. Identity
    2. Kerberos and Hadoop
      1. Kerberos: A Refresher
      2. Kerberos Support in Hadoop
    3. Authorization
      1. HDFS
      2. MapReduce
      3. Other Tools and Systems
    4. Tying It Together
  10. 7. Resource Management
    1. What Is Resource Management?
    2. HDFS Quotas
    3. MapReduce Schedulers
      1. The FIFO Scheduler
      2. The Fair Scheduler
      3. The Capacity Scheduler
      4. The Future
  11. 8. Cluster Maintenance
    1. Managing Hadoop Processes
      1. Starting and Stopping Processes with Init Scripts
      2. Starting and Stopping Processes Manually
    2. HDFS Maintenance Tasks
      1. Adding a Datanode
      2. Decommissioning a Datanode
      3. Checking Filesystem Integrity with fsck
      4. Balancing HDFS Block Data
      5. Dealing with a Failed Disk
    3. MapReduce Maintenance Tasks
      1. Adding a Tasktracker
      2. Decommissioning a Tasktracker
      3. Killing a MapReduce Job
      4. Killing a MapReduce Task
      5. Dealing with a Blacklisted Tasktracker
  12. 9. Troubleshooting
    1. Differential Diagnosis Applied to Systems
    2. Common Failures and Problems
      1. Humans (You)
      2. Misconfiguration
      3. Hardware Failure
      4. Resource Exhaustion
      5. Host Identification and Naming
      6. Network Partitions
    3. “Is the Computer Plugged In?”
      1. E-SPORE
    4. Treatment and Care
    5. War Stories
      1. A Mystery Bottleneck
      2. There’s No Place Like 127.0.0.1
  13. 10. Monitoring
    1. An Overview
    2. Hadoop Metrics
      1. Apache Hadoop 0.20.0 and CDH3 (metrics1)
      2. Apache Hadoop 0.20.203 and Later, and CDH4 (metrics2)
      3. What about SNMP?
    3. Health Monitoring
      1. Host-Level Checks
      2. All Hadoop Processes
      3. HDFS Checks
      4. MapReduce Checks
  14. 11. Backup and Recovery
    1. Data Backup
      1. Distributed Copy (distcp)
      2. Parallel Data Ingestion
    2. Namenode Metadata
  15. A. Deprecated Configuration Properties
  16. Index
  17. About the Author
  18. Colophon
  19. Copyright

Chapter 3. MapReduce

MapReduce refers to two distinct things: the programming model (covered here) and the specific implementation of the framework (covered later in Introducing Hadoop MapReduce). Designed to simplify the development of large-scale, distributed, fault-tolerant data processing applications, MapReduce is foremost a way of writing applications. In MapReduce, developers write jobs that consist primarily of a map function and a reduce function, and the framework handles the gory details of parallelizing the work, scheduling parts of the job on worker machines, monitoring for and recovering from failures, and so forth. Developers are shielded from having to implement complex and repetitious code and can instead focus on algorithms and business logic. User-provided code is invoked by the framework rather than the other way around. This is much like Java application servers that invoke servlets upon receiving an HTTP request; the container is responsible for setup and teardown as well as for providing a runtime environment for user-supplied code. Just as servlet authors need not implement the low-level details of socket I/O, event handling loops, and complex thread coordination, MapReduce developers program to a well-defined, simple interface and the “container” does the heavy lifting.
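
To make the division of labor concrete, the following sketch shows the canonical word-count job written against Hadoop's org.apache.hadoop.mapreduce API (the class names WordCount, TokenizerMapper, and IntSumReducer are illustrative, not taken from this chapter). The developer supplies only the map function, the reduce function, and a small driver; splitting the input, scheduling tasks, shuffling intermediate data, and retrying failed tasks are all left to the framework.

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // The map function: emit (word, 1) for every word in an input line.
      public static class TokenizerMapper
          extends Mapper<Object, Text, Text, IntWritable> {

        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer tokens = new StringTokenizer(value.toString());
          while (tokens.hasMoreTokens()) {
            word.set(tokens.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // The reduce function: sum the counts emitted for each word.
      public static class IntSumReducer
          extends Reducer<Text, IntWritable, Text, IntWritable> {

        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable value : values) {
            sum += value.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      // The driver: describe the job and hand it to the framework, which takes
      // care of parallelization, scheduling, and failure recovery.
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }

Note that nothing in this class opens a socket, starts a thread, or tracks which machine processes which block of data; the two user-supplied functions are pure per-record and per-key logic, which is precisely what makes the model easy to parallelize.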

The idea of MapReduce was introduced in a 2004 paper by two Google engineers, “MapReduce: Simplified Data Processing on Large Clusters” (J. Dean and S. Ghemawat). The paper describes ...
