
Hadoop Operations

By Eric Sammer. Published by O'Reilly Media, Inc.
  1. Hadoop Operations
  2. Dedication
  3. Preface
    1. Conventions Used in This Book
    2. Using Code Examples
    3. Safari® Books Online
    4. How to Contact Us
    5. Acknowledgments
  4. 1. Introduction
  5. 2. HDFS
    1. Goals and Motivation
    2. Design
    3. Daemons
    4. Reading and Writing Data
      1. The Read Path
      2. The Write Path
    5. Managing Filesystem Metadata
    6. Namenode High Availability
    7. Namenode Federation
    8. Access and Integration
      1. Command-Line Tools
      2. FUSE
      3. REST Support
  6. 3. MapReduce
    1. The Stages of MapReduce
    2. Introducing Hadoop MapReduce
      1. Daemons
      2. When It All Goes Wrong
    3. YARN
  7. 4. Planning a Hadoop Cluster
    1. Picking a Distribution and Version of Hadoop
      1. Apache Hadoop
      2. Cloudera’s Distribution Including Apache Hadoop
      3. Versions and Features
      4. What Should I Use?
    2. Hardware Selection
      1. Master Hardware Selection
      2. Worker Hardware Selection
      3. Cluster Sizing
      4. Blades, SANs, and Virtualization
    3. Operating System Selection and Preparation
      1. Deployment Layout
      2. Software
      3. Hostnames, DNS, and Identification
      4. Users, Groups, and Privileges
    4. Kernel Tuning
      1. vm.swappiness
      2. vm.overcommit_memory
    5. Disk Configuration
      1. Choosing a Filesystem
      2. Mount Options
    6. Network Design
      1. Network Usage in Hadoop: A Review
      2. 1 Gb versus 10 Gb Networks
      3. Typical Network Topologies
  8. 5. Installation and Configuration
    1. Installing Hadoop
      1. Apache Hadoop
      2. CDH
    2. Configuration: An Overview
      1. The Hadoop XML Configuration Files
    3. Environment Variables and Shell Scripts
    4. Logging Configuration
    5. HDFS
      1. Identification and Location
      2. Optimization and Tuning
      3. Formatting the Namenode
      4. Creating a /tmp Directory
    6. Namenode High Availability
      1. Fencing Options
      2. Basic Configuration
      3. Automatic Failover Configuration
      4. Format and Bootstrap the Namenodes
    7. Namenode Federation
    8. MapReduce
      1. Identification and Location
      2. Optimization and Tuning
    9. Rack Topology
    10. Security
  9. 6. Identity, Authentication, and Authorization
    1. Identity
    2. Kerberos and Hadoop
      1. Kerberos: A Refresher
      2. Kerberos Support in Hadoop
    3. Authorization
      1. HDFS
      2. MapReduce
      3. Other Tools and Systems
    4. Tying It Together
  10. 7. Resource Management
    1. What Is Resource Management?
    2. HDFS Quotas
    3. MapReduce Schedulers
      1. The FIFO Scheduler
      2. The Fair Scheduler
      3. The Capacity Scheduler
      4. The Future
  11. 8. Cluster Maintenance
    1. Managing Hadoop Processes
      1. Starting and Stopping Processes with Init Scripts
      2. Starting and Stopping Processes Manually
    2. HDFS Maintenance Tasks
      1. Adding a Datanode
      2. Decommissioning a Datanode
      3. Checking Filesystem Integrity with fsck
      4. Balancing HDFS Block Data
      5. Dealing with a Failed Disk
    3. MapReduce Maintenance Tasks
      1. Adding a Tasktracker
      2. Decommissioning a Tasktracker
      3. Killing a MapReduce Job
      4. Killing a MapReduce Task
      5. Dealing with a Blacklisted Tasktracker
  12. 9. Troubleshooting
    1. Differential Diagnosis Applied to Systems
    2. Common Failures and Problems
      1. Humans (You)
      2. Misconfiguration
      3. Hardware Failure
      4. Resource Exhaustion
      5. Host Identification and Naming
      6. Network Partitions
    3. “Is the Computer Plugged In?”
      1. E-SPORE
    4. Treatment and Care
    5. War Stories
      1. A Mystery Bottleneck
      2. There’s No Place Like
  13. 10. Monitoring
    1. An Overview
    2. Hadoop Metrics
      1. Apache Hadoop 0.20.0 and CDH3 (metrics1)
      2. Apache Hadoop 0.20.203 and Later, and CDH4 (metrics2)
      3. What about SNMP?
    3. Health Monitoring
      1. Host-Level Checks
      2. All Hadoop Processes
      3. HDFS Checks
      4. MapReduce Checks
  14. 11. Backup and Recovery
    1. Data Backup
      1. Distributed Copy (distcp)
      2. Parallel Data Ingestion
    2. Namenode Metadata
  15. A. Deprecated Configuration Properties
  16. Index
  17. About the Author
  18. Colophon
  19. Copyright

Chapter 11. Backup and Recovery

Data Backup

After accumulating a few petabytes of data or so, someone inevitably asks how all this data is going to be backed up. It’s a deceptively difficult problem when working with such a large repository of data. Even simple questions, such as knowing what has changed since the last backup, can be hard to answer when new data arrives at a high rate in a sufficiently large cluster. All backup solutions need to deal explicitly with a few key concerns. Selecting the data that should be backed up is a two-dimensional problem: the critical datasets must be chosen, and within each dataset, so must the subset of the data that has not yet been backed up. The timeliness of backups is another important question. Data can be backed up less frequently, in larger batches, but this widens the window of possible data loss. Ratcheting up the frequency of backups may not be feasible because of the overhead it incurs. Finally, one of the most difficult problems to tackle is backup consistency. Copying data while it is changing can result in an invalid backup, so some knowledge of how the application interacts with the underlying filesystem is necessary. Those with experience administering relational databases are intimately aware of the problems with simply copying data out from under a running system.
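For the common case where data lands in time-partitioned directories, the question of what has changed since the last backup can often be reduced to copying only the partitions written since the last run. The following is a minimal sketch using the distcp tool covered later in this chapter; the /data/events layout, the prod-nn and backup-nn hostnames, and the one-day granularity are illustrative assumptions rather than a prescription.

    # Assumes data is organized as /data/events/YYYY/MM/DD; hostnames are examples.
    $ yesterday=$(date -d yesterday +%Y/%m/%d)
    $ hadoop distcp -p -update \
        hdfs://prod-nn:8020/data/events/$yesterday \
        hdfs://backup-nn:8020/data/events/$yesterday

Running something like this once a day addresses timeliness only up to a one-day loss window, and it says nothing about consistency: the copy is safe only if the source partition is closed to writers (for example, written once and never appended to) before it is copied.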

The act of taking a backup implies the execution of a batch operation that (usually) ...
