Hadoop Operations

By Eric Sammer. Published by O'Reilly Media, Inc.
  1. Hadoop Operations
  2. Dedication
  3. Preface
    1. Conventions Used in This Book
    2. Using Code Examples
    3. Safari® Books Online
    4. How to Contact Us
    5. Acknowledgments
  4. 1. Introduction
  5. 2. HDFS
    1. Goals and Motivation
    2. Design
    3. Daemons
    4. Reading and Writing Data
      1. The Read Path
      2. The Write Path
    5. Managing Filesystem Metadata
    6. Namenode High Availability
    7. Namenode Federation
    8. Access and Integration
      1. Command-Line Tools
      2. FUSE
      3. REST Support
  6. 3. MapReduce
    1. The Stages of MapReduce
    2. Introducing Hadoop MapReduce
      1. Daemons
      2. When It All Goes Wrong
    3. YARN
  7. 4. Planning a Hadoop Cluster
    1. Picking a Distribution and Version of Hadoop
      1. Apache Hadoop
      2. Cloudera’s Distribution Including Apache Hadoop
      3. Versions and Features
      4. What Should I Use?
    2. Hardware Selection
      1. Master Hardware Selection
      2. Worker Hardware Selection
      3. Cluster Sizing
      4. Blades, SANs, and Virtualization
    3. Operating System Selection and Preparation
      1. Deployment Layout
      2. Software
      3. Hostnames, DNS, and Identification
      4. Users, Groups, and Privileges
    4. Kernel Tuning
      1. vm.swappiness
      2. vm.overcommit_memory
    5. Disk Configuration
      1. Choosing a Filesystem
      2. Mount Options
    6. Network Design
      1. Network Usage in Hadoop: A Review
      2. 1 Gb versus 10 Gb Networks
      3. Typical Network Topologies
  8. 5. Installation and Configuration
    1. Installing Hadoop
      1. Apache Hadoop
      2. CDH
    2. Configuration: An Overview
      1. The Hadoop XML Configuration Files
    3. Environment Variables and Shell Scripts
    4. Logging Configuration
    5. HDFS
      1. Identification and Location
      2. Optimization and Tuning
      3. Formatting the Namenode
      4. Creating a /tmp Directory
    6. Namenode High Availability
      1. Fencing Options
      2. Basic Configuration
      3. Automatic Failover Configuration
      4. Format and Bootstrap the Namenodes
    7. Namenode Federation
    8. MapReduce
      1. Identification and Location
      2. Optimization and Tuning
    9. Rack Topology
    10. Security
  9. 6. Identity, Authentication, and Authorization
    1. Identity
    2. Kerberos and Hadoop
      1. Kerberos: A Refresher
      2. Kerberos Support in Hadoop
    3. Authorization
      1. HDFS
      2. MapReduce
      3. Other Tools and Systems
    4. Tying It Together
  10. 7. Resource Management
    1. What Is Resource Management?
    2. HDFS Quotas
    3. MapReduce Schedulers
      1. The FIFO Scheduler
      2. The Fair Scheduler
      3. The Capacity Scheduler
      4. The Future
  11. 8. Cluster Maintenance
    1. Managing Hadoop Processes
      1. Starting and Stopping Processes with Init Scripts
      2. Starting and Stopping Processes Manually
    2. HDFS Maintenance Tasks
      1. Adding a Datanode
      2. Decommissioning a Datanode
      3. Checking Filesystem Integrity with fsck
      4. Balancing HDFS Block Data
      5. Dealing with a Failed Disk
    3. MapReduce Maintenance Tasks
      1. Adding a Tasktracker
      2. Decommissioning a Tasktracker
      3. Killing a MapReduce Job
      4. Killing a MapReduce Task
      5. Dealing with a Blacklisted Tasktracker
  12. 9. Troubleshooting
    1. Differential Diagnosis Applied to Systems
    2. Common Failures and Problems
      1. Humans (You)
      2. Misconfiguration
      3. Hardware Failure
      4. Resource Exhaustion
      5. Host Identification and Naming
      6. Network Partitions
    3. “Is the Computer Plugged In?”
      1. E-SPORE
    4. Treatment and Care
    5. War Stories
      1. A Mystery Bottleneck
      2. There’s No Place Like 127.0.0.1
  13. 10. Monitoring
    1. An Overview
    2. Hadoop Metrics
      1. Apache Hadoop 0.20.0 and CDH3 (metrics1)
      2. Apache Hadoop 0.20.203 and Later, and CDH4 (metrics2)
      3. What about SNMP?
    3. Health Monitoring
      1. Host-Level Checks
      2. All Hadoop Processes
      3. HDFS Checks
      4. MapReduce Checks
  14. 11. Backup and Recovery
    1. Data Backup
      1. Distributed Copy (distcp)
      2. Parallel Data Ingestion
    2. Namenode Metadata
  15. A. Deprecated Configuration Properties
  16. Index
  17. About the Author
  18. Colophon
  19. Copyright

Chapter 2. HDFS

Goals and Motivation

The first half of Apache Hadoop is a filesystem called the Hadoop Distributed Filesystem, or simply HDFS. HDFS was built to support high-throughput, streaming reads and writes of extremely large files. Traditional large storage area networks (SANs) and network-attached storage (NAS) offer centralized, low-latency access to either a block device or a filesystem on the order of terabytes in size. These systems are fantastic as the backing store for relational databases, content delivery systems, and similar types of data storage needs because they can support full-featured POSIX semantics, scale to meet the size requirements of these systems, and offer low-latency access to data. Imagine for a second, though, hundreds or thousands of machines all waking up at the same time and pulling hundreds of terabytes of data from a centralized storage system at once. This is where traditional storage doesn't necessarily scale.
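To see why, consider a rough back-of-the-envelope calculation (the figures here are illustrative assumptions, not benchmarks). If 1,000 worker machines each read at a modest 1 Gb/s, the central storage system and its network links must sustain, in aggregate:

    1,000 hosts x 1 Gb/s per host = 1,000 Gb/s
                                  = ~125 GB/s of concurrent read throughput

Few centralized SAN or NAS deployments can serve sequential I/O at that rate to all clients at once, whereas a scale-out design spreads the same demand across the local disks of every node.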

By creating a system composed of independent machines, each with its own I/O subsystem, disks, RAM, network interfaces, and CPUs, and by relaxing (and sometimes removing) some of the POSIX requirements, it is possible to build a system optimized, in both performance and cost, for the specific type of workload we're interested in. There are a number of specific goals for HDFS (a short sketch of the resulting access model follows the list):

  • Store millions of large files, each tens of gigabytes or larger, in a filesystem reaching tens of petabytes in total size.

  • Use a scale-out model based on inexpensive commodity ...
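The streaming access model these goals imply is visible even in the client API. What follows is a minimal, illustrative Java sketch of a sequential read through Hadoop's FileSystem API; the file path and buffer size are assumptions made for the example, and a client configuration (core-site.xml on the classpath) is presumed.

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class StreamingRead {
      public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration(); // loads core-site.xml/hdfs-site.xml from the classpath
        FileSystem fs = FileSystem.get(conf);     // connects to the configured default filesystem
        Path file = new Path("/data/example.log"); // hypothetical file, for illustration only

        FSDataInputStream in = fs.open(file);     // a seekable, but stream-oriented, handle
        byte[] buffer = new byte[64 * 1024];
        int bytesRead;
        try {
          // Large, sequential reads like this are the pattern HDFS is optimized for.
          while ((bytesRead = in.read(buffer)) != -1) {
            // process buffer[0..bytesRead) ...
          }
        } finally {
          in.close();
        }
      }
    }

Note that the client streams file data directly from the machines that store it rather than through a central server, a detail covered in The Read Path later in this chapter.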
