Hadoop Application Architectures

Book Description

Get expert guidance on architecting end-to-end data management solutions with Apache Hadoop. While many sources explain how to use various components in the Hadoop ecosystem, this practical book takes you through the architectural considerations necessary to tie those components together into a complete, tailored application based on your particular use case.

Table of Contents

  1. Foreword
  2. Preface
    1. A Note About the Code Examples
    2. Who Should Read This Book
    3. Why We Wrote This Book
    4. Navigating This Book
    5. Conventions Used in This Book
    6. Using Code Examples
    7. Safari® Books Online
    8. How to Contact Us
    9. Acknowledgments
  3. I. Architectural Considerations for Hadoop Applications
  4. 1. Data Modeling in Hadoop
    1. Data Storage Options
      1. Standard File Formats
      2. Hadoop File Types
      3. Serialization Formats
      4. Columnar Formats
      5. Compression
    2. HDFS Schema Design
      1. Location of HDFS Files
      2. Advanced HDFS Schema Design
      3. HDFS Schema Design Summary
    3. HBase Schema Design
      1. Row Key
      2. Timestamp
      3. Hops
      4. Tables and Regions
      5. Using Columns
      6. Using Column Families
      7. Time-to-Live
    4. Managing Metadata
      1. What Is Metadata?
      2. Why Care About Metadata?
      3. Where to Store Metadata?
      4. Examples of Managing Metadata
      5. Limitations of the Hive Metastore and HCatalog
      6. Other Ways of Storing Metadata
    5. Conclusion
  5. 2. Data Movement
    1. Data Ingestion Considerations
      1. Timeliness of Data Ingestion
      2. Incremental Updates
      3. Access Patterns
      4. Original Source System and Data Structure
      5. Transformations
      6. Network Bottlenecks
      7. Network Security
      8. Push or Pull
      9. Failure Handling
      10. Level of Complexity
    2. Data Ingestion Options
      1. File Transfers
      2. Considerations for File Transfers versus Other Ingest Methods
      3. Sqoop: Batch Transfer Between Hadoop and Relational Databases
      4. Flume: Event-Based Data Collection and Processing
      5. Kafka
    3. Data Extraction
    4. Conclusion
  6. 3. Processing Data in Hadoop
    1. MapReduce
      1. MapReduce Overview
      2. Example for MapReduce
      3. When to Use MapReduce
    2. Spark
      1. Spark Overview
      2. Overview of Spark Components
      3. Basic Spark Concepts
      4. Benefits of Using Spark
      5. Spark Example
      6. When to Use Spark
    3. Abstractions
      1. Pig
      2. Pig Example
      3. When to Use Pig
    4. Crunch
      1. Crunch Example
      2. When to Use Crunch
    5. Cascading
      1. Cascading Example
      2. When to Use Cascading
    6. Hive
      1. Hive Overview
      2. Example of Hive Code
      3. When to Use Hive
    7. Impala
      1. Impala Overview
      2. Speed-Oriented Design
      3. Impala Example
      4. When to Use Impala
    8. Conclusion
  7. 4. Common Hadoop Processing Patterns
    1. Pattern: Removing Duplicate Records by Primary Key
      1. Data Generation for Deduplication Example
      2. Code Example: Spark Deduplication in Scala
      3. Code Example: Deduplication in SQL
    2. Pattern: Windowing Analysis
      1. Data Generation for Windowing Analysis Example
      2. Code Example: Peaks and Valleys in Spark
      3. Code Example: Peaks and Valleys in SQL
    3. Pattern: Time Series Modifications
      1. Use HBase and Versioning
      2. Use HBase with a RowKey of RecordKey and StartTime
      3. Use HDFS and Rewrite the Whole Table
      4. Use Partitions on HDFS for Current and Historical Records
      5. Data Generation for Time Series Example
      6. Code Example: Time Series in Spark
      7. Code Example: Time Series in SQL
    4. Conclusion
  8. 5. Graph Processing on Hadoop
    1. What Is a Graph?
    2. What Is Graph Processing?
    3. How Do You Process a Graph in a Distributed System?
      1. The Bulk Synchronous Parallel Model
      2. BSP by Example
    4. Giraph
      1. Read and Partition the Data
      2. Batch Process the Graph with BSP
      3. Write the Graph Back to Disk
      4. Putting It All Together
      5. When Should You Use Giraph?
    5. GraphX
      1. Just Another RDD
      2. GraphX Pregel Interface
      3. vprog()
      4. sendMessage()
      5. mergeMessage()
    6. Which Tool to Use?
    7. Conclusion
  9. 6. Orchestration
    1. Why We Need Workflow Orchestration
    2. The Limits of Scripting
    3. The Enterprise Job Scheduler and Hadoop
    4. Orchestration Frameworks in the Hadoop Ecosystem
    5. Oozie Terminology
    6. Oozie Overview
    7. Oozie Workflow
    8. Workflow Patterns
      1. Point-to-Point Workflow
      2. Fan-Out Workflow
      3. Capture-and-Decide Workflow
    9. Parameterizing Workflows
    10. Classpath Definition
    11. Scheduling Patterns
      1. Frequency Scheduling
      2. Time and Data Triggers
    12. Executing Workflows
    13. Conclusion
  10. 7. Near-Real-Time Processing with Hadoop
    1. Stream Processing
    2. Apache Storm
      1. Storm High-Level Architecture
      2. Storm Topologies
      3. Tuples and Streams
      4. Spouts and Bolts
      5. Stream Groupings
      6. Reliability of Storm Applications
      7. Exactly-Once Processing
      8. Fault Tolerance
      9. Integrating Storm with HDFS
      10. Integrating Storm with HBase
      11. Storm Example: Simple Moving Average
      12. Evaluating Storm
    3. Trident
      1. Trident Example: Simple Moving Average
      2. Evaluating Trident
    4. Spark Streaming
      1. Overview of Spark Streaming
      2. Spark Streaming Example: Simple Count
      3. Spark Streaming Example: Multiple Inputs
      4. Spark Streaming Example: Maintaining State
      5. Spark Streaming Example: Windowing
      6. Spark Streaming Example: Streaming versus ETL Code
      7. Evaluating Spark Streaming
    5. Flume Interceptors
    6. Which Tool to Use?
      1. Low-Latency Enrichment, Validation, Alerting, and Ingestion
      2. NRT Counting, Rolling Averages, and Iterative Processing
      3. Complex Data Pipelines
    7. Conclusion
  11. II. Case Studies
  12. 8. Clickstream Analysis
    1. Defining the Use Case
    2. Using Hadoop for Clickstream Analysis
    3. Design Overview
    4. Storage
    5. Ingestion
      1. The Client Tier
      2. The Collector Tier
    6. Processing
      1. Data Deduplication
      2. Sessionization
    7. Analyzing
    8. Orchestration
    9. Conclusion
  13. 9. Fraud Detection
    1. Continuous Improvement
    2. Taking Action
    3. Architectural Requirements of Fraud Detection Systems
    4. Introducing Our Use Case
    5. High-Level Design
    6. Client Architecture
    7. Profile Storage and Retrieval
      1. Caching
      2. HBase Data Definition
      3. Delivering Transaction Status: Approved or Denied?
    8. Ingest
      1. Path Between the Client and Flume
    9. Near-Real-Time and Exploratory Analytics
    10. Near-Real-Time Processing
    11. Exploratory Analytics
    12. What About Other Architectures?
      1. Flume Interceptors
      2. Kafka to Storm or Spark Streaming
      3. External Business Rules Engine
    13. Conclusion
  14. 10. Data Warehouse
    1. Using Hadoop for Data Warehousing
    2. Defining the Use Case
    3. OLTP Schema
    4. Data Warehouse: Introduction and Terminology
    5. Data Warehousing with Hadoop
    6. High-Level Design
      1. Data Modeling and Storage
      2. Ingestion
      3. Data Processing and Access
      4. Aggregations
      5. Data Export
      6. Orchestration
    7. Conclusion
  15. A. Joins in Impala
    1. Broadcast Joins
    2. Partitioned Hash Join
  16. Index