Structured Parallel Programming

Book Description

Programming is now parallel programming. Much as structured programming revolutionized traditional serial programming decades ago, a new kind of structured programming, based on patterns, is relevant to parallel programming today. Parallel computing experts and industry insiders Michael McCool, Arch Robison, and James Reinders describe how to design and implement maintainable and efficient parallel algorithms using a pattern-based approach. They present both theory and practice, and give detailed concrete examples using multiple programming models. Examples are given primarily in two of the most popular and cutting-edge programming models for parallel programming: Threading Building Blocks (TBB) and Cilk Plus. These architecture-independent models enable easy integration into existing applications, preserve investments in existing code, and speed the development of parallel applications. Examples from realistic contexts illustrate patterns and themes in parallel algorithm design that are widely applicable regardless of implementation technology.



  • The pattern-based approach offers structure and insight that developers can apply to a variety of parallel programming models
  • Develops a composable, structured, scalable, and machine-independent approach to parallel computing
  • Includes detailed examples in both Cilk Plus and the latest Threading Building Blocks, which support a wide variety of computers

Table of Contents

  • Cover image
  • Title page
  • Table of Contents
  • Copyright
  • Listings
  • Preface
  • Preliminaries
  • Chapter 1. Introduction
    • 1.1 Think Parallel
    • 1.2 Performance
    • 1.3 Motivation: Pervasive Parallelism
    • 1.4 Structured Pattern-Based Programming
    • 1.5 Parallel Programming Models
    • 1.6 Organization of this Book
    • 1.7 Summary
  • Chapter 2. Background
    • 2.1 Vocabulary and Notation
    • 2.2 Strategies
    • 2.3 Mechanisms
    • 2.4 Machine Models
    • 2.5 Performance Theory
    • 2.6 Pitfalls
    • 2.7 Summary
  • PART I. Patterns
    • Chapter 3. Patterns
      • 3.1 Nesting Pattern
      • 3.2 Structured Serial Control Flow Patterns
      • 3.3 Parallel Control Patterns
      • 3.4 Serial Data Management Patterns
      • 3.5 Parallel Data Management Patterns
      • 3.6 Other Parallel Patterns
      • 3.7 Non-Deterministic Patterns
      • 3.8 Programming Model Support for Patterns
      • 3.9 Summary
    • Chapter 4. Map
      • 4.1 Map
      • 4.2 Scaled Vector Addition (SAXPY)
      • 4.3 Mandelbrot
      • 4.4 Sequence of Maps versus Map of Sequence
      • 4.5 Comparison of Parallel Models
      • 4.6 Related Patterns
      • 4.7 Summary
    • Chapter 5. Collectives
      • 5.1 Reduce
      • 5.2 Fusing Map and Reduce
      • 5.3 Dot Product
      • 5.4 Scan
      • 5.5 Fusing Map and Scan
      • 5.6 Integration
      • 5.7 Summary
    • Chapter 6. Data Reorganization
      • 6.1 Gather
      • 6.2 Scatter
      • 6.3 Converting Scatter to Gather
      • 6.4 Pack
      • 6.5 Fusing Map and Pack
      • 6.6 Geometric Decomposition and Partition
      • 6.7 Array of Structures vs. Structures of Arrays
      • 6.8 Summary
    • Chapter 7. Stencil and Recurrence
      • 7.1 Stencil
      • 7.2 Implementing Stencil with Shift
      • 7.3 Tiling Stencils for Cache
      • 7.4 Optimizing Stencils for Communication
      • 7.5 Recurrence
      • 7.6 Summary
    • Chapter 8. Fork–Join
      • 8.1 Definition
      • 8.2 Programming Model Support for Fork–Join
      • 8.3 Recursive Implementation of Map
      • 8.4 Choosing Base Cases
      • 8.5 Load Balancing
      • 8.6 Complexity of Parallel Divide-and-Conquer
      • 8.7 Karatsuba Multiplication of Polynomials
      • 8.8 Cache Locality and Cache-Oblivious Algorithms
      • 8.9 Quicksort
      • 8.10 Reductions and Hyperobjects
      • 8.11 Implementing Scan with Fork–Join
      • 8.12 Applying Fork–Join to Recurrences
      • 8.13 Summary
    • Chapter 9. Pipeline
      • 9.1 Basic Pipeline
      • 9.2 Pipeline with Parallel Stages
      • 9.3 Implementation of a Pipeline
      • 9.4 Programming Model Support for Pipelines
      • 9.5 More General Topologies
      • 9.6 Mandatory versus Optional Parallelism
      • 9.7 Summary
  • PART II. Examples
    • Chapter 10. Forward Seismic Simulation
      • 10.1 Background
      • 10.2 Stencil Computation
      • 10.3 Impact of Caches on Arithmetic Intensity
      • 10.4 Raising Arithmetic Intensity with Space–Time Tiling
      • 10.5 Cilk Plus Code
      • 10.6 ArBB Implementation
      • 10.7 Summary
    • Chapter 11. K-Means Clustering
      • 11.1 Algorithm
      • 11.2 K-Means with Cilk Plus
      • 11.3 K-Means with TBB
      • 11.4 Summary
    • Chapter 12. Bzip2 Data Compression
      • 12.1 The Bzip2 Algorithm
      • 12.2 Three-Stage Pipeline Using TBB
      • 12.3 Four-Stage Pipeline Using TBB
      • 12.4 Three-Stage Pipeline Using Cilk Plus
      • 12.5 Summary
    • Chapter 13. Merge Sort
      • 13.1 Parallel Merge
      • 13.2 Parallel Merge Sort
      • 13.3 Summary
    • Chapter 14. Sample Sort
      • 14.1 Overall Structure
      • 14.2 Choosing the Number of Bins
      • 14.3 Binning
      • 14.4 Repacking and Subsorting
      • 14.5 Performance Analysis of Sample Sort
      • 14.6 For C++ Experts
      • 14.7 Summary
    • Chapter 15. Cholesky Factorization
      • 15.1 Fortran Rules!
      • 15.2 Recursive Cholesky Decomposition
      • 15.3 Triangular Solve
      • 15.4 Symmetric Rank Update
      • 15.5 Where is the Time Spent?
      • 15.6 Summary
  • APPENDIX A. Further Reading
  • APPENDIX B. Cilk Plus
  • APPENDIX C. TBB
  • APPENDIX D. C++11
  • APPENDIX E. Glossary
  • Bibliography
  • Index