An Introduction to Parallel Programming

Book Description

Author Peter Pacheco uses a tutorial approach to show students how to develop effective parallel programs with MPI, Pthreads, and OpenMP. The first undergraduate text to directly address compiling and running parallel programs on multi-core and cluster architectures, An Introduction to Parallel Programming explains how to design, debug, and evaluate the performance of distributed and shared-memory programs. User-friendly exercises teach students how to compile, run, and modify example programs.

Key features:
  • Takes a tutorial approach, starting with small programming examples and building progressively to more challenging examples
  • Focuses on designing, debugging and evaluating the performance of distributed and shared-memory programs
  • Explains how to develop parallel programs using MPI, Pthreads, and OpenMP programming models
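To give a sense of the kind of program the book builds toward with these APIs, here is a minimal MPI "hello, world" in C. It is an illustrative sketch in the spirit of the book's early MPI material, not code reproduced from the text; the compiler wrapper and launcher names below assume a typical MPI installation.

    #include <stdio.h>
    #include <mpi.h>    /* MPI_Init, MPI_Comm_size, MPI_Comm_rank, MPI_Finalize */

    int main(int argc, char* argv[]) {
        int comm_sz;    /* number of processes in MPI_COMM_WORLD */
        int my_rank;    /* this process's rank: 0, 1, ..., comm_sz-1 */

        MPI_Init(&argc, &argv);                    /* start up MPI */
        MPI_Comm_size(MPI_COMM_WORLD, &comm_sz);   /* how many processes are running? */
        MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);   /* which one is this? */

        printf("Hello from process %d of %d\n", my_rank, comm_sz);

        MPI_Finalize();                            /* shut down MPI */
        return 0;
    }

With a typical installation this would be compiled with mpicc -o mpi_hello mpi_hello.c and run with mpiexec -n 4 ./mpi_hello, so that each of the four processes prints its own greeting.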
Table of Contents

    Cover Image
    Table of Contents
    In Praise of An Introduction to Parallel Programming
    Recommended Reading List
    Front Matter
    Copyright
    Dedication
    Preface
    Acknowledgments
    About the Author
    Chapter 1. Why Parallel Computing?
      1.1. Why We Need Ever-Increasing Performance
      1.2. Why We're Building Parallel Systems
      1.3. Why We Need to Write Parallel Programs
      1.4. How Do We Write Parallel Programs?
      1.5. What We'll Be Doing
      1.6. Concurrent, Parallel, Distributed
      1.7. The Rest of the Book
      1.8. A Word of Warning
      1.9. Typographical Conventions
      1.10. Summary
      1.11. Exercises
    Chapter 2. Parallel Hardware and Parallel Software
      2.1. Some Background
      2.2. Modifications to the von Neumann Model
      2.3. Parallel Hardware
      2.4. Parallel Software
      2.5. Input and Output
      2.6. Performance
      2.7. Parallel Program Design
      2.8. Writing and Running Parallel Programs
      2.9. Assumptions
      2.10. Summary
      2.11. Exercises
    Chapter 3. Distributed-Memory Programming with MPI
      3.1. Getting Started
      3.2. The Trapezoidal Rule in MPI
      3.3. Dealing with I/O
      3.4. Collective Communication
      3.5. MPI Derived Datatypes
      3.6. Performance Evaluation of MPI Programs
      3.7. A Parallel Sorting Algorithm
      3.8. Summary
      3.9. Exercises
      3.10. Programming Assignments
    Chapter 4. Shared-Memory Programming with Pthreads
      4.1. Processes, Threads, and Pthreads
      4.2. Hello, World
      4.3. Matrix-Vector Multiplication
      4.4. Critical Sections
      4.5. Busy-Waiting
      4.6. Mutexes
      4.7. Producer-Consumer Synchronization and Semaphores
      4.8. Barriers and Condition Variables
      4.9. Read-Write Locks
      4.10. Caches, Cache Coherence, and False Sharing
      4.11. Thread-Safety
      4.12. Summary
      4.13. Exercises
      4.14. Programming Assignments
    Chapter 5. Shared-Memory Programming with OpenMP
      5.1. Getting Started
      5.2. The Trapezoidal Rule
      5.3. Scope of Variables
      5.4. The Reduction Clause
      5.5. The parallel for Directive
      5.6. More About Loops in OpenMP: Sorting
      5.7. Scheduling Loops
      5.8. Producers and Consumers
      5.9. Caches, Cache Coherence, and False Sharing
      5.10. Thread-Safety
      5.11. Summary
      5.12. Exercises
      5.13. Programming Assignments
    Chapter 6. Parallel Program Development
      6.1. Two n-Body Solvers
      6.2. Tree Search
      6.3. A Word of Caution
      6.4. Which API?
      6.5. Summary
      6.6. Exercises
      6.7. Programming Assignments
    Chapter 7. Where to Go from Here
    References
    Index