Topics in Parallel and Distributed Computing

Book Description

Topics in Parallel and Distributed Computing provides resources and guidance for those learning PDC as well as those teaching students new to the discipline.

The pervasiveness of computing devices containing multicore CPUs and GPUs, including home and office PCs, laptops, and mobile devices, is making even common users dependent on parallel processing. It is no longer sufficient even for basic programmers to acquire only traditional sequential programming skills. These trends point to the need for imparting a broad-based skill set in PDC technology.

However, rapid changes in computing hardware platforms and devices, languages, supporting programming environments, and research advances pose a challenge both for newcomers and seasoned computer scientists.

This edited collection has been developed over the past several years in conjunction with the IEEE technical committee on parallel processing (TCPP), which held several workshops and discussions on learning parallel computing and integrating parallel concepts into courses throughout computer science curricula.



  • Contributed and developed by the leading minds in parallel computing research and instruction
  • Provides resources and guidance for those learning PDC as well as those teaching students new to the discipline
  • Succinctly addresses a range of parallel and distributed computing topics
  • Pedagogically designed to ensure understanding by experienced engineers and newcomers
  • Developed over the past several years in conjunction with the IEEE technical committee on parallel processing (TCPP), which held several workshops and discussions on learning parallel computing and integrating parallel concepts

Table of Contents

  1. Cover image
  2. Title page
  3. Table of Contents
  4. Copyright
  5. Contributors
  6. Editor and author biographical sketches
    1. Editors
    2. Authors
  7. Symbol or phrase
  8. Chapter 1: Editors’ introduction and road map
    1. Abstract
    2. 1.1 Why this book?
    3. 1.2 Chapter introductions
    4. 1.3 How to find a topic or material for a course
    5. 1.4 Invitation to write for volume 2
  9. Part 1: For Instructors
    1. Chapter 2: Hands-on parallelism with no prerequisites and little time using Scratch
      1. Abstract
      2. 2.1 Contexts for application
      3. 2.2 Introduction to Scratch
      4. 2.3 Parallel computing and Scratch
      5. 2.4 Conclusion
    2. Chapter 3: Parallelism in Python for novices
      1. Abstract
      2. 3.1 Introduction
      3. 3.2 Background
      4. 3.3 Student prerequisites
      5. 3.4 General approach: parallelism as a medium
      6. 3.5 Course materials
      7. 3.6 Processes
      8. 3.7 Communication
      9. 3.8 Speedup
      10. 3.9 Further examples using the Pool/map paradigm
      11. 3.10 Conclusion
    3. Chapter 4: Modules for introducing threads
      1. Abstract
      2. 4.1 Introduction
      3. 4.2 Prime counting
      4. 4.3 Mandelbrot
    4. Chapter 5: Introducing parallel and distributed computing concepts in digital logic
      1. Abstract
      2. 5.1 Number representation
      3. 5.2 Logic gates
      4. 5.3 Combinational logic synthesis and analysis
      5. 5.4 Combinational building blocks
      6. 5.5 Counters and registers
      7. 5.6 Other digital logic topics
    5. Chapter 6: Networks and MPI for cluster computing
      1. Abstract
      2. 6.1 Why message passing/MPI?
      3. 6.2 The message passing concept
      4. 6.3 High-performance networks
      5. 6.4 Advanced concepts
  10. Part 2: For Students
    1. Chapter 7: Fork-join parallelism with a data-structures focus
      1. Abstract
      2. Acknowledgments
      3. 7.1 Meta-introduction: an instructor’s view of this material
      4. 7.2 Introduction
      5. 7.3 Basic fork-join parallelism
      6. 7.4 Analyzing fork-join algorithms
      7. 7.5 Fancier fork-join algorithms: prefix, pack, sort
    2. Chapter 8: Shared-memory concurrency control with a data-structures focus
      1. Abstract
      2. 8.1 Introduction
      3. 8.2 The programming model
      4. 8.3 Synchronization with locks
      5. 8.4 Race conditions: bad interleavings and data races
      6. 8.5 Concurrency programming guidelines
      7. 8.6 Deadlock
      8. 8.7 Additional synchronization primitives
      9. Acknowledgments
    3. Chapter 9: Parallel computing in a Python-based computer science course
      1. Abstract
      2. 9.1 Parallel programming
      3. 9.2 Parallel reduction
      4. 9.3 Parallel scanning
      5. 9.4 Copy-scans
      6. 9.5 Partitioning in parallel
      7. 9.6 Parallel quicksort
      8. 9.7 How to perform segmented scans and reductions
      9. 9.8 Comparing sequential and parallel running times
    4. Chapter 10: Parallel programming illustrated through Conway’s Game of Life
      1. Abstract
      2. 10.1 Introduction
      3. 10.2 Parallel variants
      4. 10.3 Advanced topics
      5. 10.4 Summary
  11. Appendix A: Chapters and topics
  12. Index