Professional Multicore Programming: Design and Implementation for C++ Developers

Book Description

Professional Multicore Programming: Design and Implementation for C++ Developers presents the basics of multicore programming in a simple, easy-to-understand manner so that you can readily apply the concepts to your everyday projects. Learn the fundamentals of programming for multiprocessor and multithreaded architectures, progress to multicore programming, and become comfortable with techniques that can otherwise be difficult to master. This indispensable guide outlines the pitfalls and traps of concurrent programming and synchronization so that you can anticipate them before you encounter them yourself.

Table of Contents

  1. Copyright
  2. About the Authors
  3. Credits
  4. Acknowledgments
  5. Introduction
    1. Learn Multicore Programming
    2. Different Points of View
    3. Multiparadigm Approaches Are the Solution
    4. Why C++?
    5. UML Diagrams
    6. Development Environments Supported
    7. Program Profiles
    8. Testing and Code Reliability
    9. Conventions
    10. Source Code
    11. Errata
    12. p2p.wrox.com
  6. 1. The New Architecture
    1. 1.1. What Is a Multicore?
    2. 1.2. Multicore Architectures
      1. 1.2.1. Hybrid Multicore Architectures
    3. 1.3. The Software Developer's Viewpoint
      1. 1.3.1. The Basic Processor Architecture
      2. 1.3.2. The CPU (Instruction Set)
      3. 1.3.3. Memory Is the Key
      4. 1.3.4. Registers
      5. 1.3.5. Cache
        1. 1.3.5.1. Level 1 Cache
        2. 1.3.5.2. Level 2 Cache
        3. 1.3.5.3. Compiler Switches for Cache?
      6. 1.3.6. Main Memory
    4. 1.4. The Bus Connection
    5. 1.5. From Single Core to Multicore
      1. 1.5.1. Multiprogramming and Multiprocessing
      2. 1.5.2. Parallel Programming
      3. 1.5.3. Multicore Application Design and Implementation
    6. 1.6. Summary
  7. 2. Four Effective Multicore Designs
    1. 2.1. The AMD Multicore Opteron
      1. 2.1.1. Opteron's Direct Connect and HyperTransport
        1. 2.1.1.1. The Direct Connect Architecture
        2. 2.1.1.2. HyperTransport Technology
      2. 2.1.2. System Request Interface and Crossbar
      3. 2.1.3. The Opteron Is NUMA
      4. 2.1.4. Cache and the Multiprocessor Opteron
    2. 2.2. The Sun UltraSparc T1 Multiprocessor
      1. 2.2.1. Program Profile 2-1
        1. 2.2.1.1. Program Name:
        2. 2.2.1.2. Description:
        3. 2.2.1.3. Libraries Required:
        4. 2.2.1.4. Headers Required:
        5. 2.2.1.5. Compile and Link Instructions:
        6. 2.2.1.6. Test Environment:
        7. 2.2.1.7. Hardware:
        8. 2.2.1.8. Execution Instructions:
        9. 2.2.1.9. Notes:
      2. 2.2.2. UltraSparc T1 Cores
      3. 2.2.3. Cross Talk and the Crossbar
      4. 2.2.4. DDRAM Controller and L2 Cache
      5. 2.2.5. UltraSparc T1 and the Sun and GNU gcc Compilers
    3. 2.3. The IBM Cell Broadband Engine
      1. 2.3.1. CBE and Linux
      2. 2.3.2. CBE Memory Models
      3. 2.3.3. Hidden from the Operating System
      4. 2.3.4. Synergistic Processor Unit
    4. 2.4. Intel Core 2 Duo Processor
      1. 2.4.1. Northbridge and Southbridge
      2. 2.4.2. Intel's PCI Express
      3. 2.4.3. Core 2 Duo's Instruction Set
    5. 2.5. Summary
  8. 3. The Challenges of Multicore Programming
    1. 3.1. What Is the Sequential Model?
    2. 3.2. What Is Concurrency?
    3. 3.3. Software Development
      1. 3.3.1. Challenge #1: Software Decomposition
        1. 3.3.1.1. An Example of Decomposition
          1. 3.3.1.1.1. Decomposition #1
          2. 3.3.1.1.2. Decomposition #2
        2. 3.3.1.2. Finding the Right Model
          1. 3.3.1.2.1. What Is a Model?
          2. 3.3.1.2.2. Procedural Models or Declarative Models?
          3. 3.3.1.2.3. One Room at a Time or All at Once?
      2. 3.3.2. Challenge #2: Task-to-Task Communication
        1. 3.3.2.1. Managing IPC Mechanisms
        2. 3.3.2.2. How Will the Painters Communicate?
      3. 3.3.3. Challenge #3: Concurrent Access to Data or Resources by Multiple Tasks or Agents
        1. 3.3.3.1. Problem #1: Data Race
        2. 3.3.3.2. Problem #2: Deadlock
        3. 3.3.3.3. Problem #3: Indefinite Postponement
      4. 3.3.4. Challenge #4: Identifying the Relationships between Concurrently Executing Tasks
        1. 3.3.4.1. The Basic Synchronization Relationships
        2. 3.3.4.2. Timing Considerations
      5. 3.3.5. Challenge #5: Controlling Resource Contention Between Tasks
      6. 3.3.6. Challenge #6: How Many Processes or Threads Are Enough?
      7. 3.3.7. Challenges #7 and #8: Finding Reliable and Reproducible Debugging and Testing
        1. 3.3.7.1. Finding the Right Debugger and Profiler
      8. 3.3.8. Challenge #9: Communicating a Design That Has Multiprocessing Components
      9. 3.3.9. Challenge #10: Implementing Multiprocessing and Multithreading in C++
    4. 3.4. C++ Developers Have to Learn New Libraries
    5. 3.5. Processor Architecture Challenges
    6. 3.6. Summary
  9. 4. The Operating System's Role
    1. 4.1. What Part Does the Operating System Play?
      1. 4.1.1. Providing a Consistent Interface
      2. 4.1.2. Managing Hardware Resources and Other Software Applications
      3. 4.1.3. The Developer's Interaction with the Operating System
      4. 4.1.4. Core Operating System Services
        1. 4.1.4.1. How Do You Get from Tasks to Processes?
        2. 4.1.4.2. Using the Thread Approach
        3. 4.1.4.3. How Do You Get from Tasks to Threads?
      5. 4.1.5. The Application Programmer's Interface
        1. 4.1.5.1. What Is POSIX and Why Use It?
        2. 4.1.5.2. Process Management
        3. 4.1.5.3. Process Management Example: The Game Scenario
        4. 4.1.5.4. Program Profile 4-1
          1. 4.1.5.4.1. Program Name:
          2. 4.1.5.4.2. Description:
          3. 4.1.5.4.3. Libraries Required:
          4. 4.1.5.4.4. User-Defined Headers Required:
          5. 4.1.5.4.5. Compile and Link Instructions:
          6. 4.1.5.4.6. Test Environment:
          7. 4.1.5.4.7. Processors:
          8. 4.1.5.4.8. Notes:
        5. 4.1.5.5. Program Profile 4-2
          1. 4.1.5.5.1. Program Name:
          2. 4.1.5.5.2. Description:
          3. 4.1.5.5.3. Libraries Required:
          4. 4.1.5.5.4. User-Defined Headers Required:
          5. 4.1.5.5.5. Compile and Link Instructions:
          6. 4.1.5.5.6. Test Environment:
          7. 4.1.5.5.7. Processors:
          8. 4.1.5.5.8. Notes:
    2. 4.2. Decomposition and the Operating System's Role
    3. 4.3. Hiding the Operating System's Role
      1. 4.3.1. Taking Advantage of C++ Power of Abstraction and Encapsulation
      2. 4.3.2. Interface Classes for the POSIX APIs
        1. 4.3.2.1. Program Profile 4-3
          1. 4.3.2.1.1. Program Name:
          2. 4.3.2.1.2. Description:
          3. 4.3.2.1.3. Libraries Required:
          4. 4.3.2.1.4. Additional Source Files Needed:
          5. 4.3.2.1.5. User-Defined Headers Required:
          6. 4.3.2.1.6. Compile and Link Instructions:
          7. 4.3.2.1.7. Test Environment:
          8. 4.3.2.1.8. Processors:
          9. 4.3.2.1.9. Notes:
        2. 4.3.2.2. Program Profile 4-4
          1. 4.3.2.2.1. Program Name:
          2. 4.3.2.2.2. Description:
          3. 4.3.2.2.3. Libraries Required:
          4. 4.3.2.2.4. Additional Source Files Needed:
          5. 4.3.2.2.5. User-Defined Headers Required:
          6. 4.3.2.2.6. Compile Instructions:
          7. 4.3.2.2.7. Test Environment:
          8. 4.3.2.2.8. Processors:
          9. 4.3.2.2.9. Notes:
    4. 4.4. Summary
  10. 5. Processes, C++ Interface Classes, and Predicates
    1. 5.1. We Say Multicore, We Mean Multiprocessor
    2. 5.2. What Is a Process?
    3. 5.3. Why Processes and Not Threads?
    4. 5.4. Using posix_spawn()
      1. 5.4.1. The file_actions Parameter
      2. 5.4.2. The attrp Parameter
      3. 5.4.3. A Simple posix_spawn() Example
      4. 5.4.4. The guess_it Program Using posix_spawn
    5. 5.5. Who Is the Parent? Who Is the Child?
    6. 5.6. Processes: A Closer Look
      1. 5.6.1. Process Control Block
      2. 5.6.2. Anatomy of a Process
      3. 5.6.3. Process States
      4. 5.6.4. How Are Processes Scheduled?
    7. 5.7. Monitoring Processes with the ps Utility
    8. 5.8. Setting and Getting Process Priorities
    9. 5.9. What Is a Context Switch?
    10. 5.10. The Activities in Process Creation
      1. 5.10.1. Using the fork() Function Call
      2. 5.10.2. Using the exec() Family of System Calls
        1. 5.10.2.1. The execl() Functions
        2. 5.10.2.2. The execv() Functions
        3. 5.10.2.3. Determining the Restrictions of exec() Functions
    11. 5.11. Working with Process Environment Variables
    12. 5.12. Using system() to Spawn Processes
    13. 5.13. Killing a Process
      1. 5.13.1. The exit() and abort() Calls
      2. 5.13.2. The kill() Function
    14. 5.14. Process Resources
      1. 5.14.1. Types of Resources
      2. 5.14.2. POSIX Functions to Set Resource Limits
    15. 5.15. What Are Asynchronous and Synchronous Processes?
      1. 5.15.1. Synchronous vs. Asynchronous Processes for fork(), posix_spawn(), system(), and exec()
    16. 5.16. The wait() Function Call
    17. 5.17. Predicates, Processes, and Interface Classes
      1. 5.17.1. Program Profile 5-1
        1. 5.17.1.1. Program Name:
        2. 5.17.1.2. Description:
        3. 5.17.1.3. Libraries Required:
        4. 5.17.1.4. Additional Source Files Needed:
        5. 5.17.1.5. User-Defined Headers Required:
        6. 5.17.1.6. Compile and Link Instructions:
        7. 5.17.1.7. Test Environment:
        8. 5.17.1.8. Processors:
        9. 5.17.1.9. Notes:
    18. 5.18. Summary
  11. 6. Multithreading
    1. 6.1. What Is a Thread?
      1. 6.1.1. User- and Kernel-Level Threads
      2. 6.1.2. Thread Context
      3. 6.1.3. Hardware Threads and Software Threads
      4. 6.1.4. Thread Resources
    2. 6.2. Comparing Threads to Processes
      1. 6.2.1. Context Switching
      2. 6.2.2. Throughput
      3. 6.2.3. Communicating between Entities
      4. 6.2.4. Corrupting Process Data
      5. 6.2.5. Killing the Entire Process
      6. 6.2.6. Reuse by Other Programs
      7. 6.2.7. Key Similarities and Differences between Threads and Processes
    3. 6.3. Setting Thread Attributes
    4. 6.4. The Architecture of a Thread
      1. 6.4.1. Thread States
      2. 6.4.2. Scheduling and Thread Contention Scope
      3. 6.4.3. Scheduling Policy and Priority
      4. 6.4.4. Scheduling Allocation Domains
    5. 6.5. A Simple Threaded Program
      1. 6.5.1. Compiling and Linking Threaded Programs
    6. 6.6. Creating Threads
      1. 6.6.1. Passing Arguments to a Thread
      2. 6.6.2. Program Profile 6-1
        1. 6.6.2.1. Program Name:
        2. 6.6.2.2. Description:
        3. 6.6.2.3. Libraries Required:
        4. 6.6.2.4. Headers Required:
        5. 6.6.2.5. Compile and Link Instructions:
        6. 6.6.2.6. Test Environment:
        7. 6.6.2.7. Processors:
        8. 6.6.2.8. Execution Instructions:
        9. 6.6.2.9. Notes:
      3. 6.6.3. Joining Threads
      4. 6.6.4. Getting the Thread Id
        1. 6.6.4.1. Comparing Thread Ids
      5. 6.6.5. Using the Pthread Attribute Object
        1. 6.6.5.1. Default Values for the Attribute Object
        2. 6.6.5.2. Creating Detached Threads Using the Pthread Attribute Object
    7. 6.7. Managing Threads
      1. 6.7.1. Terminating Threads
        1. 6.7.1.1. Self-Termination
        2. 6.7.1.2. Terminating Peer Threads
        3. 6.7.1.3. Understanding the Cancellation Process
          1. 6.7.1.3.1. Using Cancellation Points
          2. 6.7.1.3.2. Taking Advantage of Cancellation-Safe Library Functions and System Calls
          3. 6.7.1.3.3. Cleaning Up before Termination
      2. 6.7.2. Managing the Thread's Stack
        1. 6.7.2.1. Setting the Size of the Stack
        2. 6.7.2.2. Setting the Location of the Thread's Stack
        3. 6.7.2.3. Setting Stack Size and Location with One Function
      3. 6.7.3. Setting Thread Scheduling and Priorities
      4. 6.7.4. Setting Contention Scope of a Thread
      5. 6.7.5. Using sysconf()
      6. 6.7.6. Thread Safety and Libraries
        1. 6.7.6.1. Using Multithreaded Versions of Libraries and Functions
        2. 6.7.6.2. Thread Safe Standard Out
    8. 6.8. Extending the Thread Interface Class
      1. 6.8.1. Program Profile 6-2
        1. 6.8.1.1. Program Name:
        2. 6.8.1.2. Description:
        3. 6.8.1.3. Libraries Required:
        4. 6.8.1.4. Headers Required:
        5. 6.8.1.5. Compile & Link Instructions:
        6. 6.8.1.6. Test Environment:
        7. 6.8.1.7. Processors:
        8. 6.8.1.8. Execution Instructions:
    9. 6.9. Summary
  12. 7. Communication and Synchronization of Concurrent Tasks
    1. 7.1. Communication and Synchronization
      1. 7.1.1. Dependency Relationships
        1. 7.1.1.1. Communication Dependencies
        2. 7.1.1.2. Cooperation Dependencies
      2. 7.1.2. Counting Tasks Dependencies
      3. 7.1.3. What Is Interprocess Communication?
        1. 7.1.3.1. Persistence of IPC
        2. 7.1.3.2. Environment Variables and Command-Line Arguments
        3. 7.1.3.3. Files
        4. 7.1.3.4. File Descriptors
        5. 7.1.3.5. Shared Memory
        6. 7.1.3.6. Using POSIX Shared Memory
        7. 7.1.3.7. Pipes
          1. 7.1.3.7.1. Using Named Pipes (FIFO)
        8. 7.1.3.8. Program Profile 7-1
          1. 7.1.3.8.1. Program Name:
          2. 7.1.3.8.2. Description:
          3. 7.1.3.8.3. Libraries Required:
          4. 7.1.3.8.4. Headers Required:
          5. 7.1.3.8.5. Compile and Link Instructions:
          6. 7.1.3.8.6. Test Environment:
          7. 7.1.3.8.7. Processors:
          8. 7.1.3.8.8. Execution Instructions:
          9. 7.1.3.8.9. Notes:
          10. 7.1.3.8.10. FIFO Interface Class
        9. 7.1.3.9. Message Queue
          1. 7.1.3.9.1. Using a Message Queue
          2. 7.1.3.9.2. posix_queue: The Message Queue Interface Class
      4. 7.1.4. What Are Interthread Communications?
        1. 7.1.4.1. Global Data, Variables, and Data Structures
        2. 7.1.4.2. Program Profile 7-2
          1. 7.1.4.2.1. Program Name:
          2. 7.1.4.2.2. Description:
          3. 7.1.4.2.3. Libraries Required:
          4. 7.1.4.2.4. Headers Required:
          5. 7.1.4.2.5. Compile and Link Instructions:
          6. 7.1.4.2.6. Test Environment:
          7. 7.1.4.2.7. Processors:
          8. 7.1.4.2.8. Execution Instructions:
          9. 7.1.4.2.9. Notes:
        3. 7.1.4.3. Parameters for Interthread Communication
        4. 7.1.4.4. Program Profile 7-3
          1. 7.1.4.4.1. Program Name:
          2. 7.1.4.4.2. Description:
          3. 7.1.4.4.3. Libraries Required:
          4. 7.1.4.4.4. Headers Required:
          5. 7.1.4.4.5. Compile and Link Instructions:
          6. 7.1.4.4.6. Test Environment:
          7. 7.1.4.4.7. Processors:
          8. 7.1.4.4.8. Execution Instructions:
          9. 7.1.4.4.9. Notes:
        5. 7.1.4.5. File Handles for Interthread Communication
    2. 7.2. Synchronizing Concurrency
      1. 7.2.1. Types of Synchronization
      2. 7.2.2. Synchronizing Access to Data
        1. 7.2.2.1. Critical Sections
        2. 7.2.2.2. PRAM Model
          1. 7.2.2.2.1. Concurrent and Exclusive Memory Access
          2. 7.2.2.2.2. Concurrent Tasks: Coordinating Order of Execution
          3. 7.2.2.2.3. Relationships between Cooperating Tasks
          4. 7.2.2.2.4. Start-to-Start (SS) Relationship
          5. 7.2.2.2.5. Finish-to-Start (FS) Relationship
          6. 7.2.2.2.6. Start-to-Finish Relationship
          7. 7.2.2.2.7. Finish-to-Finish Relationship
      3. 7.2.3. Synchronization Mechanisms
        1. 7.2.3.1. Semaphores
          1. 7.2.3.1.1. Basic Semaphore Operations
          2. 7.2.3.1.2. POSIX Semaphores
        2. 7.2.3.2. Program Profile 7-4
          1. 7.2.3.2.1. Program Name:
          2. 7.2.3.2.2. Description:
          3. 7.2.3.2.3. Libraries Required:
          4. 7.2.3.2.4. Headers Required:
          5. 7.2.3.2.5. Compile and Link Instructions:
          6. 7.2.3.2.6. Test Environment:
          7. 7.2.3.2.7. Processors:
          8. 7.2.3.2.8. Execution Instructions:
          9. 7.2.3.2.9. Notes:
          10. 7.2.3.2.10. Mutex Semaphores
          11. 7.2.3.2.11. Using the Mutex Attribute Object
          12. 7.2.3.2.12. Using Mutex Semaphores to Manage Critical Sections
        3. 7.2.3.3. Read-Write Locks
          1. 7.2.3.3.1. Using Read-Write Locks to Implement Access Policy
          2. 7.2.3.3.2. Object-Oriented Mutex Class
        4. 7.2.3.4. Program Profile 7-5
          1. 7.2.3.4.1. Program Name:
          2. 7.2.3.4.2. Description:
          3. 7.2.3.4.3. Libraries Required:
          4. 7.2.3.4.4. Headers Required:
          5. 7.2.3.4.5. Compile and Link Instructions:
          6. 7.2.3.4.6. Test Environment:
          7. 7.2.3.4.7. Processors:
          8. 7.2.3.4.8. Execution Instructions:
          9. 7.2.3.4.9. Notes:
        5. 7.2.3.5. Condition Variables
          1. 7.2.3.5.1. Using Condition Variables to Manage Synchronization Relationships
        6. 7.2.3.6. Program Profile 7-6
          1. 7.2.3.6.1. Program Name:
          2. 7.2.3.6.2. Description:
          3. 7.2.3.6.3. Libraries Required:
          4. 7.2.3.6.4. Headers Required:
          5. 7.2.3.6.5. Compile and Link Instructions:
          6. 7.2.3.6.6. Test Environment:
          7. 7.2.3.6.7. Processors:
          8. 7.2.3.6.8. Execution Instructions:
          9. 7.2.3.6.9. Notes:
        7. 7.2.3.7. Thread-Safe Data Structures
    3. 7.3. Thread Strategy Approaches
      1. 7.3.1. Delegation Model
      2. 7.3.2. Peer-to-Peer Model
      3. 7.3.3. Producer-Consumer Model
      4. 7.3.4. Pipeline Model
      5. 7.3.5. SPMD and MPMD for Threads
    4. 7.4. Decomposition and Encapsulation of Work
      1. 7.4.1. Problem Statement
      2. 7.4.2. Strategy
      3. 7.4.3. Observation
      4. 7.4.4. Problem and Solution
      5. 7.4.5. Simple Agent Model Example of a Pipeline
    5. 7.5. Summary
  13. 8. PADL and PBS: Approaches to Application Design
    1. 8.1. Designing Applications for Massive Multicore Processors
    2. 8.2. What Is PADL?
      1. 8.2.1. Layer 5: Application Architecture Selection
        1. 8.2.1.1. What Are Agents?
        2. 8.2.1.2. What Is a Multiagent Architecture?
        3. 8.2.1.3. From Problem Statements to Multiagent Architectures
          1. 8.2.1.3.1. The Strategy
          2. 8.2.1.3.2. An Observation
          3. 8.2.1.3.3. Problem and Solution Model as Multiagents
        4. 8.2.1.4. Blackboard Architectures
        5. 8.2.1.5. Approaches to Structuring the Blackboard
          1. 8.2.1.5.1. A "Question and Answer Browser" Blackboard Example
          2. 8.2.1.5.2. Where's the Parallelism? Where's the Blackboard?
          3. 8.2.1.5.3. Components of the Solution
          4. 8.2.1.5.4. Knowledge Sources for the Browser Program
          5. 8.2.1.5.5. Is a Blackboard a Good Fit?
          6. 8.2.1.5.6. The Blackboard as an Iterative Shared Solution Space
        6. 8.2.1.6. The Anatomy of a Knowledge Source
        7. 8.2.1.7. Concurrency Flexibility of the Application Architecture
      2. 8.2.2. Layer 4: Concurrency Models in PADL
      3. 8.2.3. Layer 3: The Implementation Model of PADL
        1. 8.2.3.1. C++ Components to the Rescue
          1. 8.2.3.1.1. The C++0x or C++09 Standard
          2. 8.2.3.1.2. A C++0x (C++09) Mutex Interface Class
          3. 8.2.3.1.3. Obtaining Early Implementations of the C++0x Concurrent Programming Libraries
        2. 8.2.3.2. The Intel Threading Building Blocks
          1. 8.2.3.2.1. Blackboard: A Critical Section
        3. 8.2.3.3. Program Profile 8-1
          1. 8.2.3.3.1. Program Name:
          2. 8.2.3.3.2. Description:
          3. 8.2.3.3.3. Libraries Required:
          4. 8.2.3.3.4. Additional Source Files Needed:
          5. 8.2.3.3.5. User-Defined Headers Required:
          6. 8.2.3.3.6. Compile and Link Instructions:
          7. 8.2.3.3.7. Test Environment:
          8. 8.2.3.3.8. Processors:
          9. 8.2.3.3.9. Notes:
          10. 8.2.3.3.10. Obtaining the TBB Library
        4. 8.2.3.4. The Parallel STL Library
        5. 8.2.3.5. The "Implementation Layer" Mapping
        6. 8.2.3.6. PADL: Layer 3 Type Control Strategies
    3. 8.3. The Predicate Breakdown Structure (PBS)
      1. 8.3.1. An Example: PBS for the "Guess-My-Code" Game
      2. 8.3.2. Connecting PBS, PADL, and the SDLC
      3. 8.3.3. Coding the PBS
    4. 8.4. Summary
  14. 9. Modeling Software Systems That Require Concurrency
    1. 9.1. What Is UML?
    2. 9.2. Modeling the Structure of a System
      1. 9.2.1. The Class Model
      2. 9.2.2. Visualizing Classes
        1. 9.2.2.1. Visualizing Class Attributes, Services, and Responsibilities
        2. 9.2.2.2. Using Attribute and Operation Properties
      3. 9.2.3. Ordering the Attributes and Services
      4. 9.2.4. Visualizing Instances of a Class
      5. 9.2.5. Visualizing Template Classes
      6. 9.2.6. Showing the Relationship between Classes and Objects
      7. 9.2.7. Visualizing Interface Classes
      8. 9.2.8. The Organization of Interactive Objects
    3. 9.3. UML and Concurrent Behavior
      1. 9.3.1. Collaborating Objects
      2. 9.3.2. Multitasking and Multithreading with Processes and Threads
        1. 9.3.2.1. Diagramming Active Objects
        2. 9.3.2.2. Showing the Multiple Flows of Control and Communication
      3. 9.3.3. Message Sequences between Objects
      4. 9.3.4. The Activities of Objects
      5. 9.3.5. State Machines
        1. 9.3.5.1. Representing the Parts of a State
        2. 9.3.5.2. Diagramming Concurrent Substates
    4. 9.4. Visualizing the Whole System
    5. 9.5. Summary
  15. 10. Testing and Logical Fault Tolerance for Parallel Programs
    1. 10.1. Can You Just Skip the Testing?
    2. 10.2. Five Concurrency Challenges That Must Be Checked during Testing
    3. 10.3. Failure: The Result of Defects and Faults
      1. 10.3.1. Basic Testing Types
      2. 10.3.2. Defect Removal versus Defect Survival
    4. 10.4. How Do You Approach Defect Removal for Parallel Programs?
      1. 10.4.1. The Problem Statement
      2. 10.4.2. A Simple Strategy and Rough-Cut Solution Model
      3. 10.4.3. A Revised Solution Model Using Layer 5 from PADL
        1. 10.4.3.1. Revised Agent Model
        2. 10.4.3.2. The Concurrency Model for the Agents
      4. 10.4.4. The PBS of the Agent Solution Model
        1. 10.4.4.1. Declarative Implementation of the PBS
        2. 10.4.4.2. Program Profile 10-1
          1. 10.4.4.2.1. Program Name:
          2. 10.4.4.2.2. Description:
          3. 10.4.4.2.3. Libraries Required:
          4. 10.4.4.2.4. Additional Source Files Needed:
          5. 10.4.4.2.5. User-Defined Headers Required:
          6. 10.4.4.2.6. Compile and Link Instructions:
          7. 10.4.4.2.7. Test Environment:
          8. 10.4.4.2.8. Processors:
          9. 10.4.4.2.9. Notes:
        3. 10.4.4.3. How Do You Know This Code Works?
    5. 10.5. What Are the Standard Software Engineering Tests?
      1. 10.5.1. Software Verification and Validation
      2. 10.5.2. The Code Doesn't Work — Now What?
      3. 10.5.3. What Is Logical Fault Tolerance?
        1. 10.5.3.1. The Exception Handler
          1. 10.5.3.1.1. The runtime_error Classes
          2. 10.5.3.1.2. The logic_error Classes
          3. 10.5.3.1.3. Deriving New Exception Classes
          4. 10.5.3.1.4. Protecting the Exception Classes from Exceptions
        2. 10.5.3.2. A Simple Strategy for Implementing Logical Fault Tolerance
        3. 10.5.3.3. Testing and Logical Fault Tolerance
      4. 10.5.4. Predicate Exceptions and Possible Worlds
      5. 10.5.5. What Is Model Checking?
    6. 10.6. Summary
  16. A. UML for Concurrent Design
    1. A.1. Class and Object Diagrams
    2. A.2. Interaction Diagrams
      1. A.2.1. Collaboration Diagrams
      2. A.2.2. Sequence Diagrams
      3. A.2.3. Activity Diagrams
    3. A.3. State Diagrams
    4. A.4. Package Diagrams
  17. B. Concurrency Models
    1. B.1. Interprocess and Interthread Communication
    2. B.2. Boss/Worker Approach 1 with Threads
    3. B.3. Boss/Worker Approach 1 with Processes
    4. B.4. Boss/Worker Approach 2 with Threads
    5. B.5. Boss/Worker Approach 3 with Threads
    6. B.6. Peer-to-Peer Approach 1 with Threads
    7. B.7. Peer-to-Peer Approach 1 with Processes
    8. B.8. Peer-to-Peer Approach 2 with Threads
    9. B.9. Peer-to-Peer Approach 2 with Processes
    10. B.10. Workpile Approach 1
    11. B.11. Workpile Approach 2
    12. B.12. Pipeline Approach with Threads
    13. B.13. Producer/Consumer Approach 1 with Threads
    14. B.14. Producer/Consumer Approach 2 with Threads
    15. B.15. Producer/Consumer Approach 3 with Threads
    16. B.16. Monitor Approach
    17. B.17. Blackboard Approach with Threads
    18. B.18. Data Level Parallelism: SIMD Approach
    19. B.19. Data Level Parallelism: MIMD Approach
    20. B.20. PRAM Model
      1. B.20.1. CRCW — Concurrent Read Concurrent Write
      2. B.20.2. EREW — Exclusive Read Exclusive Write
      3. B.20.3. ERCW — Exclusive Read Concurrent Write
      4. B.20.4. CREW — Concurrent Read Exclusive Write
  18. C. POSIX Standard for Thread Management
    1. pthread_atfork()
    2. pthread_attr_destroy()
    3. pthread_attr_getdetachstate()
    4. pthread_attr_getguardsize()
    5. pthread_attr_getinheritsched()
    6. pthread_attr_getschedparam()
    7. pthread_attr_getschedpolicy()
    8. pthread_attr_getscope()
    9. pthread_attr_getstack()
    10. pthread_attr_getstackaddr()
    11. pthread_attr_getstacksize()
    12. pthread_attr_init()
    13. pthread_attr_setdetachstate()
    14. pthread_attr_setguardsize()
    15. pthread_attr_setinheritsched()
    16. pthread_attr_setschedparam()
    17. pthread_attr_setschedpolicy()
    18. pthread_attr_setscope()
    19. pthread_attr_setstack()
    20. pthread_attr_setstackaddr()
    21. pthread_attr_setstacksize()
    22. pthread_cancel()
    23. pthread_cond_broadcast()
    24. pthread_cond_destroy()
    25. pthread_cond_signal()
    26. pthread_cond_timedwait()
    27. pthread_condattr_destroy()
    28. pthread_condattr_getclock()
    29. pthread_condattr_getpshared()
    30. pthread_condattr_init()
    31. pthread_condattr_setclock()
    32. pthread_condattr_setpshared()
    33. pthread_create()
    34. pthread_detach()
    35. pthread_equal()
    36. pthread_exit()
    37. pthread_getconcurrency()
    38. pthread_getcpuclockid()
    39. pthread_getschedparam()
    40. pthread_getspecific()
    41. pthread_join()
    42. pthread_kill()
    43. pthread_mutex_destroy()
    44. pthread_mutex_getprioceiling()
    45. pthread_mutex_init()
    46. pthread_mutex_lock()
    47. pthread_mutex_setprioceiling()
    48. pthread_mutex_timedlock()
    49. pthread_mutex_trylock()
    50. pthread_mutexattr_destroy()
    51. pthread_mutexattr_getprioceiling()
    52. pthread_mutexattr_getprotocol()
    53. pthread_mutexattr_getpshared()
    54. pthread_mutexattr_gettype()
    55. pthread_mutexattr_init()
    56. pthread_mutexattr_setprioceiling()
    57. pthread_mutexattr_setprotocol()
    58. pthread_mutexattr_setpshared()
    59. pthread_mutexattr_settype()
    60. pthread_once()
    61. pthread_rwlock_destroy()
    62. pthread_rwlock_rdlock()
    63. pthread_rwlock_timedrdlock()
    64. pthread_rwlock_timedwrlock()
    65. pthread_rwlock_tryrdlock()
    66. pthread_rwlock_trywrlock()
    67. pthread_rwlock_unlock()
    68. pthread_rwlock_wrlock()
    69. pthread_rwlockattr_destroy()
    70. pthread_rwlockattr_getpshared()
    71. pthread_rwlockattr_init()
    72. pthread_rwlockattr_setpshared()
    73. pthread_self()
    74. pthread_setcancelstate()
    75. pthread_setconcurrency()
    76. pthread_setschedparam()
    77. pthread_setschedprio()
    78. pthread_setspecific()
    79. pthread_testcancel()
  19. D. POSIX Standard for Process Management
    1. posix_spawn()
    2. posix_spawn_file_actions_addclose()
    3. posix_spawn_file_actions_adddup2()
    4. posix_spawn_file_actions_addopen()
    5. posix_spawn_file_actions_destroy()
    6. posix_spawnattr_destroy()
    7. posix_spawnattr_getflags()
    8. posix_spawnattr_getpgroup()
    9. posix_spawnattr_getschedparam()
    10. posix_spawnattr_getschedpolicy()
  20. Bibliography