Professional Parallel Programming with C#: Master Parallel Extensions With .NET 4

  1. Copyright
  2. CREDITS
  3. ABOUT THE AUTHOR
  4. ABOUT THE TECHNICAL EDITOR
  5. ACKNOWLEDGMENTS
  6. FOREWORD
  7. INTRODUCTION
    1. WHO THIS BOOK IS FOR
    2. WHAT THIS BOOK COVERS
    3. HOW THIS BOOK IS STRUCTURED
    4. WHAT YOU NEED TO USE THIS BOOK
    5. CONVENTIONS
    6. SOURCE CODE
    7. ERRATA
    8. P2P.WROX.COM
  8. 1. Task-Based Programming
    1. 1.1. WORKING WITH SHARED-MEMORY MULTICORE
      1. 1.1.1. Differences Between Shared-Memory Multicore and Distributed-Memory Systems
      2. 1.1.2. Parallel Programming and Multicore Programming
    2. 1.2. UNDERSTANDING HARDWARE THREADS AND SOFTWARE THREADS
    3. 1.3. UNDERSTANDING AMDAHL'S LAW
    4. 1.4. CONSIDERING GUSTAFSON'S LAW
    5. 1.5. WORKING WITH LIGHTWEIGHT CONCURRENCY
    6. 1.6. CREATING SUCCESSFUL TASK-BASED DESIGNS
      1. 1.6.1. Designing With Concurrency in Mind
      2. 1.6.2. Understanding the Differences between Interleaved Concurrency, Concurrency, and Parallelism
      3. 1.6.3. Parallelizing Tasks
      4. 1.6.4. Minimizing Critical Sections
      5. 1.6.5. Understanding Rules for Parallel Programming for Multicore
    7. 1.7. PREPARING FOR NUMA AND HIGHER SCALABILITY
    8. 1.8. DECIDING THE CONVENIENCE OF GOING PARALLEL
    9. 1.9. SUMMARY
  9. 2. Imperative Data Parallelism
    1. 2.1. LAUNCHING PARALLEL TASKS
      1. 2.1.1. System.Threading.Tasks.Parallel Class
      2. 2.1.2. Parallel.Invoke
    2. 2.2. TRANSFORMING SEQUENTIAL CODE TO PARALLEL CODE
      1. 2.2.1. Detecting Parallelizable Hotspots
      2. 2.2.2. Measuring Speedups Achieved by Parallel Execution
      3. 2.2.3. Understanding the Concurrent Execution
    3. 2.3. PARALLELIZING LOOPS
      1. 2.3.1. Parallel.For
      2. 2.3.2. Parallel.ForEach
      3. 2.3.3. Exiting from Parallel Loops
    4. 2.4. SPECIFYING THE DESIRED DEGREE OF PARALLELISM
      1. 2.4.1. ParallelOptions
      2. 2.4.2. Counting Hardware Threads
      3. 2.4.3. Logical Cores Aren't Physical Cores
    5. 2.5. USING GANTT CHARTS TO DETECT CRITICAL SECTIONS
    6. 2.6. SUMMARY
  10. 3. Imperative Task Parallelism
    1. 3.1. CREATING AND MANAGING TASKS
      1. 3.1.1. System.Threading.Tasks.Task
      2. 3.1.2. Understanding a Task's Status and Lifecycle
      3. 3.1.3. Using Tasks to Parallelize Code
      4. 3.1.4. Waiting for Tasks to Finish
      5. 3.1.5. Forgetting About Complex Threads
      6. 3.1.6. Cancelling Tasks Using Tokens
      7. 3.1.7. Returning Values from Tasks
      8. 3.1.8. TaskCreationOptions
      9. 3.1.9. Chaining Multiple Tasks Using Continuations
      10. 3.1.10. Preparing the Code for Concurrency and Parallelism
    2. 3.2. SUMMARY
  11. 4. Concurrent Collections
    1. 4.1. UNDERSTANDING THE FEATURES OFFERED BY CONCURRENT COLLECTIONS
      1. 4.1.1. System.Collections.Concurrent
      2. 4.1.2. ConcurrentQueue
      3. 4.1.3. Understanding a Parallel Producer-Consumer Pattern
      4. 4.1.4. ConcurrentStack
      5. 4.1.5. Transforming Arrays and Unsafe Collections into Concurrent Collections
      6. 4.1.6. ConcurrentBag
      7. 4.1.7. IProducerConsumerCollection
      8. 4.1.8. BlockingCollection
      9. 4.1.9. ConcurrentDictionary
    2. 4.2. SUMMARY
  12. 5. Coordination Data Structures
    1. 5.1. USING CARS AND LANES TO UNDERSTAND THE CONCURRENCY NIGHTMARES
      1. 5.1.1. Undesired Side Effects
      2. 5.1.2. Race Conditions
      3. 5.1.3. Deadlocks
      4. 5.1.4. A Lock-Free Algorithm with Atomic Operations
      5. 5.1.5. A Lock-Free Algorithm with Local Storage
    2. 5.2. UNDERSTANDING NEW SYNCHRONIZATION MECHANISMS
    3. 5.3. WORKING WITH SYNCHRONIZATION PRIMITIVES
      1. 5.3.1. Synchronizing Concurrent Tasks with Barriers
      2. 5.3.2. Barrier and ContinueWhenAll
      3. 5.3.3. Catching Exceptions in all Participating Tasks
      4. 5.3.4. Working with Timeouts
      5. 5.3.5. Working with a Dynamic Number of Participants
    4. 5.4. WORKING WITH MUTUAL-EXCLUSION LOCKS
      1. 5.4.1. Working with Monitor
      2. 5.4.2. Working with Timeouts for Locks
      3. 5.4.3. Refactoring Code to Avoid Locks
    5. 5.5. USING SPIN LOCKS AS MUTUAL-EXCLUSION LOCK PRIMITIVES
      1. 5.5.1. Working with Timeouts
      2. 5.5.2. Working with Spin-Based Waiting
      3. 5.5.3. Spinning and Yielding
      4. 5.5.4. Using the Volatile Modifier
    6. 5.6. WORKING WITH LIGHTWEIGHT MANUAL RESET EVENTS
      1. 5.6.1. Working with ManualResetEventSlim to Spin and Wait
      2. 5.6.2. Working with Timeouts and Cancellations
      3. 5.6.3. Working with ManualResetEvent
    7. 5.7. LIMITING CONCURRENCY TO ACCESS A RESOURCE
      1. 5.7.1. Working with SemaphoreSlim
      2. 5.7.2. Working with Timeouts and Cancellations
      3. 5.7.3. Working with Semaphore
    8. 5.8. SIMPLIFYING DYNAMIC FORK AND JOIN SCENARIOS WITH COUNTDOWNEVENT
    9. 5.9. WORKING WITH ATOMIC OPERATIONS
    10. 5.10. SUMMARY
  13. 6. PLINQ: Declarative Data Parallelism
    1. 6.1. TRANSFORMING LINQ INTO PLINQ
      1. 6.1.1. ParallelEnumerable and Its AsParallel Method
      2. 6.1.2. AsOrdered and the orderby Clause
    2. 6.2. SPECIFYING THE EXECUTION MODE
    3. 6.3. UNDERSTANDING PARTITIONING IN PLINQ
    4. 6.4. PERFORMING REDUCTION OPERATIONS WITH PLINQ
    5. 6.5. CREATING CUSTOM PLINQ AGGREGATE FUNCTIONS
    6. 6.6. CONCURRENT PLINQ TASKS
    7. 6.7. CANCELLING PLINQ
    8. 6.8. SPECIFYING THE DESIRED DEGREE OF PARALLELISM
      1. 6.8.1. WithDegreeOfParallelism
      2. 6.8.2. Measuring Scalability
    9. 6.9. WORKING WITH FORALL
      1. 6.9.1. Differences Between foreach and ForAll
      2. 6.9.2. Measuring Scalability
    10. 6.10. CONFIGURING HOW RESULTS ARE RETURNED BY USING WITHMERGEOPTIONS
    11. 6.11. HANDLING EXCEPTIONS THROWN BY PLINQ
    12. 6.12. USING PLINQ TO EXECUTE MAPREDUCE ALGORITHMS
    13. 6.13. DESIGNING SERIAL STAGES USING PLINQ
      1. 6.13.1. Locating Processing Bottlenecks
    14. 6.14. SUMMARY
  14. 7. Visual Studio 2010 Task Debugging Capabilities
    1. 7.1. TAKING ADVANTAGE OF MULTI-MONITOR SUPPORT
    2. 7.2. UNDERSTANDING THE PARALLEL TASKS DEBUGGER WINDOW
    3. 7.3. VIEWING THE PARALLEL STACKS DIAGRAM
    4. 7.4. FOLLOWING THE CONCURRENT CODE
      1. 7.4.1. Debugging Anonymous Methods
      2. 7.4.2. Viewing Methods
      3. 7.4.3. Viewing Threads in the Source Code
    5. 7.5. DETECTING DEADLOCKS
    6. 7.6. SUMMARY
  15. 8. Thread Pools
    1. 8.1. GOING DOWNSTAIRS FROM THE TASKS FLOOR
    2. 8.2. UNDERSTANDING THE NEW CLR 4 THREAD POOL ENGINE
      1. 8.2.1. Understanding Global Queues
      2. 8.2.2. Waiting for Worker Threads to Finish Their Work
      3. 8.2.3. Tracking a Dynamic Number of Worker Threads
      4. 8.2.4. Using Tasks Instead of Threads to Queue Jobs
      5. 8.2.5. Understanding the Relationship Between Tasks and the Thread Pool
      6. 8.2.6. Understanding Local Queues and the Work-Stealing Algorithm
      7. 8.2.7. Specifying a Custom Task Scheduler
    3. 8.3. SUMMARY
  16. 9. Asynchronous Programming Model
    1. 9.1. MIXING ASYNCHRONOUS PROGRAMMING WITH TASKS
      1. 9.1.1. Working with TaskFactory.FromAsync
      2. 9.1.2. Programming Continuations After Asynchronous Methods End
      3. 9.1.3. Combining Results from Multiple Concurrent Asynchronous Operations
      4. 9.1.4. Performing Asynchronous WPF UI Updates
      5. 9.1.5. Performing Asynchronous Windows Forms UI Updates
      6. 9.1.6. Creating Tasks that Perform EAP Operations
      7. 9.1.7. Working with TaskCompletionSource
    2. 9.2. SUMMARY
  17. 10. Parallel Testing and Tuning
    1. 10.1. PREPARING PARALLEL TESTS
      1. 10.1.1. Working with Performance Profiling Features
      2. 10.1.2. Measuring Concurrency
    2. 10.2. SOLUTIONS TO COMMON PATTERNS
      1. 10.2.1. Serialized Execution
      2. 10.2.2. Lock Contention
      3. 10.2.3. Lock Convoys
      4. 10.2.4. Oversubscription
      5. 10.2.5. Undersubscription
      6. 10.2.6. Partitioning Problems
      7. 10.2.7. Workstation Garbage-Collection Overhead
      8. 10.2.8. Working with the Server Garbage Collector
      9. 10.2.9. I/O Bottlenecks
      10. 10.2.10. Main Thread Overload
    3. 10.3. UNDERSTANDING FALSE SHARING
    4. 10.4. SUMMARY
  18. 11. Vectorization, SIMD Instructions, and Additional Parallel Libraries
    1. 11.1. UNDERSTANDING SIMD AND VECTORIZATION
    2. 11.2. FROM MMX TO SSE4.X AND AVX
    3. 11.3. USING THE INTEL MATH KERNEL LIBRARY
      1. 11.3.1. Working with Multicore-Ready, Highly Optimized Software Functions
      2. 11.3.2. Mixing Task-Based Programming with External Optimized Libraries
      3. 11.3.3. Generating Pseudo-Random Numbers in Parallel
    4. 11.4. USING INTEL INTEGRATED PERFORMANCE PRIMITIVES
    5. 11.5. SUMMARY
  19. A. .NET 4 Parallelism Class Diagrams
    1. A.1. TASK PARALLEL LIBRARY
      1. A.1.1. System.Threading.Tasks.Parallel Classes and Structures
      2. A.1.2. Task Classes, Enumerations, and Exceptions
    2. A.2. DATA STRUCTURES FOR COORDINATION IN PARALLEL PROGRAMMING
      1. A.2.1. Concurrent Collection Classes: System.Collections.Concurrent
      2. A.2.2. Lightweight Synchronization Primitives
      3. A.2.3. Lazy Initialization Classes
    3. A.3. PLINQ
    4. A.4. THREADING
      1. A.4.1. Thread and ThreadPool Classes and Their Exceptions
      2. A.4.2. Signaling Classes
      3. A.4.3. Threading Structures, Delegates, and Enumerations
      4. A.4.4. BackgroundWorker Component
  20. B. Concurrent UML Models
    1. B.1. STRUCTURE DIAGRAMS
      1. B.1.1. Class Diagram
      2. B.1.2. Component Diagram
      3. B.1.3. Deployment Diagram
      4. B.1.4. Package Diagram
    2. B.2. BEHAVIOR DIAGRAMS
      1. B.2.1. Activity Diagram
      2. B.2.2. Use Case Diagram
    3. B.3. INTERACTION DIAGRAMS
      1. B.3.1. Interaction Overview Diagram
      2. B.3.2. Sequence Diagram
  21. C. Parallel Extensions Extras
    1. C.1. INSPECTING PARALLEL EXTENSIONS EXTRAS
    2. C.2. COORDINATION DATA STRUCTURES
    3. C.3. EXTENSIONS
    4. C.4. PARALLEL ALGORITHMS
    5. C.5. PARTITIONERS
    6. C.6. TASK SCHEDULERS

Chapter 8. Thread Pools

WHAT'S IN THIS CHAPTER?

  • Understanding the improved thread pool engine

  • Requesting work items to run in threads in the thread pool

  • Using lightweight synchronization primitives with threads

  • Coordinating worker threads

  • Using tasks instead of threads to queue jobs

  • Understanding local queues, work-stealing mechanisms, and fine-grained parallelism

  • Specifying a custom task scheduler

This chapter is about the changes that .NET Framework 4 introduced in the Common Language Runtime (CLR) thread pool engine. It is important to understand the differences between using tasks and directly requesting work items to run on threads in the thread pool. If you have worked with the thread pool in previous .NET versions, you can take advantage of the new improvements and migrate your code to a task-based programming model. This chapter also provides an example of a customized task scheduler.
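The contrast between the two models can be sketched in a few lines. This is an illustrative example, not code from the book: a classic work item queued with ThreadPool.QueueUserWorkItem offers no built-in way to wait for completion or retrieve a result, so the code signals manually, whereas a Task makes both first-class.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ThreadPoolVersusTasks
{
    static void Main()
    {
        // Classic thread pool model: queue a work item. There is no
        // built-in way to wait for it or get a result back, so we
        // signal completion manually with a lightweight event.
        using (var done = new ManualResetEventSlim(false))
        {
            ThreadPool.QueueUserWorkItem(_ =>
            {
                Console.WriteLine("Work item running on a pool thread.");
                done.Set();
            });
            done.Wait();
        }

        // Task-based model: the task also runs on a pool thread, but
        // waiting and returning a value are part of the API.
        // (Task.Factory.StartNew is the .NET 4 idiom; Task.Run
        // arrived later, in .NET 4.5.)
        Task<int> task = Task.Factory.StartNew(() => 21 * 2);
        Console.WriteLine("Task result: {0}", task.Result);
    }
}
```

Accessing task.Result blocks the calling thread until the task finishes, which replaces the manual signaling required in the first half of the sketch.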

GOING DOWNSTAIRS FROM THE TASKS FLOOR

In previous chapters, you created tasks to parallelize the execution of code. In some cases, you didn't write statements to create Task instances; instead, you used .NET Framework 4's new classes and methods that created the necessary tasks to parallelize the execution. For example, Parallel.Invoke, Parallel.For, Parallel.ForEach, and PLINQ (among others) create tasks under the hood.
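As a reminder of that layering, a single Parallel.Invoke call is enough to have the Framework create and schedule the underlying tasks for you. This is a minimal sketch of that pattern, not code from the book:

```csharp
using System;
using System.Threading.Tasks;

class ParallelInvokeSketch
{
    static void Main()
    {
        // Parallel.Invoke creates the necessary tasks under the hood,
        // potentially runs the actions in parallel, and blocks until
        // every action has completed. The order in which the three
        // messages appear is not deterministic.
        Parallel.Invoke(
            () => Console.WriteLine("First action"),
            () => Console.WriteLine("Second action"),
            () => Console.WriteLine("Third action"));

        // This line always runs after all three actions finish.
        Console.WriteLine("All actions finished.");
    }
}
```

No Task instance appears in the code above, yet tasks are doing the work; that is the sense in which the Tasks floor sits on top of the thread pool engine.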

Figure 8-1 shows a simple staircase with three floors: Tasks, Threads, and the CLR thread pool engine. The Tasks floor typically has some tasks assigned to threads and other tasks waiting to be assigned to threads. If ...
