Chapter 10. Parallelism Using Async

Async provides a great opportunity to make more use of the parallelism of modern machines. The language feature makes previously difficult ways of structuring programs easier.

For starters, we’ve already seen that we can write simple code to start multiple long-running operations, such as network requests, which then proceed in parallel. Using tools like Task.WhenAll, async code can be very efficient at this kind of operation, one that involves no local computation. When local computation is involved, however, async on its own doesn’t help: until a source of asynchrony is reached, all the code you write runs synchronously on the calling thread.
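For example, a minimal sketch of that fan-out shape might look like the following; the URLs, the class name, and the length sum are placeholders rather than anything from the book:

using System.Net.Http;
using System.Threading.Tasks;

class DownloadExample
{
    // Both requests are started before either is awaited, so they are in
    // flight at the same time; Task.WhenAll then waits for both results.
    public static async Task<int> TotalLengthAsync()
    {
        using (var client = new HttpClient())
        {
            Task<string> first = client.GetStringAsync("http://example.com/a");
            Task<string> second = client.GetStringAsync("http://example.com/b");

            string[] pages = await Task.WhenAll(first, second);
            return pages[0].Length + pages[1].Length;
        }
    }
}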

await and locks

The simplest way to introduce parallelism is to schedule work on different threads. Task.Run makes this easy, and because it returns a Task, we can treat that work like any other long-running operation. But using multiple threads introduces the risk of unsafe access to shared objects in memory.
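As a sketch of that idea (the average calculation here just stands in for some genuinely expensive computation), CPU-bound work can be handed to the thread pool and awaited like any other Task:

using System.Linq;
using System.Threading.Tasks;

class BackgroundWork
{
    // Task.Run pushes the CPU-bound calculation onto a thread pool thread;
    // the returned Task is awaited just like a Task from a network call.
    public static async Task<double> AverageAsync(int[] numbers)
    {
        return await Task.Run(() => numbers.Average());
    }
}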

The traditional solution, the lock keyword, is more complicated to use with async, as we discussed in “lock Blocks”. The await keyword can’t be used inside a lock block, so there’s no way to prevent conflicting code from running while you’re awaiting something. In fact, it’s best to avoid holding any resource across an await keyword. The whole point of async is that resources are released while awaiting, and as programmers we need to be aware that anything can happen during that time. When shared state does need protecting, take the lock briefly before the await and then again afterward:

lock (sync)
{
    // Prepare ...
}

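The rest of the pattern is to release the lock before awaiting, then take it again once the result is back, because other code may have touched the shared state in the meantime. Here is a minimal sketch of that shape, assuming an invented GetValueAsync method and invented sync and total fields rather than anything from the book:

using System.Threading.Tasks;

class LockedAccumulator
{
    private readonly object sync = new object();
    private int total;

    // The lock is taken before and again after the await, never held across it.
    public async Task UpdateTotalAsync()
    {
        lock (sync)
        {
            // Prepare: read or adjust shared state before starting the operation
        }

        int result = await GetValueAsync();   // no lock is held while awaiting

        lock (sync)
        {
            // Use the result: other threads may have changed shared state meanwhile
            total += result;
        }
    }

    // Placeholder for a genuinely asynchronous operation.
    private Task<int> GetValueAsync()
    {
        return Task.FromResult(42);
    }
}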