Chapter 24. Concurrent and Multicore Programming

As we write this book, the landscape of CPU architecture is changing more rapidly than it has in decades.

Defining Concurrency and Parallelism

A concurrent program needs to perform several possibly unrelated tasks at the same time. Consider the example of a game server: it is typically composed of dozens of components, each of which has complicated interactions with the outside world. One component might handle multiuser chat; several more might process players' inputs and feed state updates back to them; yet another might perform physics calculations.
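To make this concrete, here is a minimal sketch of how such independent components might be launched as lightweight threads with GHC's forkIO. The component names and their one-line bodies are invented placeholders for real server loops, and the MVar exists only so that main waits for all three threads to finish.

    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)
    import Control.Monad (replicateM_)

    -- Hypothetical stand-ins for real server components, each of
    -- which would loop, talking to its own slice of the outside world.
    chatLoop, inputLoop, physicsLoop :: IO ()
    chatLoop    = putStrLn "handling multiuser chat"
    inputLoop   = putStrLn "processing player input"
    physicsLoop = putStrLn "running a physics step"

    main :: IO ()
    main = do
      done <- newEmptyMVar
      -- Each component runs in its own lightweight Haskell thread.
      mapM_ (\task -> forkIO (task >> putMVar done ()))
            [chatLoop, inputLoop, physicsLoop]
      -- Block until all three threads have signalled completion.
      replicateM_ 3 (takeMVar done)

The MVar is simply the easiest way to let main wait here; a real server would instead keep each component's loop running for the lifetime of the process.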

The correct operation of a concurrent program does not require multiple cores, though they may improve performance and responsiveness.

In contrast, a parallel program solves a single problem. Consider a financial model that attempts to predict the next minute of fluctuations in the price of a single stock. If we want to apply this model to every stock listed on an exchange—for example, to estimate which ones we should buy and sell—we hope to get an answer more quickly if we run the model on 500 cores than if we use just 1. As this suggests, a parallel program does not usually depend on the presence of multiple cores to work correctly.
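The shape of that computation is easy to sketch using the parMap combinator from the parallel package, which this chapter returns to in detail. The model below is a trivial stand-in, not a real predictor; to actually use several cores, the program must be compiled with -threaded and run with +RTS -N.

    import Control.Parallel.Strategies (parMap, rdeepseq)

    -- A trivial stand-in for a real financial model: predict the next
    -- price of one stock from its recent price history.
    predictNextMinute :: [Double] -> Double
    predictNextMinute history = sum history / fromIntegral (length history)

    -- Every stock is independent of the others, so the predictions can
    -- be evaluated in parallel across however many cores are available.
    predictAll :: [[Double]] -> [Double]
    predictAll = parMap rdeepseq predictNextMinute

Run on a single core, predictAll computes exactly the same answers, only more slowly; the extra cores affect performance, not correctness.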

Another useful distinction between concurrent and parallel programs lies in their interaction with the outside world. By definition, a concurrent program deals continuously with networking protocols, databases, and the like. A typical parallel program is likely to be more focused: it streams data in, crunches it for a while with little further I/O, then streams results back out.
