In Chapter 19, we saw how a thread provides a parallel execution path. We took for granted that whenever you needed to run something in parallel, you could assign a new or pooled thread to the job. Although this usually holds true, there are exceptions. Suppose you were writing a TCP sockets or web server application that needed to process 1,000 concurrent requests. If you dedicated a thread to each incoming request, you would consume a gigabyte of memory purely on thread overhead: 1,000 threads at roughly a megabyte of stack apiece.
Asynchronous methods address this problem through a pattern in which many concurrent activities are serviced by a small number of pooled threads. This makes it possible to write applications that are both highly concurrent and highly thread-efficient.
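To make the pattern concrete, here is a minimal sketch in Python's asyncio (not the book's own environment) of an echo server multiplexing hundreds of concurrent connections onto a single thread. The connection count is scaled down from 1,000 to stay within typical file-descriptor limits; the principle is identical.

```python
import asyncio

async def handle_client(reader, writer):
    # Each connection is a lightweight coroutine, not a dedicated thread.
    data = await reader.readline()       # yields to other work while waiting on I/O
    writer.write(b"echo: " + data)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # Port 0 asks the OS for any free port.
    server = await asyncio.start_server(handle_client, "127.0.0.1", 0)
    port = server.sockets[0].getsockname()[1]

    async def client(i):
        reader, writer = await asyncio.open_connection("127.0.0.1", port)
        writer.write(f"request {i}\n".encode())
        await writer.drain()
        reply = await reader.readline()
        writer.close()
        await writer.wait_closed()
        return reply

    # 200 concurrent requests, all serviced on this one thread.
    replies = await asyncio.gather(*(client(i) for i in range(200)))
    server.close()
    await server.wait_closed()
    return replies

replies = asyncio.run(main())
```

No thread is ever parked waiting on a socket: whenever a coroutine reaches an `await` on I/O, the event loop switches to another ready connection.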
The problem just described might be insoluble if every thread needed to be busy all of the time. But this is not the case: fetching a web page, for instance, might take several seconds from start to finish (over a potentially slow connection) and yet consume only a fraction of a millisecond of CPU time in total. Processing an HTTP request is not computationally intensive.
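The gap between wall-clock time and CPU time is easy to observe directly. The following sketch stands in for the network wait with `time.sleep` (an assumption made for the illustration; a real socket read would behave the same way): the thread is blocked for a full second yet burns almost no CPU.

```python
import time

start_wall = time.perf_counter()   # wall-clock time
start_cpu = time.process_time()    # CPU time consumed by this process

# Simulate a slow I/O round trip. While blocked here, the thread
# occupies a stack and a scheduler slot but does essentially no work.
time.sleep(1.0)

wall_elapsed = time.perf_counter() - start_wall   # ~1 second
cpu_elapsed = time.process_time() - start_cpu     # a tiny fraction of that
```

Dedicating a whole thread to such a wait pays the full per-thread memory cost for almost no computation in return.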
This means that a thread dedicated to processing a single web request might spend 99 percent of its time blocked, representing an enormous potential saving. The asynchronous method pattern exploits just this potential, allowing a handful ...