2.1 Parallel Programming

Parallel programming is based on dividing a processing task among multiple processors or processor cores that operate simultaneously. A parallel program is thus defined as the specification of a set of processes that execute simultaneously and communicate with one another to achieve a common objective. The expected result is a faster computation than execution on a single-processor/core system can provide. The main advantage of parallel programming is its ability to handle tasks of a scale that would not be realistic or cost-effective on other systems.
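As a concrete illustration of this definition, the following minimal sketch (assuming C++17 and std::thread; the input data, its size, and the thread count are illustrative choices, not taken from this book) divides a single task, summing an array, among several threads that execute simultaneously and then combine their partial results:

// Minimal sketch: one task (summing an array) divided among several
// threads, each working on its own slice of the data.
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    const std::size_t n_threads = 4;               // illustrative count
    std::vector<int> data(1'000'000, 1);           // the shared input
    std::vector<long long> partial(n_threads, 0);  // one slot per process

    std::vector<std::thread> workers;
    const std::size_t chunk = data.size() / n_threads;
    for (std::size_t t = 0; t < n_threads; ++t) {
        // Each thread receives a contiguous slice of the data.
        auto first = data.begin() + t * chunk;
        auto last  = (t == n_threads - 1) ? data.end() : first + chunk;
        workers.emplace_back([first, last, t, &partial] {
            partial[t] = std::accumulate(first, last, 0LL);
        });
    }
    for (auto& w : workers) w.join();  // wait for all processes to finish

    // The final combination of partial results is the point at which
    // the processes "communicate" toward the common objective.
    long long total = std::accumulate(partial.begin(), partial.end(), 0LL);
    std::cout << "total = " << total << '\n';  // expected: 1000000
    return 0;
}

The two ingredients named in the definition above are both visible here: simultaneous execution over per-thread slices of the data, and a communication step that combines the partial results into the common objective.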

In theory, parallel programming should simply involve applying multiple processes to solve a single problem. In practice, however, it is often difficult and costly, since it demands greater effort from software designers, who must develop new ways of understanding and programming to suit a parallel execution environment. Moreover, the techniques used on single-processor/core systems for finding and correcting defects, and for improving performance, do not apply directly to parallel programming. Parallel execution environments, such as a multi-core processor, a network of workstations, a grid of personal computers, or a high-performance parallel processing system, can be unstable and unpredictable, or simply non-deterministic. It is not uncommon for parallel programs to yield incorrect results or execute more slowly than their sequential counterparts even after months of effort.
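This non-determinism can be observed even in very small programs. In the following sketch (again assuming C++17; the iteration count is arbitrary), two threads update a shared counter with a load followed by a store. Each individual operation is atomic, but the load-store pair is not one atomic step, so updates are lost unpredictably and the printed value typically differs from run to run:

// Sketch of non-deterministic behavior: two threads each perform
// 100,000 read-modify-write updates on a shared counter. The load and
// the store are individually atomic, but the pair is not, so updates
// are lost and the result varies between runs.
#include <atomic>
#include <iostream>
#include <thread>

int main() {
    std::atomic<int> counter{0};
    auto work = [&counter] {
        for (int i = 0; i < 100000; ++i) {
            int seen = counter.load();  // read ...
            counter.store(seen + 1);    // ... then write: not one atomic step
        }
    };
    std::thread a(work), b(work);
    a.join();
    b.join();
    std::cout << "counter = " << counter.load() << '\n';
    return 0;
}

A sequential version of this program always prints 200000; the parallel version rarely does, and it may print a different value on every run. This is precisely why single-processor/core debugging techniques fall short: re-running the program changes the interleaving of the threads, and with it the outcome.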
