3.1 INTRODUCTION

Algorithms and multiprocessing architectures are closely tied together. We cannot think of a parallel algorithm without thinking of the parallel hardware that will support it. Conversely, we cannot think of parallel hardware without thinking of the parallel software that will drive it. Parallelism can be implemented at different levels in a computing system using hardware and software techniques:

1. Data-level parallelism, where we operate simultaneously on multiple bits of a datum or on multiple data items. Examples include bit-parallel addition, multiplication, and division of binary numbers; vector processors, which apply one operation to arrays of data; and systolic arrays, which process several data samples concurrently.

2. Instruction-level parallelism (ILP), where the processor executes more than one instruction at the same time. A common example is instruction pipelining.

3. Thread-level parallelism (TLP). A thread is a portion of a program that shares processor resources with other threads; it is sometimes called a lightweight process. In TLP, multiple software threads execute simultaneously on one processor or on several processors.

4. Process-level parallelism. A process is a program running on the computer. A process reserves its own resources, such as memory space and registers. This is, of course, classic multitasking and time-sharing computing, where several programs run simultaneously on one machine or on several machines.
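To make the data-level case (item 1) concrete, the following Python sketch contrasts bit-serial addition, which produces one sum bit per step, with bit-parallel addition, where the hardware handles all bit positions at once. The function name `ripple_carry_add` is ours, and Python's native `+` merely stands in for the parallel adder hardware.

```python
# Sketch of bit-level data parallelism: a bit-serial adder produces one
# sum bit per step, while bit-parallel hardware handles all positions at
# once (here modeled by Python's native addition).

def ripple_carry_add(a, b, width=8):
    """Add two integers one bit at a time, as a bit-serial circuit would."""
    result, carry = 0, 0
    for i in range(width):
        bit_a = (a >> i) & 1
        bit_b = (b >> i) & 1
        s = bit_a ^ bit_b ^ carry                            # sum bit
        carry = (bit_a & bit_b) | (carry & (bit_a ^ bit_b))  # carry out
        result |= s << i
    return result   # width-bit result; any final carry out is dropped

serial_sum = ripple_carry_add(23, 42)   # eight sequential steps
parallel_sum = (23 + 42) & 0xFF         # one "step" of parallel hardware
```

Both paths compute the same 8-bit result; the difference is that the serial version needs one step per bit, while the parallel version needs only one.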
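For instruction-level parallelism (item 2), the benefit of pipelining can be quantified with a standard cycle-count argument: a k-stage pipeline finishes n instructions in k + (n - 1) cycles instead of n * k, because up to k instructions are in flight at once. A minimal sketch, with illustrative values of n and k chosen by us:

```python
# Why instruction pipelining is a form of ILP: compare total cycles for
# n instructions on a k-stage datapath with and without pipelining.

def cycles_unpipelined(n, k):
    return n * k            # each instruction occupies all k stages alone

def cycles_pipelined(n, k):
    return k + (n - 1)      # fill the pipeline once, then one result per cycle

n, k = 100, 5
speedup = cycles_unpipelined(n, k) / cycles_pipelined(n, k)
```

For large n the speedup approaches k, the pipeline depth; hazards and stalls reduce this in real processors.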
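Thread-level parallelism (item 3) can be sketched with Python's standard `threading` module. The key property from the definition above is that threads share the process's resources, so they must coordinate when updating shared state; note that in CPython the global interpreter lock serializes bytecode execution, so this sketch shows the threading model rather than a true speedup.

```python
# Thread-level parallelism: several threads share one address space,
# so updates to shared state are guarded by a lock.
import threading

counter = 0
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:          # all threads share 'counter'; the lock serializes updates
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()                # after joining, counter == 4 * 1000
```

Without the lock, the read-modify-write on `counter` could interleave across threads and lose updates.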
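Process-level parallelism (item 4) differs in that each process is a full program with its own memory space, so results must cross process boundaries explicitly. A minimal sketch using the standard `subprocess` module to launch independent interpreter processes and collect their output:

```python
# Process-level parallelism: each child is an independent OS process with
# its own address space; the parent collects results via pipes, not
# shared variables.
import subprocess
import sys

# Launch three children concurrently; each computes a square in isolation.
children = [
    subprocess.Popen([sys.executable, "-c", f"print({n} * {n})"],
                     stdout=subprocess.PIPE, text=True)
    for n in (2, 3, 4)
]
results = [int(p.communicate()[0]) for p in children]
```

Because the children share no memory with the parent, no locking is needed; the cost is the heavier overhead of creating processes and communicating through the operating system.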
