2. Hardware Architectures for Real-time Processing

2.1. History of image processing hardware platforms

In the 1960s, NASA scientists began using digital cameras and performing digital image processing on workstations with continually increasing computational power. When an image processing system required real-time throughput, multiple boards with multiple processors working in parallel were used. In the 1980s, digital signal processors (DSPs) were created to accelerate the computation required by signal processing algorithms. DSPs helped to usher in the age of portable embedded computing.

The mid-1980s also saw the introduction of programmable logic devices such as the field programmable gate array (FPGA), a technology that aimed to combine the flexibility of software, through programmable logic, with the speed of dedicated hardware such as application-specific integrated circuits (ASICs). In the 1990s, both DSP performance, through increased use of parallel processing techniques, and FPGA performance continued to grow to meet the needs of multimedia devices [KEH 06]. The concept of the system-on-chip (SoC) was introduced during this period; it sought to bring all of the processing power needed for an entire system onto a single chip. Alongside these developments, the massively parallel computation power of graphics processing units (GPUs) came to be used for compute-intensive image processing algorithms; GPUs are now also deployed as embedded devices.

In the following ...
