Chapter 10. Disk Array Performance

The rapid pace of improvements in the performance and density of semiconductor technology widens the gap between the performance of processors and that of secondary storage devices. According to the oft-quoted Amdahl's law, the performance of a computer system is limited by the speed of its slowest serial component. In a modern computer system, that component is often the physical disk. Moreover, as processors and applications grow faster, they place ever heavier demands on the secondary storage subsystem.

These concerns have led academic and industry researchers to focus on the performance of the disk I/O subsystem. Parallel processing proved successful in advancing the performance of computer systems by using multiple processors concurrently to solve a single computing problem. Applying the same principle to the disk I/O subsystem gave birth to redundant arrays of inexpensive disks (RAID). In general terms, disk striping refers to partitioning a request into smaller requests that are serviced in parallel by the multiple disks of an array, resulting in faster response times. The partitioning of the request into pieces is done by either the operating system or the controller hardware, depending on the specific configuration; in both cases, it is transparent to the user. Because striping data across many disks increases the chance that the failure of any single disk will cause data loss, the disk industry has devised various ways to add redundant data to the data recording scheme to supply fault tolerance. Many of today's RAID ...
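To make the striping idea concrete, the following is a minimal sketch of how a logical block address might be mapped onto a disk and a block within that disk in a simple non-redundant (RAID 0-style) layout. The function name, parameters, and fixed stripe-unit arithmetic are illustrative assumptions, not the scheme of any particular controller or operating system.

```python
def locate_block(lba, num_disks, stripe_unit_blocks):
    """Map a logical block address to (disk index, block on that disk).

    Illustrative RAID 0-style layout: consecutive stripe units of
    `stripe_unit_blocks` blocks are placed on successive disks in
    round-robin order.
    """
    stripe_number = lba // stripe_unit_blocks      # which stripe unit overall
    disk = stripe_number % num_disks               # round-robin disk choice
    offset_in_unit = lba % stripe_unit_blocks      # position inside the unit
    units_on_disk = stripe_number // num_disks     # full units already on this disk
    block_on_disk = units_on_disk * stripe_unit_blocks + offset_in_unit
    return disk, block_on_disk

# With 4 disks and 8-block stripe units, blocks 0-7 land on disk 0,
# blocks 8-15 on disk 1, and so on; a large request therefore spans
# several disks and can be serviced in parallel.
```

Because adjacent stripe units sit on different disks, a request larger than one stripe unit can be split into sub-requests that the disks service concurrently, which is the source of the response-time improvement the text describes.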
