Get the best possible performance from your IDE hardware
By default, most Linux distributions use the default kernel parameters when accessing your IDE controller and drives. These settings are very conservative and are designed to protect your data at all costs. But as many have come to discover, safe almost never equals fast. And with large volume data processing applications, there is no such thing as "fast enough."
If you want to get the most performance out of your IDE hardware, take a look at the hdparm(8) command. It will not only tell you how your drives are currently performing, but will let you tweak them to your heart's content.
It is worth pointing out that under some circumstances, these commands CAN CAUSE UNEXPECTED DATA CORRUPTION! Use them at your own risk! At the very least, back up your box and bring it down to single-user mode before proceeding.
Let's begin. Now that we're in single-user mode (which we discussed in [Hack #2]), let's find out how well the primary drive is currently performing:
# hdparm -Tt /dev/hda
You should see something like:
/dev/hda:
 Timing buffer-cache reads:   128 MB in  1.34 seconds = 95.52 MB/sec
 Timing buffered disk reads:   64 MB in 17.86 seconds =  3.58 MB/sec
What does this tell us? The -T flag tests the cache system (i.e., the memory, CPU, and buffer cache). The -t flag reports stats on the disk in question, reading data not in the cache. Run together a couple of times in a row in single-user mode, the two will give you an idea of the performance of your disk I/O system. (These are actual numbers from a PII/350 with 128 MB of RAM and an EIDE hard drive; your numbers will vary.)
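The interesting numbers are buried in hdparm's prose, so a short awk filter can pull them out for logging or comparison across runs. This is a sketch run against canned sample data; the heredoc stands in for a real `hdparm -tT /dev/hda` run, so it works without root or an IDE drive:

```shell
# Extract the MB/sec figures from hdparm -tT style output.
# The heredoc is canned sample data standing in for a real
# `hdparm -tT /dev/hda` run on the PII/350 described above.
awk -F'= *' '/Timing/ { print $2 }' <<'EOF'
/dev/hda:
 Timing buffer-cache reads:   128 MB in  1.34 seconds = 95.52 MB/sec
 Timing buffered disk reads:   64 MB in 17.86 seconds =  3.58 MB/sec
EOF
```

To measure a real drive, replace the heredoc with `hdparm -tT /dev/hda | awk ...` and average a few runs.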
But even allowing for variation, 3.58 MB/sec is pathetic for this hardware. I thought the ad for the hard drive said something about 66 MB per second! What gives?
Let's find out more about how Linux is addressing this drive:
# hdparm /dev/hda

/dev/hda:
 multcount    =  0 (off)
 I/O support  =  0 (default 16-bit)
 unmaskirq    =  0 (off)
 using_dma    =  0 (off)
 keepsettings =  0 (off)
 nowerr       =  0 (off)
 readonly     =  0 (off)
 readahead    =  8 (on)
 geometry     = 1870/255/63, sectors = 30043440, start = 0
These are the defaults. Nice, safe, but not necessarily optimal. What's all this about 16-bit mode? I thought that went out with the 386!
These settings are virtually guaranteed to work on any hardware you might throw at it. But since we know we're throwing something more than a dusty, 8-year-old, 16-bit multi-IO card at it, let's talk about the interesting options:
multcount: When this feature is enabled, it typically reduces operating system overhead for disk I/O by 30-50%. On many systems, it also provides increased data throughput of anywhere from 5% to 50%.
I/O support: This is a big one. This flag controls how data is passed from the PCI bus to the controller. Almost all modern controller chipsets support mode 3, or 32-bit mode w/sync. Some even support 32-bit async. Turning this on will almost certainly double your throughput (see below).
unmaskirq: Turning this on will allow Linux to unmask other interrupts while processing a disk interrupt. What does that mean? It lets Linux attend to other interrupt-related tasks (i.e., network traffic) while waiting for your disk to return with the data it asked for. It should improve overall system response time, but be warned: not all hardware configurations will be able to handle it. See the manpage.
using_dma: DMA can be a tricky business. If you can get your controller and drive using a DMA mode, do it. However, I have seen more than one machine hang while playing with this option. Again, see the manpage.
Let's try out some turbo settings:
# hdparm -c3 -m16 /dev/hda

/dev/hda:
 setting 32-bit I/O support flag to 3
 setting multcount to 16
 multcount    = 16 (on)
 I/O support  =  3 (32-bit w/sync)
Great! 32-bit sounds nice. And some multi-reads might work. Let's re-run the benchmark:
# hdparm -tT /dev/hda

/dev/hda:
 Timing buffer-cache reads:   128 MB in  1.41 seconds = 90.78 MB/sec
 Timing buffered disk reads:   64 MB in  9.84 seconds =  6.50 MB/sec
Hmm, almost double the disk throughput without really trying! Incredible.
But wait, there's more: we're still not unmasking interrupts, using DMA, or even using a decent PIO mode! Of course, enabling these gets riskier. The manpage mentions trying multiword DMA mode 2, so let's try this:
# hdparm -X34 -d1 -u1 /dev/hda
Unfortunately this seems to be unsupported on this particular box (it hung like an NT box running a Java application). So, after rebooting it (again into single-user mode), I went with this:
# hdparm -X66 -d1 -u1 -m16 -c3 /dev/hda

/dev/hda:
 setting 32-bit I/O support flag to 3
 setting multcount to 16
 setting unmaskirq to 1 (on)
 setting using_dma to 1 (on)
 setting xfermode to 66 (UltraDMA mode2)
 multcount    = 16 (on)
 I/O support  =  3 (32-bit w/sync)
 unmaskirq    =  1 (on)
 using_dma    =  1 (on)
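The -X numbers aren't arbitrary: per the hdparm manpage, the transfer-mode value is 8 plus the mode number for PIO modes, 32 plus the mode number for multiword DMA, and 64 plus the mode number for UltraDMA. A quick shell check of the two values used here:

```shell
# hdparm -X transfer-mode encoding (from the hdparm manpage):
#   PIO mode N           -> 8  + N
#   multiword DMA mode N -> 32 + N
#   UltraDMA mode N      -> 64 + N
mode=2
echo "mdma${mode} -> -X$((32 + mode))"   # the -X34 that hung this box
echo "udma${mode} -> -X$((64 + mode))"   # the -X66 that worked
```

So if your drive and controller support, say, UltraDMA mode 4, the flag would be -X68.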
And then checked:
# hdparm -tT /dev/hda

/dev/hda:
 Timing buffer-cache reads:   128 MB in  1.43 seconds = 89.51 MB/sec
 Timing buffered disk reads:   64 MB in  3.18 seconds = 20.13 MB/sec
20.13 MB/sec. A far cry from the minuscule 3.58 MB/sec with which we started.
Did you notice how we specified the -m16 and -c3 switches again? That's because the drive doesn't remember your hdparm settings between reboots. Be sure to add the above line to your /etc/rc.d/* scripts once you're sure the system is stable (and preferably after your fsck runs; running an extensive filesystem check with your controller in a flaky mode may be a good way to generate vast quantities of entropy, but it's no way to administer a system. At least not with a straight face).
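One way to make the settings persistent is a few lines at the end of a boot script such as /etc/rc.d/rc.local. The exact script name varies by distribution, so treat this as a sketch: substitute your own device and whichever flags proved stable on your hardware.

```shell
#!/bin/sh
# Reapply hdparm tuning at boot -- the drive forgets these settings
# on reset.  Adjust the device name and flags for your own hardware,
# and only add this once you're sure the combination is stable.
if [ -x /sbin/hdparm ]; then
    /sbin/hdparm -X66 -d1 -u1 -m16 -c3 /dev/hda
fi
```

The -x test keeps the script from complaining on a box where hdparm isn't installed.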
If you can't find hdparm on your system (usually in /sbin or /usr/sbin), get it from the source at http://metalab.unc.edu/pub/Linux/system/hardware/.