Gentoo Linux RAID Benchmark Timing Study

R. J. Brown, Elijah Laboratories Inc.
rj@elilabs.com
08-Oct-2004

Introduction

A study was conducted on a hardware platform intended to be replicated and used for certain disk-intensive applications (network monitoring and network filesystem backup) to determine the performance characteristics of various RAID configurations running under the Gentoo distribution of the Linux operating system. For the targeted applications, disk write performance is considered the limiting factor. The goal was to provide information to help decide which kernel version and which RAID configuration was best suited to supporting these tasks.

It is desirable for the targeted applications to have the redundancy provided by RAID. In addition, the mirroring capability of RAID-1 makes it possible to instantly take a complete disk drive, containing the entire data of interest, off-line without affecting the ability of the application to continue operating. This is very desirable if a system or network is compromised by an intrusion exploit or otherwise exhibits unusual behavior. In particular, being able to instantly preserve the state of the system has applications in post-event forensics and in gathering evidence for legal prosecution.
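
For illustration, with the raidtools of that era a mirror member can be detached on demand along the following lines; the device names are illustrative, and mdadm offers equivalent operations:

raidsetfaulty /dev/md0 /dev/hde1   # mark one mirror member as failed
raidhotremove /dev/md0 /dev/hde1   # detach it; the array keeps running degraded

The detached drive then holds a complete, self-consistent copy of the data as of the moment it was removed.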

Hardware Environment

The hardware used was a mid-level PC motherboard, a K7 Triton GA-7S748, fitted with an AMD Athlon XP 2000+ CPU and 1 GB of DDR DRAM. The disk drives used in the RAID were three (3) WDC WD2500JB-00GVA0 (250 GB) ATA disk drives.

Test Scenario

After some initial trial runs, it was determined that the default reconstruction speed limit parameters in /proc/sys/dev/raid were unsuitable for the targeted applications. These parameters were modified by the following commands:

echo 20000 > /proc/sys/dev/raid/speed_limit_max
(2.4.26-gentoo-r9 RAID-5 only; otherwise only the reconstruction daemon runs)

echo 5000 > /proc/sys/dev/raid/speed_limit_min
(2.6.8-gentoo-r3 RAID-5 only, so reconstruction will finish in 6 hrs)

echo 10000 > /proc/sys/dev/raid/speed_limit_min
(2.6.8-gentoo-r3 RAID-1 only, so reconstruction will finish in 6 hrs)
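
Reconstruction progress under these limits can be watched in /proc/mdstat:

cat /proc/mdstat   # shows per-array resync progress, speed, and estimated time to finish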

The actual read and write tests were performed by the following commands:

Write test: dd if=/dev/zero count=<sectors> of=/dev/<drive>

Read test: dd of=/dev/null count=<sectors> if=/dev/<drive>

The execution of these commands was timed by the time command.
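
The tabulated seconds and MB/sec figures below imply a transfer of roughly 10,000,000 sectors (about 5.12 GB at dd's default 512-byte block size); that count is inferred from the results, not taken from the original notes. A concrete timed run would therefore have looked something like:

time dd if=/dev/zero of=/dev/md0 count=10000000   # write test: ~5.12 GB of zeros to the raw device
time dd if=/dev/md0 of=/dev/null count=10000000   # read test over the same span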

Results

All tests were performed directly on /dev/md0 as a raw device, with no filesystem present on that device; these results therefore represent the performance of the RAID only, not that of any particular filesystem. Any filesystem overhead will add to these times accordingly.

This study clearly shows the speed advantage of the 2.6.8 kernel over the 2.4.26 kernel, especially for disk write operations. In fact, the 2.6.8 kernel proved so superior that the 2.4.26 kernel was not even tested with a RAID-1 configuration. The only situation in which the performance of the 2.4.26 kernel was superior was in disk read operations. Since the limiting factor in the targeted applications is disk write speed, the faster read speed of the 2.4.26 kernel was not considered to be a significant advantage. For certain other applications, this better read speed could be a consideration that might influence a decision to use the 2.4.26 kernel instead of the 2.6.8 kernel.

As mentioned above, the mirroring of RAID-1 is desirable in the targeted applications. It was expected that the write speed of RAID-1 might be significantly slower than that of RAID-5, but this proved not to be the case. Furthermore, the CPU idle time observed during the write tests showed lower CPU utilization for RAID-1 writes than for RAID-5 writes, as RAID-1 requires no parity calculation. Even with 3 active mirror drives, the performance of RAID-1 was superior. Likewise, the performance of RAID-1 was superior when the array was operating in a degraded mode with only one drive, and during reconstruction after a degradation had been remediated by the addition of a fresh drive.

Tabulated Data

2.4.26-gentoo-r9

                     Write                           Read
                     Time       Secs     MB/sec      Time      Secs     MB/sec
/dev/hde             07:29.77    449.77  11.38       01:24.48   84.48    60.61
/dev/hdg             07:36.83    456.83  11.21       01:20.91   80.91    63.28
/dev/hdi             07:32.75    452.75  11.31       01:20.86   80.86    63.32

hde & hdg            09:56.73    596.73  17.16       01:28.16   88.16   116.15
hde & hdi            09:56.76    596.76  17.16       01:30.67   90.67   112.93
hdg & hdi            09:57.03    597.03  17.15       01:30.90   90.90   112.65

hde & hdg & hdi      15:04.09    904.09  16.99       02:12.59  132.59   115.84

/dev/md0 (RAID-5)    08:26.99    506.99  10.10       01:01.57   61.57    83.15
  Failed drive       07:19.15    439.15  11.66       01:26.37   86.37    59.28
  Reconstructing     22:27.56   1347.56   3.80       03:35.83  215.83    23.72

2.6.8-gentoo-r3

                     Write                           Read
                     Time       Secs     MB/sec      Time      Secs     MB/sec
/dev/hde             04:34.78    274.78  18.633      01:24.64   84.64    60.49
/dev/hdg             04:30.93    270.93  18.898      01:21.04   81.04    63.18
/dev/hdi             04:31.14    271.14  18.884      01:20.94   80.94    63.26

hde & hdg            05:13.08    313.08  32.707      01:26.86   86.86   117.89
hde & hdi            05:27.28    327.28  31.288      01:30.74   90.74   112.84
hdg & hdi            05:23.42    323.42  31.661      01:31.00   91.01   112.52

hde & hdg & hdi      07:21.51    441.51  34.790      02:12.02  132.02   116.34

/dev/md0 (RAID-5)    05:40.78    340.78  15.024      01:05.99   65.99    77.58
  Failed drive       07:33.83    453.83  11.282      01:36.49   96.49    53.06
  Reconstructing     08:59.93    539.93   9.483      01:53.34  113.34    45.17

/dev/md0 (RAID-1)    05:02.77    302.77  16.910      01:24.76   84.76    60.40
  Failed drive       04:55.41    295.41  17.332      01:21.24   81.24    63.02
  Reconstructing     06:51.13    411.13  12.454      01:33.39   93.39    54.83
  3 active drives    05:24.07    324.07  15.799      01:24.75   84.75    60.41

Addendum

The benchmark testing reported in this study was continued after the above report was written. As it turned out, it was necessary, for several reasons, to switch to the 2.6.7 kernel using the hardened-dev-sources: EVMS did not work properly under 2.6.8, and neither did cdrecord when burning DVDs. After the switch to 2.6.7, the benchmarks were re-run on that kernel, and the various filesystems were then also benchmarked while running RAID-1 in partitions rather than on the whole disk.

The platform being developed needed to run RAID-1 on 3 separate disks, and needed to run it in partitions, to allow for the possibility that a future replacement drive might not be exactly the same size as the drives the array was originally constructed with. It also needed to run with the entire RAID-1 encrypted, so that when a drive was removed and kept as a backup copy, theft of the drive would not compromise the data contained on it. To this end, timing studies were conducted using encryption on RAID-1 running in partitions on 3 separate drives under Reiserfs, as that seemed like the optimal filesystem for this project, given the results that had already been obtained.
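
A minimal sketch of that kind of stack, assuming cryptoloop tooling of that era (exact cipher names and key-size syntax varied between losetup versions, and the mount point is illustrative):

losetup -e aes /dev/loop0 /dev/md0   # bind an AES cryptoloop over the RAID-1 array; prompts for a passphrase
mkreiserfs /dev/loop0                # build the Reiserfs filesystem on the encrypted loop device
mount /dev/loop0 /raid               # everything written through the mount lands encrypted on all 3 mirrors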

Finally, studies were run to determine the effect of the chunk-size parameter in the /etc/raidtab file (an example raidtab appears below). As it turned out, that parameter seems to have little effect on the speed of the RAID-1 pseudo-device.
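
For reference, a raidtab along these lines would describe such a 3-way RAID-1 in partitions; the partition names and chunk size are illustrative:

raiddev /dev/md0
        raid-level              1
        nr-raid-disks           3
        chunk-size              32
        persistent-superblock   1
        device                  /dev/hde1
        raid-disk               0
        device                  /dev/hdg1
        raid-disk               1
        device                  /dev/hdi1
        raid-disk               2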

So here are the additional results:

2.6.7-gentoo-r3 (hardened-dev-sources)

                     Write                            Read
                     Time       Secs      MB/sec      Time      Secs      MB/sec
/dev/hda             n/a            n/a   n/a         01:45.52  105.524    48.520
/dev/hde             04:07.01   247.008   20.728      01:24.35   84.351    60.699
/dev/hdg             04:02.18   242.179   21.141      01:20.66   80.657    63.479
/dev/hdi             04:09.97   249.967   20.483      01:20.92   80.918    63.274

hde & hdg            04:52.04   292.041   35.064      01:29.29   89.286   114.688
hde & hdi            05:17.71   317.711   32.231      01:32.48   92.476   110.731
hdg & hdi            05:18.93   318.933   32.107      01:32.49   92.491   110.713

hde & hdg & hdi      07:29.26   449.258   34.190      02:14.53  134.526   114.179

/dev/md0 (RAID-1)    04:41.12   281.117   18.213      01:21.07   81.067    63.158
  Failed drive       04:20.69   260.691   19.640      01:21.55   81.552    62.782
  Reconstructing     04:39.20   279.203   18.338      01:25.92   85.923    59.588
  3 active drives    05:33.87   333.873   15.335      01:21.37   81.369    62.923

RAID-1 with 3 active drives and a filesystem

Write test: (cd /; tar clf - .) | (cd /raid; tar xpf -)
Read test:  (cd /raid; tar clf /dev/null .)

                     Write                             Read
                     Mbytes     Secs      MB/sec       Mbytes     Secs      MB/sec
/dev/evms/root       n/a        n/a       n/a          3,155.264  230.105   13.712
Ext2                 3,633.296  279.169   11.302       3,633.296  103.920   30.362
Ext3                 3,666.104  345.616    9.129       3,666.104  109.318   28.863
Reiserfs             3,154.052  296.603   10.638       3,154.052  147.517   21.389
JFS                  3,715.020  487.808    6.468       3,715.020  436.443    7.229
XFS                  3,594.028  374.902    8.416       3,594.028  440.446    7.164
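
These tar pipelines were presumably wrapped in the time command as well; one way to time the whole write pipeline as a single unit is:

time sh -c '(cd /; tar clf - .) | (cd /raid; tar xpf -)'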

RAID-1 with 3 active partitions on 3 separate drives and Reiserfs

                     Write                             Read
                     Mbytes     Secs      MB/sec       Mbytes     Secs      MB/sec
Production           3,160.948  410.171    7.706       3,160.948  167.462   18.876

Crypto tests with Reiserfs

                     Write                             Read
                     Mbytes     Secs      MB/sec       Mbytes     Secs      MB/sec
Cryptoloop AES-256   3,166.320  455.894    6.945       3,166.320  261.052   12.129
Non-crypto           3,167.320  279.456   11.334       3,167.320  168.394   18.809
Crypto RAID          3,163.276  464.436    6.811       3,163.276  274.186   11.537
Non-crypto RAID      3,163.280  299.248   10.571       3,163.280  152.643   20.723

Chunk size benchmarks

RAID-1, 3 drives, 3 partitions, 5 GB each, no crypto, Reiserfs

                  Write                               Read
Chunk size (KB)   Mbytes      Time       MB/sec       Mbytes      Time       MB/sec
   4              3,163.340   4:59.134   10.575       3,163.340   2:28.706   21.272
   8              3,163.340   5:05.853   10.343       3,163.340   2:28.004   21.373
  16              3,163.340   5:08.125   10.266       3,163.340   2:31.083   20.938
  32              3,163.340   5:03.304   10.430       3,163.340   2:29.578   21.148
  64              3,163.340   5:08.415   10.257       3,163.340   2:30.549   21.012
 128              3,163.340   5:03.502   10.423       3,163.340   2:30.267   21.051
 256              3,163.340   5:04.218   10.398       3,163.340   2:28.554   21.294
 512              3,163.340   5:03.955   10.407       3,163.340   2:29.056   21.222
1024              3,163.340   5:04.293   10.396       3,163.340   2:30.934   20.958
2048              3,163.340   5:07.312   10.294       3,163.340   2:28.657   21.279
4096              3,163.340   5:06.345   10.326       3,163.340   2:28.825   21.255

RAID-1, 3 drives, 3 partitions, 5 GB each, AES-256 cryptoloop, Reiserfs

                  Write                               Read
Chunk size (KB)   Mbytes      Time       MB/sec       Mbytes      Time       MB/sec
   4              3,163.340   7:48.970    6.745       3,163.340   4:38.924   11.341
   8              3,163.340   7:47.595    6.765       3,163.340   4:38.833   11.345
  16              3,163.340   7:51.299    6.712       3,163.340   4:41.183   11.250
  32              3,163.340   7:47.582    6.765       3,163.340   4:43.161   11.172
  64              3,163.340   7:46.750    6.777       3,163.340   4:39.533   11.317
 128              3,163.340   7:43.660    6.823       3,163.340   4:38.319   11.366
 256              3,163.340   8:39.772    6.086       3,163.340   4:40.467   11.279
 512              3,163.340   7:47.528    6.766       3,163.340   4:38.840   11.345
1024              3,163.340   7:48.347    6.754       3,163.340   4:41.951   11.219
2048              3,163.340   7:49.618    6.736       3,163.340   4:39.962   11.299
4096              3,163.340   7:46.905    6.775       3,163.340   4:41.578   11.234


© 2004 Elijah Laboratories Inc.
ALL RIGHTS RESERVED WORLDWIDE.

Web page design by Robert J. Brown.
Last modified: Tue Oct 19 03:42:45 EDT 2004
