RAID Tuning

To get maximum performance out of the newly set-up RAID, I added some udev rules (placed in /etc/udev/rules.d/83-md-tune.rules) to increase caching. The file contains one entry per member disk (sdX) to adjust the read-ahead:

ACTION=="add", KERNEL=="sdX", ATTR{bdi/read_ahead_kb}="6144"
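The udev rule only takes effect when the device is (re)added, so to try the value on a running system you can set it directly. A sketch, assuming blockdev(8) is available; note that blockdev counts in 512-byte sectors, so the 6144 KiB from the rule corresponds to 12288 sectors:

```shell
# Convert the read-ahead value from KiB (as in the udev rule)
# to the 512-byte sectors that blockdev expects: 1 KiB = 2 sectors.
ra_kb=6144
ra_sectors=$((ra_kb * 2))
echo "$ra_sectors"   # sector count to pass to blockdev

# As root, for each member disk (replace sdX with the real device):
# blockdev --setra "$ra_sectors" /dev/sdX
# Verify via sysfs:
# cat /sys/block/sdX/bdi/read_ahead_kb
```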

And one for the mdX device to adjust the read-ahead as well as the size of the stripe cache:

ACTION=="add", KERNEL=="mdX", ATTR{bdi/read_ahead_kb}="24576", ATTR{md/stripe_cache_size}="8192"
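The stripe cache is what costs the extra kernel memory mentioned below: each entry holds one page (4 KiB) per member disk. A rough sizing sketch; the 4-disk member count here is an assumption, substitute your array's:

```shell
# Estimate the stripe cache's memory footprint:
# entries x page size x number of member disks.
stripe_cache_size=8192   # entries, as set in the udev rule
page_kib=4               # one 4 KiB page per disk per entry
ndisks=4                 # hypothetical 4-disk RAID5
mem_mib=$((stripe_cache_size * page_kib * ndisks / 1024))
echo "${mem_mib} MiB"    # 128 MiB for these values

# The value can also be changed at runtime (as root):
# echo 8192 > /sys/block/mdX/md/stripe_cache_size
```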

With these settings, dd yields the following results when reading a large file:

# sync; echo 3 > /proc/sys/vm/drop_caches
$ dd if=largefile of=/dev/null bs=16M
20733648232 bytes (21 GB) copied, 60.4592 s, 343 MB/s

That is nice – and rather pointless, as the clients connect over 1G links and therefore see at most a third of that performance… Note that these caches cost extra kernel memory, so if you are low on RAM you may want to opt for smaller cache sizes instead.

Update: I forgot to mention that I also switched from the deadline I/O scheduler (the default on current Ubuntu systems when installed as servers) to cfq, as the test results from this article suggest it is the optimal scheduler for RAID level 5, whether hardware- or software-controlled.
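The scheduler choice can be made persistent with a udev rule following the same pattern as above; this is a sketch, and the sd[a-z] glob is an assumption – adjust it to match your member disks:

```
ACTION=="add", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="cfq"
```

You can check the active scheduler at runtime with cat /sys/block/sdX/queue/scheduler – the bracketed entry is the one in use.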


  1. You may also want to point out that apparently _some_ RAID controllers may cause corruption of sorts. I’m no kernel nerd so I can’t provide specifics, but this is by no means a saviour for _everyone_.

  2. While the post doesn’t exactly cover “hard” vs. “soft” RAID, you are right – hardware RAID controllers introduce their own set of issues. One of them is reliability: in the many servers I’ve worked with over the years – aside from the most likely candidates, power supplies and drives – the electronics I’ve seen die most often are in fact NICs and RAID controllers. Sometimes it is really hard to find compatible replacements to get the affected RAID back up – something you will never have to worry about with a software RAID, of course.
