In order to get maximum performance out of the newly set up RAID, I added some udev rules (placed in /etc/udev/rules.d/83-md-tune.rules) to increase caching. The file has one entry for each of the involved disks (sdX) to adjust the read-ahead:
ACTION=="add", KERNEL=="sdX", ATTR{bdi/read_ahead_kb}="6144"
And one for the mdX device to adjust the read-ahead as well as the size of the stripe cache:
ACTION=="add", KERNEL=="mdX", ATTR{bdi/read_ahead_kb}="24576", ATTR{md/stripe_cache_size}="8192"
With these settings, dd yields the following result when copying a large file:
$ sync; echo 3 > /proc/sys/vm/drop_caches
$ dd if=largefile of=/dev/null bs=16M
20733648232 bytes (21 GB) copied, 60.4592 s, 343 MB/s
Which is nice – and rather pointless, as the clients connect via 1G links and therefore see at most about a third of that performance… Note that the caches cost extra kernel memory, so if you're low on RAM you might want to opt for smaller cache sizes instead.
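To get a feel for how much memory the stripe cache actually costs: according to the kernel's md documentation it is roughly page_size * nr_disks * stripe_cache_size. For a hypothetical four-disk array with 4 KiB pages and the stripe_cache_size of 8192 from above, that works out to:
$ echo $((4096 * 4 * 8192 / 1024 / 1024)) MiB
128 MiB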
Update: I forgot to mention that I also switched from the deadline I/O scheduler (the default on current Ubuntu systems when installed as servers) to cfq, as the test results from this article suggest that cfq is the optimal scheduler for RAID level 5, regardless of whether the RAID is hardware- or software-controlled.
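To make the scheduler change persistent across reboots as well, one option (a sketch in the same spirit as the rules above, with sdX again standing in for the real disks) would be an additional line in the same udev file:
ACTION=="add", KERNEL=="sdX", ATTR{queue/scheduler}="cfq"
At runtime the scheduler can be switched on the fly with:
$ echo cfq > /sys/block/sdX/queue/scheduler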