For some reason the support for init.d and thereby userinit.d has been removed from CyanogenMod starting with CM12. Unfortunately it is not easy to re-activate the functionality, even more so if you want the change to survive future CM updates.

So I decided to create a trivial app that will simply execute run-parts on the /data/local/userinit.d directory when the phone completes booting to get the good old userinit.d back. To clone the git repository run:

git clone https://lisas.de/~alex/runuserinit.git

Find more details on the repository contents here.

After installation you will have to start RunUserinit once and hit the button. When asked whether RunUserinit should be allowed to use root privileges, accept that and make the setting permanent. Finally sshd runs automatically again whenever my phone requires a reboot…
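For reference, what the app effectively does once the boot-completed broadcast arrives is little more than running the following (a sketch, the exact invocation is in the repository):

su -c 'run-parts /data/local/userinit.d'

A script dropped into that directory could then look like this (a purely hypothetical example; the sshd binary and its location depend on your ROM):

#!/system/bin/sh
# /data/local/userinit.d/99sshd: start the ssh daemon once the system is up
/system/bin/sshd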

In order to get maximum performance with the newly set up RAID, I added some udev rules (by placing them in /etc/udev/rules.d/83-md-tune.rules) to increase caching. The file has one entry for each of the involved disks (sdX) to adjust the read-ahead:

ACTION=="add", KERNEL=="sdX", ATTR{bdi/read_ahead_kb}="6144"

And one for the mdX device to adjust the read-ahead as well as the size of the stripe cache:

ACTION=="add", KERNEL=="mdX", ATTR{bdi/read_ahead_kb}="24576", ATTR{md/stripe_cache_size}="8192"

With these settings dd yields the following results when copying a large file:

$ sync; echo 3 > /proc/sys/vm/drop_caches
$ dd if=largefile of=/dev/null bs=16M
20733648232 bytes (21 GB) copied, 60.4592 s, 343 MB/s

Which is nice – and rather pointless as the clients connect with 1G links so they see only one third of that performance at best… Note that the caches will cost extra kernel memory, so if you’re low on RAM you might want to opt for lower cache sizes instead.
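As a rough estimate of that cost: the kernel's md documentation states that the stripe cache consumes page_size × nr_disks × stripe_cache_size bytes, so assuming 4 KiB pages and the four disks of the array described below, the setting above works out to about

4096 B × 4 × 8192 = 128 MiB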

Update: I forgot to mention that I also switched from the deadline I/O scheduler (the default for current Ubuntu systems when installed as servers) to cfq, as the test results from this article suggest that it is the optimal scheduler for RAID level 5, no matter whether the RAID is HW or SW controlled.
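For reference, the scheduler can be switched per disk at runtime with

echo cfq > /sys/block/sdX/queue/scheduler

or persisted by adding a matching rule to the udev file above (sdX again being a placeholder):

ACTION=="add", KERNEL=="sdX", ATTR{queue/scheduler}="cfq"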

With my Linux SW RAID screaming “grow me!” for quite a while now, I finally brought myself to replace the old 2TB disks with new 6TB ones (RAID 5 with 4 disks). While such a disk-upgrade has to be performed regularly, the frequency is so low that it is hard to remember the details when you finally get to do it again. Unfortunately the “official” method (replace & resync disk-by-disk and then grow the md and the filesystem) as suggested in the Linux RAID Wiki has a few drawbacks:

  • you have no backup in case of failures during the 4 RAID rebuilds
  • you continue to operate on the old filesystem; in my case, where the RAID had been full for quite a while, that means inheriting quite a bit of unnecessary fragmentation – and you can neither switch nor re-tune the filesystem, which could make sense for a significantly bigger RAID

Luckily Adrian reminded me of mdadm’s missing parameter, so I could perform this alternate RAID upgrade which I’ll detail below (should come in handy for my next upgrade).

  1. After running tests on all new disks (a full write and a long S.M.A.R.T. self test) I copied all data from the RAID to one of the new disks. As the server was still in operation I opted for the tar|tar approach with a final rsync to complete the replication (for details see this stackexchange thread; a rough sketch follows after this list). Note that if you don’t have a spare port you will have to fail one of your existing drives and use its port for the new disk instead.
  2. Now remount the filesystem so that the server uses the new copy (will require stopping and restarting services that were using the relevant mount point).
  3. Remove the old drives, replace them with the remaining new disks and create a new RAID, but do not include the drive onto which you copied the contents of the RAID – note that you will have to replace the placeholder device names with the real identifiers; as you can see, /dev/sd4, the one holding the currently mounted copy of the data, is left out and the keyword missing takes its place:
    mdadm --create /dev/mdx --chunk=256 --level=5 --raid-devices=4 /dev/sd1 /dev/sd2 /dev/sd3 missing
    I went with full drives instead of partitions and also with a chunk size smaller than the default of 512k, as I wanted to use XFS and its current implementation doesn’t support chunk sizes bigger than 256k (there seems to be a dispute on the usefulness of the new 512k default in mdadm).
  4. Update the DEVICE information in mdadm.conf if necessary, remove the old ARRAY entry and append one for the new array with this command:
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
  5. Now create the filesystem on the RAID device; to match my chunk size (su=256k) and the three data disks of the 4-disk RAID 5 (sw=3) I used this command:
    mkfs -t xfs -d su=256k -d sw=3 /dev/mdx
  6. Mount the new RAID to a temporary location and repeat the replication as outlined in the first step.
  7. Remount to the new RAID device and add the now unmounted disk to the new array with:
    mdadm --manage /dev/mdx --add /dev/sd4
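For reference, the replication mentioned in steps 1 and 6 boils down to something like the following sketch (/mnt/source and /mnt/target are placeholders for whatever is being copied in the respective step, and the final rsync pass should run once the services writing to the source have been stopped):

(cd /mnt/source && tar cf - .) | (cd /mnt/target && tar xpf -)
rsync -aHAX --delete /mnt/source/ /mnt/target/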

The migration is complete with the final RAID sync that starts automatically after adding the drive. Except for the remounts, the system can stay operational during the complete procedure (thanks to SATA hot-plugging) and as the old disks stay untouched you always have a backup available.
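The progress of that sync can be followed via /proc/mdstat, for example with:

watch -n 10 cat /proc/mdstat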

I had just finished tuning my ownCloud sync setup, when – after years of smooth, unharmed operation despite numerous cement-terminated falls – the better parts of my N9’s gorilla glass finally decided to break apart as the phone left the bike mount mid-ride. It seems the mount failed because of modifications I had made to it, as it kept pressing buttons unintentionally.

[Image: glass]

Hopefully I will be able to get my hands on another (retired) N9 next week so I can use that phone’s display to replace the broken one. Which is nice, as I wouldn’t know which new phone to buy right now; for some reason the Ubuntu Edge I ordered never shipped.

This way I can continue using SyncEvolution with my little script to sync with ownCloud, which uses some MeeGo D-Bus magic to pop up a short message informing me when the sync is complete. As I failed at ash arithmetic, the script feels a little clumsy, but it seems to do what it should.
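For the record, POSIX arithmetic expansion does work in busybox ash; a hypothetical snippet like the following (not taken from the actual script) could e.g. be used to report the sync duration:

# measure how long the sync took (example only)
start=$(date +%s)
# ... run syncevolution here ...
elapsed=$(( $(date +%s) - start ))
echo "sync finished after ${elapsed}s"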