The latest MirrorManager release (0.6.1), which has been active in Fedora’s infrastructure since 2015-12-17, has a few additional features which provide insights into the usage of the mirror network.

The first is called statistics. It gives a daily overview of what clients are requesting: it analyses the metalink and mirrorlist accesses and draws diagrams. Each time the local yum or dnf metadata has expired, a new mirrorlist/metalink is requested which contains the ‘best’ mirrors for the client currently requesting the data. The current MirrorManager statistics implementation tries to display how often the different repositories are requested, from which country and for which of the available architectures:
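
These mirrorlist/metalink requests are what the statistics are based on. For a quick look at what clients actually receive, a metalink can also be fetched manually; the repo and arch values below are just sample parameters:

curl 'https://mirrors.fedoraproject.org/metalink?repo=fedora-23&arch=x86_64'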

In addition to the statistics about where the clients are coming from and which files they are interested in, the old code to draw a map of the locations of all mirror servers has been re-enabled: maps

Another new visualization tries to track the propagation: the time the existing mirrors need to carry the latest bits. A script connects to all enabled mirrors and checks which repomd.xml file is currently available on each mirror. This is done for the development branch and all active branches. The script displays how many mirrors have the current repomd.xml file, whether a mirror still has the repomd.xml file from the previous push (or the push before that), or whether the file is even older: Propagation.
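
A heavily simplified version of such a propagation check could look like the following, assuming a hypothetical mirror-urls.txt that contains the repository directory URL for each enabled mirror (the actual MirrorManager script is more elaborate):

while read mirror; do
    curl -s --max-time 10 "$mirror/repodata/repomd.xml" | sha256sum
done < mirror-urls.txt | sort | uniq -c | sort -rn

Counting identical checksums shows how many mirrors serve the same repomd.xml and therefore how far the last push has propagated.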

Another relevant change in Fedora’s MirrorManager is that it is no longer possible to enter FTP URLs. This is the first step towards removing FTP-based URLs: FTP-based mirrors are often, depending on the network topology, difficult to connect to, other protocols (HTTP, RSYNC) are better suited, and more and more mirror servers do not provide FTP anyway.

For some reason, support for init.d and thereby userinit.d has been removed from CyanogenMod starting with CM12. Unfortunately it is not easy to re-activate this functionality, even more so if you want the change to survive future CM updates.

So, to get the good old userinit.d behaviour back, I decided to create a trivial app that simply executes run-parts on the /data/local/userinit.d directory when the phone has completed booting. To clone the git repository run:

git clone https://lisas.de/~alex/runuserinit.git

Find more details on the repository contents here.

After installation you will have to start RunUserinit once and hit the button. When asked whether RunUserinit should be allowed to use root privileges, accept that and make the setting permanent. Finally, sshd runs automatically again whenever my phone requires a reboot…
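
Conceptually the app does little more than issuing the following command once the boot has completed (assuming a su binary and run-parts, e.g. from busybox, are available; the actual call inside the app may differ slightly):

su -c 'run-parts /data/local/userinit.d'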

In order to get maximum performance with the newly set up RAID, I added some udev rules (by placing them in /etc/udev/rules.d/83-md-tune.rules) to increase caching. The file has one entry for each of the involved disks (sdX) to adjust the read-ahead:

ACTION=="add", KERNEL=="sdX", ATTR{bdi/read_ahead_kb}="6144"

And one for the mdX device to adjust the read-ahead as well as the size of the stripe cache:

ACTION=="add", KERNEL=="mdX", ATTR{bdi/read_ahead_kb}="24576", ATTR{md/stripe_cache_size}="8192"

With these settings dd yields the following results when copying a large file:

$ sync; echo 3 > /proc/sys/vm/drop_caches
$ dd if=largefile of=/dev/null bs=16M
20733648232 bytes (21 GB) copied, 60.4592 s, 343 MB/s

Which is nice – and rather pointless as the clients connect with 1G links so they see only one third of that performance at best… Note that the caches will cost extra kernel memory, so if you’re low on RAM you might want to opt for lower cache sizes instead.
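
As a rough rule of thumb, the stripe cache consumes stripe_cache_size × page size × number of member disks of RAM, so the 8192 above with 4 KiB pages and 4 disks amounts to roughly 128 MiB.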

Update: I forgot to mention that I also switched from the deadline I/O scheduler (which is the default for current Ubuntu systems when installed as servers) to cfq, as the test results from this article suggest that it is the optimal scheduler for RAID Level 5, no matter whether it is HW or SW controlled.
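
To change the scheduler on the fly, write to sysfs (sdX again stands for each member disk):

echo cfq > /sys/block/sdX/queue/scheduler

To make the change persistent, an additional entry per disk in the same udev rules file should do, e.g.:

ACTION=="add", KERNEL=="sdX", ATTR{queue/scheduler}="cfq"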

With my Linux SW RAID screaming “grow me!” for quite a while now, I finally brought myself to replace the old 2TB disks with new 6TB ones (RAID 5 with 4 disks). While such a disk-upgrade has to be performed regularly, the frequency is so low that it is hard to remember the details when you finally get to do it again. Unfortunately the “official” method (replace & resync disk-by-disk and then grow the md and the filesystem) as suggested in the Linux RAID Wiki has a few drawbacks:

  • you have no backup in case of failures during the 4 RAID rebuilds
  • you continue to operate on the old filesystem; in my case, where the RAID had been full for quite a while, you inherit quite a bit of unnecessary fragmentation – and you can neither switch nor re-tune the filesystem, which could make sense for a significantly bigger RAID

Luckily Adrian reminded me of mdadm’s missing parameter, so I could perform this alternate RAID upgrade which I’ll detail below (should come in handy for my next upgrade).

  1. After running tests on all new disks (a full write and a long S.M.A.R.T. self test) I copied all data from the RAID to one of the new disks. As the server was still in operation I opted for the tar|tar approach with a final rsync to complete the replication (for details see this stackexchange thread; a sketch of the commands follows after the list). Note that if you don’t have a spare port you will have to fail one of your existing drives and use its port for the new disk instead.
  2. Now remount the filesystem so that the server uses the new copy (will require stopping and restarting services that were using the relevant mount point).
  3. Remove all old drives and replace them with the remaining new disks, then create a new RAID, but do not include the drive to which you copied the contents of the RAID – note that you will have to replace the placeholder variables with the real identifiers – as you can see, /dev/sd4, the one holding the currently mounted copy of the data, is left out:
    mdadm --create /dev/mdx --chunk=256 --level=5 --raid-devices=4 /dev/sd1 /dev/sd2 /dev/sd3 missing
    I went with full drives instead of partitions and also with a chunk size smaller than the current default of 512k, as I wanted to use XFS and the current implementation doesn’t support chunk sizes bigger than 256k (there seems to be a dispute about the usefulness of the new 512k default in mdadm).
  4. Update the DEVICE information in mdadm.conf if necessary, remove the old ARRAY entry and append one for the new array with this command:
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf
  5. Now create the filesystem on the RAID device; to match my chunk size I used this command:
    mkfs -t xfs -d su=256k -d sw=3 /dev/mdx
  6. Mount the new RAID to a temporary location and repeat the replication as outlined in the first step.
  7. Remount to the new RAID device and add the now unmounted disk to the new array with:
    mdadm --manage /dev/mdx --add /dev/sd4

The migration is complete with the final RAID sync that starts automatically after adding the drive. Except for the remounts, the system can stay operational during the complete procedure (thanks to SATA hot-plugging) and as the old disks stay untouched you always have a backup available.
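
For reference, the replication mentioned in steps 1 and 6 could look roughly like this (the mount points are placeholders and the rsync options depend on which attributes need to be preserved):

cd /mnt/oldraid && tar cf - . | (cd /mnt/newdisk && tar xpf -)
rsync -aHAX --delete /mnt/oldraid/ /mnt/newdisk/

The progress of the final rebuild can be watched with:

cat /proc/mdstat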

I finally upgraded my PowerStation from Fedora 18 to Fedora 21. The upgrade went pretty smoothly and was not much more than:

$ yum --releasever=19 --exclude=yaboot --exclude=kernel distro-sync
$ yum --releasever=20 --exclude=yaboot --exclude=kernel distro-sync
$ yum --releasever=21 --exclude=yaboot --exclude=kernel distro-sync

As I was doing the upgrade without console access I did not want to change the bootloader from yaboot to grub2 and I also excluded the kernel. Once I have console access I will also upgrade those packages.

The only difficulty was upgrading from Fedora 20 to Fedora 21, because 32bit packages were dropped from ppc and I was not sure whether the system would still boot after removing all 32bit packages (yum remove *ppc). But it just worked, and now I have an up-to-date 64bit ppc Fedora 21 system.
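
To see beforehand which packages such a removal would affect, the installed 32bit packages can be listed with a query along these lines:

rpm -qa --qf '%{NAME}-%{VERSION}-%{RELEASE}.%{ARCH}\n' | grep '\.ppc$'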