Category Archives: mirror server

Updated RPM Fusion’s mirrorlist servers

RPM Fusion’s mirrorlist servers, which return a list of (probably, hopefully) up-to-date mirrors (e.g., http://mirrors.rpmfusion.org/mirrorlist?repo=free-fedora-rawhide&arch=x86_64), were still running on CentOS 5 and the old MirrorManager code base. The service ran on two systems (DNS load balancing) and was not the most stable setup. Connecting from a country which had recently been added to the GeoIP database drove the httpd process to 100% CPU usage, which led to a DoS after a few requests. I added a cron entry to restart the httpd server every hour, which seemed to help a bit, but it was a rather clumsy workaround.
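
The exact cron entry is not preserved; on CentOS 5 such an hourly restart would roughly look like this (placed in /etc/cron.d, purely as an illustration of the workaround):

0 * * * * root /sbin/service httpd restart    # clumsy hourly restart to work around the 100% CPU hang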

It was clear that the two systems needed to be updated to something newer. Luckily, the new MirrorManager2 code base can handle the data format of the old MirrorManager code base, so it was possible to update the RPM Fusion mirrorlist servers without updating the MirrorManager back-end (yet).

From now on there are four CentOS 7 systems answering the requests for mirrors.rpmfusion.org. As the new RPM Fusion infrastructure is also Ansible-based, I added the Ansible files from Fedora to the RPM Fusion infrastructure repository. I had to remove some parts, but most of the Ansible content could be reused.

When yum or dnf now connect to http://mirrors.rpmfusion.org/mirrorlist?repo=free-fedora-rawhide&arch=x86_64, the answer is created by one of the four CentOS 7 systems running the latest MirrorManager2 code.
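
You can also query the mirrorlist directly to see what the clients get back; the answer is a plain list of mirror URLs for the requested repository and architecture:

curl 'http://mirrors.rpmfusion.org/mirrorlist?repo=free-fedora-rawhide&arch=x86_64'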

RPM Fusion also has the same mirrorlist access statistics as Fedora: http://mirrors.rpmfusion.org/statistics/.

I still need to update the back-end system, which is only a single system instead of six different systems as in the Fedora infrastructure.

New external RAID

Today a new external RAID (connected via Fibre Channel) was attached to our mirror server. To create the filesystem (XFS) I used this command:

mkfs -t xfs -d su=64k -d sw=13 /dev/sdf1

According to https://raid.wiki.kernel.org/index.php/RAID_setup#XFS these are the correct options for 13 data disks (15 disks in RAID6 plus 1 hot spare) and a stripe size of 64k.
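
Once the filesystem is mounted, xfs_info can confirm that the geometry was picked up; sunit and swidth are reported in 4k blocks, so a 64k stripe unit shows up as sunit=16 and sw=13 as swidth=208 (the mount point is a placeholder):

xfs_info /srv/mirror | grep -E 'sunit|swidth'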

bcache Follow-Up

After using bcache for about three weeks it still works without any problems. I am serving around 700GB per day from the bcache device, and looking at the munin results, cache hits average at about 12000 while cache misses average at around 700. So, looking only at the statistics, it still seems to work very effectively for our setup.
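
The same counters can be read directly from sysfs, where bcache keeps hit/miss statistics per time window (this assumes the single /dev/bcache0 device from the setup described in the bcache post below):

grep . /sys/block/bcache0/bcache/stats_day/cache_hits /sys/block/bcache0/bcache/stats_day/cache_misses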

RPM Fusion’s MirrorManager moved

After running RPM Fusion’s MirrorManager instance for many years on Fedora I moved it to a CentOS 6.4 VM. This was necessary because the MirrorManager installation was really ancient and still running from a modified git checkout I did many years ago. I expected that the biggest obstacle in this upgrade and move would be the database upgrade of MirrorManager as its schema has changed over the years. But I was fortunate and MirrorManager included all the necessary scripts to update the database (thanks Matt). Even from the ancient version I was running.

RPM Fusion’s MirrorManager instance uses PostgreSQL to store its data, so I dumped the database on the old system and imported it into the database on the new system. MirrorManager stores information about the files as pickled Python data in the database, and those columns could not be imported due to problems with the character encoding. As this data is provided by the master mirror, I simply emptied those columns, and after the first run MirrorManager recreated that information.
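
In rough terms the move looked like this; the database name and the table/column holding the pickled file information are placeholders, not the actual MirrorManager schema names:

psql mirrormanager -c "UPDATE directory SET files = NULL;"   # on the old system: empty the problematic pickled column
pg_dump mirrormanager > mm.sql                                # dump on the old system
psql mirrormanager < mm.sql                                   # import on the new system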

Moving the MirrorManager instance to a VM means that, if you are running an RPM Fusion mirror, the crawler which checks whether your mirror is up to date will now connect to your mirror from another IP address (129.143.116.115). The data collected by MirrorManager’s crawler is then used to create http://mirrors.rpmfusion.org/mm/publiclist/ and the mirrorlist used by yum (http://mirrors.rpmfusion.org/mirrorlist?repo=free-fedora-updates-released-19&arch=x86_64). There are currently four systems serving as mirrors.rpmfusion.org.

Looking at yesterday’s statistics (http://mirrors.rpmfusion.org/statistics/?date=2013-08-20), it seems there were about 400000 accesses to our mirrorlist servers on that day.

bcache on Fedora 19

After having upgraded our mirror server from Fedora 17 to Fedora 19 two weeks ago, I was curious to try out bcache. Knowing how important filesystem caching is for a file server like ours, we always tried to have as much memory as possible. The current system has 128GB of memory and at least 90% of it is used as filesystem cache. So bcache sounds like a very good idea to provide another layer of caching for all the IO we are doing. By chance I had an external RAID available with 12 x 1TB hard disk drives, which I configured as a RAID6, and 4 x 128GB SSDs, which I configured as a RAID10.

After modprobing the bcache kernel module and installing the necessary bcache-tools, I created the bcache backing device and caching device as described here. I then created the filesystem the same way as for our previous RAIDs. For a RAID6 with 12 hard disk drives and a RAID chunk size of 512KB I used mkfs.ext4 -b 4096 -E stride=128,stripe-width=1280 /dev/bcache0, although I am unsure how useful these options are when using bcache.
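
The setup follows the standard bcache procedure; the md device names below are placeholders for the RAID6 backing array and the SSD RAID10 cache array, and the cache set UUID is the one printed by make-bcache -C:

make-bcache -B /dev/md1                                      # backing device: RAID6 of the 12 HDDs
make-bcache -C /dev/md2                                      # caching device: RAID10 of the 4 SSDs
echo /dev/md1 > /sys/fs/bcache/register                      # register both devices with the kernel
echo /dev/md2 > /sys/fs/bcache/register
echo <cache-set-uuid> > /sys/block/bcache0/bcache/attach     # attach the cache set to the backing device
mkfs.ext4 -b 4096 -E stride=128,stripe-width=1280 /dev/bcache0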

So far it has worked pretty flawlessly. To know what to expect from /dev/bcache0 I benchmarked it using bonnie++ and got 670MB/s for writing and 550MB/s for reading. Again, I am unsure how to interpret these values, as bcache tries to detect sequential IO and bypasses the cache device for sequential IO larger than 4MB.
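
The exact bonnie++ invocation is not preserved; a typical run against the freshly mounted device could look like this, with the mount point and file size as placeholders (the size should exceed the 128GB of RAM so the page cache does not skew the result):

bonnie++ -d /mnt/bcache -s 256g -u nobody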

Anyway, I started copying my fedora and fedora-archive mirrors to the bcache device, and we are now serving those two mirrors (only about 4.1TB) from it.

I have created a munin plugin to monitor the usage of the bcache device, and there are many cache hits (right now more than 25K) and some cache misses (about 1K). So it seems that it does what it is supposed to do, and the number of IOs directly hitting the hard disk drives is much lower than it would otherwise be.

I also increased the cutoff for sequential IO which should bypass the cache from 4MB to 64MB.
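
The cutoff is a standard bcache tunable in sysfs; note that the value is not persistent across reboots, so it has to be set again after booting:

echo 64M > /sys/block/bcache0/bcache/sequential_cutoff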

The user-space tools (bcache-tools) are not yet available in Fedora (as far as I can tell) but I found http://terjeros.fedorapeople.org/bcache-tools/ which I updated to the latest git: http://lisas.de/~adrian/bcache-tools/

Update: as requested, the munin plugin: bcache
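
The linked plugin is the real one; just to illustrate the idea, a minimal munin plugin reading the bcache counters from sysfs could look roughly like this:

#!/bin/sh
# minimal munin plugin sketch: bcache cache hits and misses (five-minute window)
STATS=/sys/block/bcache0/bcache/stats_five_minute

if [ "$1" = "config" ]; then
    echo 'graph_title bcache cache hits/misses'
    echo 'graph_category disk'
    echo 'hits.label cache hits'
    echo 'misses.label cache misses'
    exit 0
fi

echo "hits.value $(cat $STATS/cache_hits)"
echo "misses.value $(cat $STATS/cache_misses)"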

More mirror traffic analysis

I have updated the scripts which use the mirrored project status information in our database to display even more information about what is going on on our mirror server. In addition to the overall traffic of the last 14 days, the last 12 months and all the years since we started to collect this data, the overall traffic is now broken down into transferred HTTP, FTP, RSYNC and other data (blue=other, red=http, green=rsync, yellow=ftp). Most traffic is generated by HTTP, followed by RSYNC, and last (not surprisingly) FTP.

In addition to the breakdown by traffic type, I added an overview of the mirror size (in bytes and number of files) at the bottom of the status page of each mirrored project. Looking at the status page of our apache mirror it is now possible to see the growth of the mirror since 2005: it started with 7GB in 2005 and has now, at the end of 2012, reached almost 50GB.

While adding the new functionality to the PHP scripts I had to change code I had written many years ago, and unfortunately I must confess that it is embarrassingly bad code that already hurts to look at. Adding new functionality to it was even worse, but despite my urge to rewrite it I just added the new functionality, which makes the code even more unreadable now.

New RAID

For our mirror server we now have a third RAID which is also used for the mirror data. The previous external RAIDs (12x1TB as RAID5 + hot spare) were reaching their limits, so an additional 11x1TB RAID6 in the remaining internal slots is a great help to reduce the load and usage of the existing disks. There are now roughly 30TB used for mirror data.

To create the filesystem on the new internal RAID I used http://busybox.net/~aldot/mkfs_stride.html. With 11 disks, RAID level 6, a RAID chunk size of 512 KiB and a filesystem block size of 4 KiB, I get the following command to create my ext4 filesystem:

mkfs.ext4 -b 4096 -E stride=128,stripe-width=1152
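
For reference, the two values follow directly from the numbers above (a RAID6 with 11 disks leaves 9 data disks):

# stride       = chunk size / block size = 512 KiB / 4 KiB = 128
# stripe-width = stride * data disks     = 128 * 9         = 1152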

I am now moving all the data from one of the external RAIDs to the new internal RAID because the older external RAID still uses ext3 and I would like to recreate the filesystem using the same parameter calculation as above. Once the filesystem has been re-created I will distribute our data evenly across the three RAIDs (and maybe also mirror a new project).

Update: After moving the data from one of the external RAIDs to the internal RAID the filesystem has been re-created with:

mkfs.ext4 -b 4096 -E stride=128,stripe-width=1280

Updated to Fedora 17


Mirror Server

Yesterday I upgraded our mirror server to Fedora 17. After having been neglected for some time, the system was still running Fedora 14. Fedora 14 was extremely stable and the uptime was almost one year. Such a long uptime is usually a sign of a lazy admin, because with the frequency of kernel updates the system should have been rebooted much more often, and Fedora 14 has now been EOL for almost half a year. The update to Fedora 17 is the first update I did not want to make using yum because of the changes necessary for UsrMove. I burned the DVD (actually Martin did it) and even looked at the installation guide, which says:

Before upgrading to Fedora 17 you should first bring your current version up to date. However, it is not then necessary to upgrade to intermediate versions. For example, you can upgrade from Fedora 14 to Fedora 17 directly.

Great, I was already afraid I would have to do two upgrades. After dumping the PostgreSQL database (I even thought of that) I rebooted using the DVD and it started to search for previous installations. It found a Fedora 14 installation and said that it cannot upgrade Fedora 14 to Fedora 17. Just as I expected. Silvio was kind enough to burn a Fedora 16 DVD and I started the Fedora 16 upgrade, but this time the installer did not even offer the possibility to upgrade; the only option was a new installation. Using the shell the installer offers on another VT, I found out that we had too many partitions. I am not sure what exactly the installer does, but it was not able to handle the separate partitions for /var and /var/lib which we had been using. It was not able to find the RPM database and aborted the upgrade process. So I increased the size of the LV containing /, copied /var, /var/lib and /usr (because of UsrMove) to the / partition, and finally the upgrade could start. After the upgrade to Fedora 16 finished I inserted the Fedora 17 DVD and this upgrade finished without any problems.

After rebooting into the freshly upgraded Fedora 17 I saw that the upgrade to systemd did not go as smoothly as it should have. All services which had been converted to systemd unit files were stopped and disabled. Only the jabber server was running (which is my package and has not yet been converted to systemd (but it will be for Fedora 18)). So I checked all the configuration files and started and enabled one service after another (it was good systemd training).
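
Re-enabling each converted service boils down to two systemctl calls per service; httpd is just an example here, the actual list of affected services is not preserved:

systemctl enable httpd.service    # make it start at boot again
systemctl start httpd.service     # and start it right away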

After 6 hours most services were running again and the mirror server was happily serving files.

Notebook

Today I also upgraded my notebook from Fedora 16 to Fedora 17. Using the Fedora 17 DVD from above, it upgraded the system without any obvious problems. After rebooting into Fedora 17 I put my notebook back into the docking station (two external monitors connected via DVI) and was shocked that the monitors were no longer detected. The gnome-shell process was using 150% of the CPU and the CPU temperature was around 98°C (usually around 55°C). At first I panicked and wanted Fedora 16 back, but then I found out that all I needed was an updated xorg-x11-drv-intel. After a yum update --enablerepo=updates-testing xorg-x11-drv-intel-2.19.0-5.fc17 everything was as good as on Fedora 16 (and better, of course).

And The Winner Is

Fedora. Nobody expected anything else (of course).

For the first one and a half days since the release of Fedora 9 we have been maxing out our bandwidth again. Today we have already pushed more than 5.5TB and it looks like we will get close to transmitting 7TB in one day. This is much more than during the last Ubuntu release.

With the help of munin I can again provide a nice bandwidth graph:

[munin bandwidth graph]

The small dent just after the start of the release is due to the fact that I had to restart apache because of our cache drive. We are using a fast hard disk as a cache to reduce the load on our main RAID, but it seems that it somehow cannot handle over a thousand simultaneous accesses, which is why I disabled that cache drive (it should have improved the situation, not worsened it).

I can also prove that the Fedora release is the reason for all the traffic:


[Traffic breakdown 2008-05-14]

[Traffic breakdown 2008-05-13]

Ubuntu Release

We always thought our mirror server was connected at 2 GBit/s (two e1000 cards using bonding mode=6), but the current Ubuntu release proved that somewhere along the way to the Internet there must still be something limiting us to 1 GBit/s. The following diagram shows this pretty clearly:

[bandwidth graph during the Ubuntu release]

Now we only need to find out where, and whether it is something we can fix ourselves or whether we need help from our provider.

Maybe we can fix it before the release of Fedora 9 so that we finally can
transmit more than 1 GBit/s.