After using Podman a lot over the last few weeks while adding checkpoint/restore support to it, I was finally ready to use containers in production on our mirror server. We were still running the ownCloud version that came as RPMs in Fedora 27, and it seems that many people have since moved on from ownCloud to Nextcloud installed from tarballs.
One of the main reasons to finally use containers is Podman’s daemonless approach.
The first challenge in moving from ownCloud 9.1.5 to Nextcloud 14 was the actual upgrade. To make sure it would work, I first made a copy of all the uploaded files and of the database and did a test upgrade yesterday in a CentOS 7 VM. With PHP 7 from Software Collections it was not a real problem. It took some time, but it worked. I used the included upgrade utility to go from ownCloud 9 to Nextcloud 10, then to Nextcloud 11, 12, 13 and finally 14. Lots of upgrades. Once I had verified that everything was still functional, I did it once more, this time with the real data, after disabling access to our ownCloud instance.
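Each hop followed the same pattern. A rough sketch of a single step, assuming the command-line upgrader is used (the paths, version number and web server user are illustrative, not taken from my actual setup):

```shell
# Put the old instance into maintenance mode before touching anything
sudo -u apache php /var/www/owncloud/occ maintenance:mode --on

# Unpack the next release next to the old one and carry over the config
tar -xjf nextcloud-10.0.6.tar.bz2 -C /var/www/
cp /var/www/owncloud/config/config.php /var/www/nextcloud/config/

# Run the upgrader against the existing database, then reopen the instance
sudo -u apache php /var/www/nextcloud/occ upgrade
sudo -u apache php /var/www/nextcloud/occ maintenance:mode --off
```

Repeating this for every major version gets tedious, which is exactly why doing a full dry run in a VM first paid off.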
The next step was to start the container. I decided to use the nextcloud:fpm container as I was planning to use the existing web server to proxy the requests. The one thing which makes using containers on our mirror server a bit difficult is that it is not possible to use any iptables NAT rules. With all the clients connecting to our mirror server, the NAT table used to fill up with so many network connections that connections were dropped. This problem has probably long been fixed, but it bit us once and I try to avoid it. That is why my Nextcloud container uses the host network namespace:
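On a busy system you can watch the connection-tracking table fill up; a hedged sketch (these sysctl names only exist once the conntrack module is loaded, and the limit may differ per distribution):

```shell
# Current vs. maximum number of tracked connections; NAT starts dropping
# connections once the table is full
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max
```

With --net host and no NAT rules, the container's traffic never has to pass through this table at all.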
podman run --name nextcloud-fpm -d --net host \
    -v /home/containers/nextcloud/html:/var/www/html \
    -v /home/containers/nextcloud/apps:/var/www/html/custom_apps \
    -v /home/containers/nextcloud/config:/var/www/html/config \
    -v /home/containers/nextcloud/data:/var/www/html/data \
    nextcloud:fpm
I reused my existing config.php, in which the connection to PostgreSQL on 127.0.0.1 was still configured.
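Because the container shares the host network namespace, the database section of config.php can keep pointing at the host's PostgreSQL. The relevant fragment looks roughly like this (database name and credentials are illustrative, not my real ones):

```php
<?php
$CONFIG = array (
  // ... other settings ...
  'dbtype' => 'pgsql',
  'dbhost' => '127.0.0.1',   // reachable from the container thanks to --net host
  'dbname' => 'nextcloud',   // illustrative name
  'dbuser' => 'nextcloud',   // illustrative credentials
  'dbpassword' => 'secret',
);
```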
Once the container was running I just had to add the proxy rules to the Apache HTTP Server and it should have been ready. Unfortunately it was not as easy as I had hoped. All the documentation I found covers using the Nextcloud FPM container with NGINX; I found nothing about Apache HTTPD. The following lines took up most of the time of the whole Nextcloud upgrade project:
<FilesMatch \.php.*>
    SetHandler proxy:fcgi://127.0.0.1:9000/
    ProxyFCGISetEnvIf "reqenv('REQUEST_URI') =~ m|(/owncloud/)(.*)$|" SCRIPT_FILENAME "/var/www/html/$2"
    ProxyFCGISetEnvIf "reqenv('REQUEST_URI') =~ m|^(.+\.php)(.*)$|" PATH_INFO "$2"
</FilesMatch>
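To convince myself what the two ProxyFCGISetEnvIf rules actually hand to PHP-FPM, it helps to replay them on a sample request. This sketch mimics the two regex rewrites with sed; it is an illustration of the matching logic, not something Apache runs:

```shell
REQUEST_URI='/owncloud/index.php/apps/files'

# SCRIPT_FILENAME: strip the /owncloud/ prefix and anchor the rest under
# the path inside the container
SCRIPT_FILENAME=$(printf '%s' "$REQUEST_URI" | sed -E 's|^/owncloud/(.*)$|/var/www/html/\1|')

# PATH_INFO: everything after the .php component
PATH_INFO=$(printf '%s' "$REQUEST_URI" | sed -E 's|^.+\.php(.*)$|\1|')

echo "$SCRIPT_FILENAME"   # /var/www/html/index.php/apps/files
echo "$PATH_INFO"         # /apps/files
```

The important detail is that SCRIPT_FILENAME must be the path as seen inside the container (/var/www/html/...), not the path on the host.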
I hope these lines are actually correct, but so far all clients connecting to it seem to be happy. To have the Nextcloud container start automatically on system startup, I based my Podman systemd service file on the one from the Intro to Podman article.
[Unit]
Description=Custom Nextcloud Podman Container
After=network.target

[Service]
Type=simple
TimeoutStartSec=5m
ExecStartPre=-/usr/bin/podman rm nextcloud-fpm
ExecStart=/usr/bin/podman run --name nextcloud-fpm --net host \
    -v /home/containers/nextcloud/html:/var/www/html \
    -v /home/containers/nextcloud/apps:/var/www/html/custom_apps \
    -v /home/containers/nextcloud/config:/var/www/html/config \
    -v /home/containers/nextcloud/data:/var/www/html/data \
    nextcloud:fpm
ExecReload=/usr/bin/podman stop nextcloud-fpm
ExecReload=/usr/bin/podman rm nextcloud-fpm
ExecStop=/usr/bin/podman stop nextcloud-fpm
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target
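With the unit file installed (I am assuming a path like /etc/systemd/system/nextcloud-fpm.service here), enabling it follows the usual systemd workflow:

```shell
systemctl daemon-reload
systemctl enable --now nextcloud-fpm.service
systemctl status nextcloud-fpm.service
```

The ExecStartPre line with the leading "-" makes sure a leftover container from a previous run is removed first, and failure of that removal (e.g. no such container) does not keep the service from starting.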