In an earlier post about giving up Kubernetes, I wrote that my server is a plain old virtual machine pet. While this works well for a static web server use case, I wanted to explore ways to automate this blog.

This is a story about how I failed. I am writing this because I did my best to make it work, but it turned out to be impossible.

At the same time, it contains some good hints about systemd user instances, quadlets, rootless Podman, and what I managed to get working.

The problem

When it comes to your own stuff, overengineering is always the preferred option, isn’t it? My problem was fairly simple. After finishing a Markdown document in neovim, I have to run hugo, which means installing it or finding a [distrobox](https://distrobox.it/) container with it installed. Then, I need to push the source and built content to the Codeberg repository. After that, I need to log into the server, switch to the right user, and run git pull to finally publish the changes.

Except for git, this is a pretty 90s approach. And as a popular saying goes:

A developer would rather spend 3 days writing a script than do the same 3 minutes of manual work twice.

Which is again pretty 90s, isn’t it? Nowadays, developers would rather spend 3 months deploying a multi-cloud, multi-region k8s cluster with Terraform, Helm, ArgoCD, and dozens of microservices than write a script.

The idea

The solution exists and is called CI/CD. Because Codeberg itself is a public instance of software called Forgejo, the most natural way to do it is with actions powered by forgejo-runner. Others have done that successfully.

Here I want to mention that both forgejo-runner and Codeberg are wonderful tools; they have nothing to do with why I failed.

So the idea is to build roughly this.

  1. On every push, Hugo Build translates the content.
  2. Things will be packaged into an OCI container and pushed to an internal repository.
  3. The blog itself will be an OCI container running behind Caddy.
  4. The whole server will be behind the front-end Caddy, which will be in charge of logging, TLS, and certificate management.

It’s pretty standard, isn’t it? Given the state of the industry in 2025, it doesn’t sound too complicated.
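A rough sketch of the workflow for steps 1 and 2 might look like this. The file path, image, and the whole shape of the push step are assumptions for illustration, not my actual configuration:

```yaml
# .forgejo/workflows/build.yml (hypothetical)
on: [push]

jobs:
  build:
    runs-on: docker
    container:
      image: docker.io/library/alpine:3.20
    steps:
      - uses: actions/checkout@v4
      - name: Build the site
        run: |
          apk add --no-cache hugo
          hugo --minify
      - name: Package and push the OCI image
        run: |
          # step 2 of the plan -- this is the part that,
          # as we will see, does not work in a rootless setup
          echo "build an image from ./public and push it to the internal registry"
```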

The container curse

I must say that I like containers. When used properly, they’re amazing. Take, for example, the openSUSE cnf: a Rust tool built inside the ubuntu-latest image. To test that it works on openSUSE itself, the resulting binary is mounted into a tested openSUSE Tumbleweed container. This allows for a fast integration test using a real system, as well as testing all supported scenarios like zypper, dnf5, or dnf.

What I don’t like is Docker, specifically dockerd, and the fact that it runs as root. I understand why it is like that, though, and after this saga I understand it more than ever: its simplicity is what made it so popular in the first place. Docker, the almighty dockerd, is a great solution for developers. But not for my server. This brings me to the real curse of containers: rootless ones.

It’s 2025, and I must say, you can really go far with them. Most of their limitations are in the form of /etc/subuid allocations, and sometimes one encounters a strange permission error. However, it’s much better than it was in 2020 when I was experimenting with them.

How to set up a rootless container

1. Create a dedicated user

Create a new dedicated system user; systemd-sysusers is probably the best option nowadays.

> cat /etc/sysusers.d/forgejo.conf
#Type Name    ID   GECOS               Home directory  Shell
u     actions -    "forgejo actions"   /home/actions   /sbin/nologin
r     -       1000-1999
m     actions systemd-journal

For rootless Podman containers, it is best to start a user instance via systemd so that the containers are managed by it like regular services. Note that systemd-sysusers does not create the home directory, so you have to do that yourself.

# Create the new user
> systemd-sysusers /etc/sysusers.d/forgejo.conf
# Create the home directory, which systemd-sysusers does not do
> mkdir -p /home/actions && chown actions:actions /home/actions
# Run systemd's user instance for the new user
> systemctl start user@1999.service
# Keep the instance running without an active login session
> loginctl enable-linger actions

Quadlet is a Red Hat technology that integrates Podman with systemd. By writing a .container file that describes all of a container’s properties, Quadlet generates an appropriate systemd unit.

This means that forgejo-runner itself runs as a container. To do its job, it needs a “docker” socket. In this case, it is the Podman socket of the actions user mounted to the proper place; for that, the Podman API socket must be running in the user instance (systemctl --user enable --now podman.socket). This allows the runner to pull and execute other containers without having a chance to obtain root privileges.

This setup mounts /home/actions/runner into the runner container, ensuring the cache is preserved between runs.

[Unit]
Description=Forgejo runner

[Container]
Image=code.forgejo.org/forgejo/runner:11
ContainerName=forgejo-runner
User=0
Group=0
Environment=DOCKER_HOST=unix:///var/run/docker.sock
Exec=forgejo-runner --config /data/config/config.yml daemon
Volume=%h/runner:/data:Z
Volume=/run/user/%U/podman/podman.sock:/var/run/docker.sock:rw
Volume=/etc/localtime:/etc/localtime:ro
AutoUpdate=registry

[Service]
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=default.target

# Check actions user services
> systemctl --machine actions@.host --user status forgejo-runner
● forgejo-runner.service - Forgejo runner
     Loaded: loaded (/home/actions/.config/containers/systemd/forgejo-runner.container; generated)
     Active: active (running) since Thu 2025-10-23 18:37:46 CEST; 1h 35min ago

SELinux intermezzo

In the grand scheme of things, this was really a minor annoyance. I got a “permission denied” on /var/run/docker.sock inside the container. The problem was that the openSUSE Leap default policy doesn’t allow container_t to use connectto.

> ausearch -m AVC -ts recent

type=AVC msg=audit(1761167553.886:1210): avc:  denied  { connectto } for
pid=34382 comm="forgejo-runner" path="/run/user/1999/podman/podman.sock"
scontext=system_u:system_r:container_t:s0:c167,c401
tcontext=unconfined_u:unconfined_r:container_runtime_t:s0-s0:c0.c1023
tclass=unix_stream_socket permissive=0
> ausearch -m AVC -ts recent | audit2allow -M forgejorunner
> semodule -i forgejorunner.pp

The generated forgejorunner.te:

module forgejorunner 1.0;

require {
        type container_t;
        type container_runtime_t;
        class unix_stream_socket connectto;
}

#============= container_t ==============
allow container_t container_runtime_t:unix_stream_socket connectto;

Running my own registry

So far I had been making good progress. The runner was registered against Codeberg and ran a printf just fine. The second problem was deciding where to store the containers with the blog. I did not want to store them in any external registry. They are private artifacts of the server itself, so why expose them?

As I got confident with systemd, quadlets, and containers, I deployed another container file under a dedicated registry user.

[Unit]
Description=Zot containers registry

[Container]
Image=ghcr.io/project-zot/zot-linux-amd64:latest
ContainerName=zotregistry
User=0
Group=0
Exec=serve /data/config/config.json
Volume=%h/registry:/data:Z
Volume=/etc/localtime:/etc/localtime:ro
PublishPort=127.0.0.1:2999:2999
AutoUpdate=registry

[Service]
Restart=on-failure
RestartSec=30s

[Install]
WantedBy=default.target
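The zot container above expects its configuration at /data/config/config.json. A minimal configuration matching the quadlet could look like this (the rootDirectory is an assumption):

```json
{
  "distSpecVersion": "1.1.0",
  "storage": {
    "rootDirectory": "/data/registry"
  },
  "http": {
    "address": "0.0.0.0",
    "port": "2999"
  }
}
```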

Long story short: it is a good idea that does not work. In hindsight, it makes sense. You can’t connect two rootless Podman instances running under different user accounts, not without switching their networking to host mode. Which kind of defeats the purpose. Why go through all the trouble of isolating things and then let everything bind everywhere?

One simple alternative would be to expose the registry under a public name, such as registry.vyskocil.me; the container was already published to the host on localhost. However, I never wanted to deal with yet another public service, and the intent was always to keep the registry internal to the VM.

The easiest solution was to move the registry under the actions user and to define a Podman network.

> cat /home/actions/.config/containers/systemd/ci-net.network
[Network]
Driver=bridge

Add Network=ci-net.network to both container files, so that the started containers have access to the same network. As the network is managed by systemd, its runtime name gets a systemd- prefix.

container:
  network: "systemd-ci-net"

The dead end

I must admit that during this phase I thought all the difficult problems had been solved. Permissions, SELinux, and networking were all configured and set up. However, item 2 of my plan turned out to be a showstopper.

You cannot build a container inside a container. Period! Although there seem to be workarounds like gcr.io/kaniko-project/executor:latest, they always require elevated privileges, such as access to /dev/fuse for overlayfs, and possibly the --privileged flag to gain more capabilities. I was not comfortable doing this. The purpose of using separate user accounts and rootless Podman containers was to make the system more secure and less susceptible to breaches.

Sync like it’s 1996

So, how did I solve the problem? rsync! Since the runner runs on my own host, I can mount the host’s file system and write the results there.

container:
  image: docker.io/library/node:22-bookworm
  options: >-
    -v /home/actions/runner/public:/public:Z

steps:
  - name: Trigger a sync
    run: touch /public/.ready-for-sync

Or rather 2010?

Then how are we going to sync the files? systemd comes to the rescue. The easiest way is to have a unit watching .ready-for-sync and running the rsync script. It runs as root, as it needs to cross the privilege boundary once more. However, systemd is great here because it provides a lot of tunables that limit the permissions to an absolute minimum.
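The watching side can be a small path unit; the unit name and file locations here are assumptions:

```ini
# /etc/systemd/system/blog-sync.path (hypothetical)
[Unit]
Description=Watch for the blog sync trigger

[Path]
# activates blog-sync.service once the CI job touches the trigger file
PathExists=/home/actions/runner/public/.ready-for-sync

[Install]
WantedBy=multi-user.target
```

Note that with PathExists= the triggered service has to remove the trigger file itself, otherwise the path unit will not fire again.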

[Service]
# Hardening
ProtectSystem=strict
ProtectHome=read-only
BindReadOnlyPaths=/home/actions/runner/public
ReadWritePaths=/run/lock/ /srv/www...
InaccessiblePaths=/root
PrivateTmp=yes
PrivateMounts=yes
NoNewPrivileges=yes
ProtectControlGroups=yes
...

Conclusion

Although I didn’t reach my original goal, I can still call this journey a success. At least I can uninstall git from the VM and hugo from the laptop, and enjoy an at least partially modern, fully automated setup.

Codeberg/Forgejo and forgejo-runner are amazing pieces of software. I was surprised that I could deploy and run them successfully, even with a rather unusual setup.

And rootless Podman containers in 2025? They are super cool. The integration with systemd via quadlets is amazing. I feel good knowing they are limited by real Unix privileges and not just by namespace/container magic.

However, as you give up all the privileges, you can’t build a new OCI container.

Last remark

This is dedicated to all the script kiddies bombarding my sshd with ubuntu, admin, wordpress, and other nonsense accounts. Good luck with actions!

> getent passwd actions
actions:x:1999:1999:forgejo actions:/home/actions:/sbin/nologin