Podman
Podman is an alternative to Docker, providing a similar interface. It supports rootless containers and a shim service for docker-compose.
Installation
Install the podman package. Additionally, if you want to build container images, look at Buildah.
For container networking, install cni-plugins or netavark (the default network backend since v4.0).
If you want to replace Docker, install podman-docker, which mimics the docker binary and provides the corresponding man pages.
Unlike Docker, Podman does not require a daemon, although one is available to provide an API for services such as cockpit via cockpit-podman.
By default, it is only possible to run Podman containers as root. See #Rootless Podman to set up running containers as a non-root user.
Configuration
Configuration files controlling how containers behave are located at /usr/share/containers/. You must copy the necessary files to /etc/containers before editing them. To configure the network bridge interface used by Podman, see /etc/cni/net.d/87-podman.conflist.
Registries
By default, no container image registries are configured in Arch Linux [1]. This means unqualified searches like podman search httpd will not work. To make Podman behave like Docker, configure containers-registries.conf(5):
/etc/containers/registries.conf
unqualified-search-registries = ["docker.io"]
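Registry configuration can also be kept in drop-in files. The following is a minimal sketch: it writes the drop-in to a temporary directory so it runs unprivileged, whereas on a real system the file would be created as root under /etc/containers/registries.conf.d/ (the filename here is illustrative).

```shell
# Sketch: create an unqualified-search registries drop-in.
# A temporary directory stands in for /etc/containers/registries.conf.d/
# so this runs without root.
conf_dir=$(mktemp -d)
cat > "$conf_dir/00-unqualified-search-registries.conf" <<'EOF'
unqualified-search-registries = ["docker.io"]
EOF
cat "$conf_dir/00-unqualified-search-registries.conf"
```

With such a file in place under /etc/containers/registries.conf.d/, unqualified searches like podman search httpd resolve against docker.io.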
Rootless Podman
Rootless Podman relies on unprivileged user namespaces (CONFIG_USER_NS_UNPRIVILEGED), which have some serious security implications; see Security#Sandboxing applications for details.
By default, only root is allowed to run containers (or namespaces in kernel-speak). Running rootless Podman improves security, as an attacker will not have root privileges over your system, and it also allows multiple unprivileged users to run containers on the same machine. See also podman(1) § Rootless mode.
Additional dependencies
The slirp4netns package is installed as a dependency to run Podman in a rootless environment.
If Podman uses the netavark network backend (see containers.conf(5)) then it is required to install aardvark-dns to have name resolution in rootless containers.
Enable native rootless overlays
Previously, it was necessary to use the fuse-overlayfs package for FUSE overlay mounts in a rootless environment. However, modern versions of Podman and the Linux kernel support native rootless overlays, which yield better performance. To migrate from fuse-overlayfs, run:
$ podman system reset
This command will unfortunately delete all pulled images and containers. Also make sure that Podman uses the overlay driver and that the mount_program parameter is not defined in containers-storage.conf(5). It might also be required to follow the instructions from Docker#Enable native overlay diff engine.
To verify that native rootless overlays are enabled, run
$ podman info | grep -i overlay
It should show graphDriverName: overlay and Native Overlay Diff: "true".
Enable kernel.unprivileged_userns_clone
First, check the value of kernel.unprivileged_userns_clone
by running:
$ sysctl kernel.unprivileged_userns_clone
If it is currently set to 0, enable it by setting it to 1 via sysctl or a kernel parameter.
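The setting can be made persistent with a sysctl drop-in. A minimal sketch follows; it writes to a temporary directory so it runs unprivileged, whereas as root the file would go under /etc/sysctl.d/ (the filename is illustrative).

```shell
# Sketch: persist kernel.unprivileged_userns_clone=1 via a sysctl drop-in.
# A temporary directory stands in for /etc/sysctl.d/; as root, place the
# file there and apply it with `sysctl --system` or reboot.
dir=$(mktemp -d)
cat > "$dir/50-rootless-podman.conf" <<'EOF'
kernel.unprivileged_userns_clone=1
EOF
cat "$dir/50-rootless-podman.conf"
```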
Set subuid and subgid
In order for users to run rootless Podman, a subuid(5) and subgid(5) configuration entry must exist for each user that wants to use it. New users created using useradd(8) have these entries by default.
- Users created prior to shadow 4.11.1-3 do not have entries in /etc/subuid and /etc/subgid by default. An entry can be created for them using the usermod(8) command or by manually modifying the files.
- The following command enables the username user and group to run Podman containers (or other types of containers for that matter). It allocates a given range of subordinate UIDs and GIDs to the given user and group:
# usermod --add-subuids 100000-165535 --add-subgids 100000-165535 username
- The above range for the user username may already be taken by another user, as it is the default range allocated to the first user on the system. If in doubt, first consult the /etc/subuid and /etc/subgid files to find the already reserved ranges.
- Many images require 65536 uids / gids for mapping (notably the base busybox and alpine images). It is recommended that you allocate at least that many uids / gids for each user to maximize compatibility with docker.
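To avoid collisions with already reserved ranges, the next free range can be computed from the existing entries. The following is a self-contained sketch: it parses a sample file built in place, standing in for /etc/subuid (run it against /etc/subuid and /etc/subgid on a real system).

```shell
# Sketch: compute the next free 65536-wide subordinate UID range.
# A sample file stands in for /etc/subuid so the snippet is self-contained.
subuid_file=$(mktemp)
printf '%s\n' 'alice:100000:65536' 'bob:165536:65536' > "$subuid_file"
# Entries are user:start:count; the next free start is max(start + count),
# falling back to 100000 when the file has no entries.
next=$(awk -F: 'BEGIN { max = 100000 } $2 + $3 > max { max = $2 + $3 } END { print max }' "$subuid_file")
echo "next free range: $next-$((next + 65535))"
# -> next free range: 231072-296607
```

The printed range can then be passed to usermod --add-subuids and --add-subgids for the new user.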
Workaround for users managed by homed
Homed does not seem to allocate subordinate GID and UID entries for its users. To add them manually, run:
# usermod --add-subuids 524288-589823 --add-subgids 524288-589823 username
Or simply edit the following configuration files as root and add these lines:
/etc/subuid
username:524288:65536
/etc/subgid
username:524288:65536
This allocates the UID and GID range 524288-589823 to the username user. If these ranges are already taken by other users, you need to shift/adjust the ranges accordingly.
You might need to reboot to reflect the changes.
- This is a workaround only; Podman does not seem to support homed officially.
- This is a known issue of systemd-homed.
- Using Docker seems to work (by adding the user to the docker group), but it has its own security implications.
Propagate changes to subuid and subgid
Rootless Podman uses a pause process to keep the unprivileged namespaces alive. This prevents any change to the /etc/subuid and /etc/subgid files from being propagated to the rootless containers while the pause process is running. For these changes to be propagated, it is necessary to run:
$ podman system migrate
After this, the user/group specified in the above files is able to start and run Podman containers.
Add SYS_CHROOT capabilities (Optional)
Starting with the 4.4 release, some previously default capabilities were dropped, including SYS_CHROOT (explained in an official blog post). This affects containers which use chroot (like archlinux:base), causing pacman operations to fail within the container (e.g. installing packages which execute post-install scripts). You can identify such issues if, when building with podman, you get errors like the following during the build:
... could not change the root directory (Operation not permitted) error: command failed to execute correctly ...
To resolve this, edit /etc/containers/containers.conf and add SYS_CHROOT to the list:
/etc/containers/containers.conf
default_capabilities = [
  "CHOWN",
  "DAC_OVERRIDE",
  "FOWNER",
  "FSETID",
  "KILL",
  "NET_BIND_SERVICE",
  "SETFCAP",
  "SETGID",
  "SETPCAP",
  "SETUID",
  "SYS_CHROOT",
]
You can also add the capability temporarily from the command line with --cap-add sys_chroot when you execute podman-build(1).
Storage
The configuration for how and where container images and instances are stored takes place in /etc/containers/storage.conf, or in $XDG_CONFIG_HOME/containers/storage.conf on a per-user basis.
Set the driver according to the filesystem in use for the storage location (see containers-storage.conf(5) § STORAGE_TABLE).
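As a quick check of which driver a given storage.conf selects, here is a small sketch; it inspects a sample file created in place, which stands in for /etc/containers/storage.conf (or the per-user copy) on a real system.

```shell
# Sketch: extract the `driver` value from a storage.conf-style file.
# The sample file stands in for /etc/containers/storage.conf.
conf=$(mktemp)
printf '%s\n' '[storage]' 'driver = "overlay"' > "$conf"
driver=$(awk -F'"' '/^driver/ { print $2 }' "$conf")
echo "storage driver: $driver"
```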
Foreign architectures
Podman is able to run images built for a different CPU architecture than the host using the Wikipedia:binfmt_misc system.
To enable it, install qemu-user-static and qemu-user-static-binfmt.
systemd comes with the systemd-binfmt.service service, which should enable the new rules.
Verify that binfmt rules have been added:
$ ls /proc/sys/fs/binfmt_misc
DOSWin qemu-cris qemu-ppc qemu-sh4eb status qemu-aarch64 qemu-m68k qemu-ppc64 qemu-sparc qemu-alpha qemu-microblaze qemu-riscv64 qemu-sparc32plus qemu-arm qemu-mips qemu-s390x qemu-sparc64 qemu-armeb qemu-mipsel qemu-sh4 register
Podman should now be able to run foreign architecture images. Most commands use the foreign architecture when the --arch option is passed.
Example:
# podman run --arch arm64 'docker.io/alpine:latest' arch
aarch64
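Before running a foreign image, you can check whether the binfmt handler for the target architecture is registered. The following is a sketch, using aarch64 as the example architecture:

```shell
# Sketch: check for a registered qemu binfmt handler.
# systemd-binfmt.service registers these under /proc/sys/fs/binfmt_misc.
arch=aarch64
if [ -e "/proc/sys/fs/binfmt_misc/qemu-$arch" ]; then
  msg="qemu-$arch handler registered"
else
  msg="qemu-$arch handler missing (is qemu-user-static-binfmt installed?)"
fi
echo "$msg"
```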
Docker Compose
Podman has a compose subcommand which is a thin wrapper around a compose provider, either docker-compose or podman-compose. If both are installed, docker-compose takes precedence. You can override this using the PODMAN_COMPOSE_PROVIDER environment variable.
If you want to use docker-compose, you will need to enable the podman.socket user unit for that user.
This is not required when using podman-compose as it will use podman directly.
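docker-compose talks to a Docker-compatible API socket, so it must be pointed at the Podman user socket via DOCKER_HOST. A sketch of computing the socket path, assuming the podman.socket user unit has been enabled (which places the socket under $XDG_RUNTIME_DIR/podman/):

```shell
# Sketch: point docker-compose at the rootless Podman API socket.
# Assumes `systemctl --user enable --now podman.socket` has been run.
sock="${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/podman/podman.sock"
export DOCKER_HOST="unix://$sock"
echo "$DOCKER_HOST"
```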
To get hostname resolution between running containers install aardvark-dns.
If builds fail when using docker-compose, try disabling BuildKit by setting the DOCKER_BUILDKIT=0 environment variable.
NVIDIA GPUs
NVIDIA Container Toolkit provides container runtime for NVIDIA GPUs. Install the nvidia-container-toolkit package. It contains a pacman hook that generates the CDI specification for your GPU and saves it in /etc/cdi/nvidia.yaml
.
To be able to run rootless containers with podman, the no-cgroups setting must be set to true in /etc/nvidia-container-runtime/config.toml:
# nvidia-ctk config --set nvidia-container-cli.no-cgroups --in-place
Test the setup:
$ podman run --rm --gpus all archlinux nvidia-smi -L
Quadlet
Quadlet allows managing Podman containers with systemd.
For rootless Podman, place Quadlet files under one of the following directories:
- $XDG_CONFIG_HOME/containers/systemd/ or ~/.config/containers/systemd/
- /etc/containers/systemd/users/UID for the user matching UID
- /etc/containers/systemd/users/ for all users
For Podman with root permissions, the directory is /etc/containers/systemd/
.
Podman will read Quadlet files with the extensions .container, .volume, .network, .pod, and .kube. A corresponding .service file will be generated using systemd.generator(7). The Quadlet files are read during boot, or after manually running a daemon-reload.
For example, here is a command that runs a Syncthing container from LinuxServer.io:
$ podman run \
    --rm \
    --replace \
    --label io.containers.autoupdate=registry \
    --name syncthing \
    --hostname=syncthing \
    --uidmap 1000:0:1 \
    --uidmap 0:1:1000 \
    --uidmap 1001:1001:64536 \
    --env PUID=1000 \
    --env PGID=1000 \
    --env TZ=Etc/UTC \
    --publish 127.0.0.1:8384:8384/tcp \
    --publish 22000:22000/tcp \
    --volume /path/to/syncthing/config:/config \
    --volume /path/to/data1:/data1 \
    lscr.io/linuxserver/syncthing:latest
To manage it as a systemd service, create the following Quadlet file:
~/.config/containers/systemd/syncthing-lsio.container
[Unit]
Description=Syncthing container
# Specify the dependencies
Wants=network-online.target
After=network-online.target nss-lookup.target
# If another container depends on this one, use syncthing-lsio.service, not syncthing-lsio.container

[Container]
ContainerName=syncthing
Image=lscr.io/linuxserver/syncthing:latest
# Enable auto-update container
AutoUpdate=registry
Volume=/path/to/syncthing/config:/config
Volume=/path/to/data1:/data1
HostName=syncthing
PublishPort=127.0.0.1:8384:8384/tcp
PublishPort=22000:22000/tcp
Environment=PUID=1000
Environment=PGID=1000
Environment=TZ=Etc/UTC
# UID mapping is needed to run a linuxserver.io container with rootless podman.
# This maps UID=1000 inside the container to intermediate UID=0.
# For rootless podman, intermediate UID=0 will be mapped to the UID of the current user.
UIDMap=1000:0:1
UIDMap=0:1:1000
UIDMap=1001:1001:64536

[Service]
Restart=on-failure
# Extend the timeout to allow time to pull the image
TimeoutStartSec=300

# The [Install] section allows enabling the generated service.
[Install]
WantedBy=default.target
Then reload the systemd user daemon, and start/enable the syncthing-lsio.service user unit (with the --user flag).
Valid options for the Container section are listed in podman-systemd.unit(5) § Container units [Container]. PodmanArgs= can be used to pass other Podman arguments that have no corresponding file option.
See podman-systemd.unit(5) § EXAMPLES for more examples, including Volume and Network units.
Images
Unqualified image names are resolved using the registries defined in /etc/containers/registries.conf at unqualified-search-registries, in the defined order. The following images always include the docker.io prefix, to allow for configurations without docker.io in the configuration.
Arch Linux
The following command pulls the Arch Linux x86_64 image from Docker Hub.
# podman pull docker.io/archlinux
See the Docker Hub page for a full list of available tags, including versions with and without build tools.
See also README.md.
Alpine Linux
Alpine Linux is a popular choice for small container images, especially for software compiled as static binaries. The following command pulls the latest Alpine Linux image from Docker Hub:
# podman pull docker.io/alpine
Alpine Linux uses the musl libc implementation instead of the glibc libc implementation used by most Linux distributions. Because Arch Linux uses glibc, there are a number of functional differences between an Arch Linux host and an Alpine Linux container that can impact the performance and correctness of software. A list of these differences is documented in https://wiki.musl-libc.org/functional-differences-from-glibc.html.
Note that dynamically linked software built on Arch Linux (or any other system using glibc) may have bugs and performance problems when run on Alpine Linux (or any other system using a different libc). See [2], [3] and [4] for examples.
CentOS
The following command pulls the latest CentOS image from Docker Hub:
# podman pull docker.io/centos
See the Docker Hub page for a full list of available tags for each CentOS release.
Debian
The following command pulls the latest Debian image from Docker Hub:
# podman pull docker.io/debian
See the Docker Hub page for a full list of available tags, including both standard and slim versions for each Debian release.
Troubleshooting
Failed to add pause process
WARN[0000] Failed to add pause process to systemd sandbox cgroup: Process org.freedesktop.systemd1 exited with status 1
Can be solved (see https://github.com/containers/crun/issues/704) by running:
# echo +cpu +cpuset +io +memory +pids > /sys/fs/cgroup/cgroup.subtree_control
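Before writing to cgroup.subtree_control, you can check which controllers are already delegated. A sketch, assuming a cgroup v2 layout mounted at /sys/fs/cgroup:

```shell
# Sketch: report the controllers delegated at the cgroup v2 root.
f=/sys/fs/cgroup/cgroup.subtree_control
if [ -r "$f" ]; then
  msg="delegated controllers: $(cat "$f")"
else
  msg="cgroup v2 subtree_control not readable"
fi
echo "$msg"
```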
Container DNS will not be enabled
WARN[0000] binary not found, container DNS will not be enabled
When using netavark as the podman network backend, you need to install aardvark-dns.
Containers terminate on shell logout
After logging out from the machine, Podman containers are stopped for some users. To prevent this, enable lingering for the users running containers.
You can also create a user systemd unit as described in podman-auto-update(1) § EXAMPLES.
Failed to move rootless netns
$ docker-compose up
ERRO[0000] failed to move the rootless netns slirp4netns process to the systemd user.slice: Process org.freedesktop.systemd1 exited with status 1
Can be solved by starting/enabling podman.service.
Error building pause image after Podman upgrade 3.x to 4.0
Error: building local pause image: finding pause binary: exec: "catatonit": executable file not found in $PATH
Install the catatonit package to fix the error.
For details on upgrading from 3.x to 4.0, see the official blog article.
Error on commit in rootless mode
Error committing the finished image: error adding layer with blob "sha256:02823fca9b5444c196f1f406aa235213254af9909fca270f462e32793e2260d8": Error processing tar file(exit status 1) permitted operation
Check that the storage driver is overlay in the storage configuration.
Error when creating a container with bridge network in rootless mode
If you are using AppArmor, you might run into problems when creating a container using a bridge network with the dnsname plugin enabled:
$ podman network create foo
/home/user/.config/cni/net.d/foo.conflist
$ podman run --rm -it --network=foo docker.io/library/alpine:latest ip addr
Error: command rootless-cni-infra [alloc 89398a9315256cb1938075c377275d29c2b6ebdd75a96b5c26051a89541eb928 foo festive_hofstadter ] in container 1f4344bbd1087c892a18bacc35f4fdafbb61106c146952426488bc940a751efe failed with status 1, stdout="", stderr="exit status 3\n"
This can be solved by adding the following lines to /etc/apparmor.d/local/usr.sbin.dnsmasq:
owner /run/user/[0-9]*/containers/cni/dnsname/*/dnsmasq.conf r,
owner /run/user/[0-9]*/containers/cni/dnsname/*/addnhosts r,
owner /run/user/[0-9]*/containers/cni/dnsname/*/pidfile rw,
And then reloading the AppArmor profile:
# apparmor_parser -R /etc/apparmor.d/usr.sbin.dnsmasq
# apparmor_parser /etc/apparmor.d/usr.sbin.dnsmasq
No image found
By default, the registry list is not populated, as the files in the package come from upstream. This means that, by default, trying to pull any image without specifying the registry will result in an error similar to the following:
Error: short-name "archlinux" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
A starting configuration could be the following:
/etc/containers/registries.conf.d/00-unqualified-search-registries.conf
unqualified-search-registries = ["docker.io"]
/etc/containers/registries.conf.d/01-registries.conf
[[registry]] location = "docker.io"
This is equivalent to the default docker configuration.
A less convenient alternative, with higher compatibility on systems without configured short-names, is to use the full registry path in the Containerfile or Dockerfile.
Containerfile
FROM docker.io/archlinux/archlinux
Permission denied: OCI permission denied
$ podman exec openvas_openvas_1 bash
Error: crun: writing file `/sys/fs/cgroup/user.slice/user-1000.slice/user@1000.service/user.slice/libpod-b3e8048a9b91e43c214b4d850ac7132155a684d6502e12e22ceb6f73848d117a.scope/container/cgroup.procs`: Permission denied: OCI permission denied
Can be solved (see BBS#253966) by clearing DBUS_SESSION_BUS_ADDRESS:
$ env DBUS_SESSION_BUS_ADDRESS= podman ...
$ env DBUS_SESSION_BUS_ADDRESS= podman-compose ...
Pushing images to Docker Hub: access denied/authentication required
When using podman push to push container images to Docker Hub, the following errors can occur: Requested access to the resource is denied or Authentication required. The following hints can help fix potential issues:
- Tag the local image:
# podman tag <localImage> docker.io/<dockerHubUsername>/<dockerHubRepository>:<Tag>
- Push the tagged image:
# podman push docker.io/<dockerHubUsername>/<dockerHubRepository>:<Tag> docker://docker.io/<dockerHubUsername>/<dockerHubRepository>:<Tag>
- Login to docker.io, the Docker Hub repository and the Docker Hub registry server:
# podman login -u <DockerHubUsername> -p <DockerHubPassword> registry-1.docker.io
# podman login -u <DockerHubUsername> -p <DockerHubPassword> docker.io/<dockerHubUsername>/<dockerHubRepository>
# podman login -u <DockerHubUsername> -p <DockerHubPassword> docker.io
- Logout from all registries before the login, e.g.,
# podman logout --all
- Add <dockerHubUsername> as a collaborator in the Docker Hub Collaborators tab of the repository
Buildah/Podman running rootless expects the bind mount to be shared; check whether it is set to private:
$ findmnt -o PROPAGATION /
PROPAGATION private
In this case see mount(8) § Shared_subtree_operations and temporarily set the mount as shared with:
# mount --make-shared /
To set it permanently, edit /etc/fstab, add the shared option to the desired mount and reboot. It will result in an entry like:
/etc/fstab
# <device> <dir> <type> <options> <dump> <fsck> UUID=0a3407de-014b-458b-b5c1-848e92a327a3 / ext4 defaults,shared 0 1
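For a check that also works where findmnt is unavailable, the propagation of / can be read from /proc/self/mountinfo. A sketch (the awk field positions assume the layout described in proc(5), with no spaces in mount point names):

```shell
# Sketch: report the mount propagation of the root mount.
# Rootless Podman/Buildah expect "shared" here.
if command -v findmnt >/dev/null 2>&1; then
  prop=$(findmnt -no PROPAGATION /)
else
  # mountinfo fields: id parent major:minor root mountpoint(5) options(6) [optional fields]...
  prop=$(awk '$5 == "/" { print ($7 ~ /shared/) ? "shared" : "private"; exit }' /proc/self/mountinfo)
fi
echo "root propagation: ${prop:-unknown}"
```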
Containers with restart policy do not start automatically
Start/enable podman-restart.service.