From ArchWiki

Arch Linux bootstrap-based Docker image build setup

I have recently come up with an Arch Linux Docker base image build setup based on the bootstrap tarball. Compared to the shell script approach, it has the benefit of enabling Arch Linux Docker image builds on non-Arch hosts, and it does not require root.

What do you think of it? I tried getting some attention on the Arch forum but have had no reply yet. Maybe I'm re-inventing the wheel? I was thinking not, as no similar solution is documented here on the Wiki. Please let me know.

In the forum topic I mentioned, I'm asking about 3 things I need to sort out in order to call the whole thing done. I'd appreciate some input.

docker0 Bridge gets no IP / no internet access in containers

I want to rewrite this section: based on my experience with systemd 232 and Docker 1.13, creating a /etc/systemd/network/ file as suggested by that section introduces problems where bridges created by Docker lose their IP addresses once all containers using those bridges are stopped, and don't regain them. Ektich (talk) 09:42, 2 March 2017 (UTC)
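One way to keep systemd-networkd from touching Docker-managed bridges is to mark them unmanaged in a .network file. This is only a sketch (the filename is arbitrary, and whether it resolves the reporter's specific issue with systemd 232 is untested):

```
# /etc/systemd/network/docker-unmanaged.network (filename is arbitrary)
[Match]
Name=docker0

[Link]
# Tell systemd-networkd to leave this interface alone entirely,
# so it neither assigns nor withdraws addresses on it.
Unmanaged=true
```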

Storage driver section regarding overlay2

The wording of Arch Linux using overlay2 suggests that the default storage driver is overlay2. From what I can tell, the default storage driver is devicemapper. Perhaps the section should say that there is work being done to make overlay2 the default storage driver and reference a Github issue or something like that. --Dmp1ce (talk) 01:21, 2 April 2017 (UTC)

Storage driver devicemapper clarification

It is true that devicemapper in loopback mode should never be used outside of development but if a proper LVM volume is being used (no loopback), performance is not degraded in any way. Devicemapper non-loopback is the preferred local storage driver on CentOS and RHEL. The wording should probably be changed to say that devicemapper is acceptable to use but there should be a proper backing for it and not just a loopback LVM volume. --MrOwen (talk) 23:19, 31 October 2017 (UTC)
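For illustration, a direct-lvm (non-loopback) devicemapper setup can be configured in /etc/docker/daemon.json; the block device path below is a placeholder and the tuning values are only an example sketch, not a recommendation:

```json
{
  "storage-driver": "devicemapper",
  "storage-opts": [
    "dm.directlvm_device=/dev/sdX",
    "dm.thinp_percent=95",
    "dm.thinp_metapercent=1",
    "dm.thinp_autoextend_threshold=80",
    "dm.thinp_autoextend_percent=20"
  ]
}
```

With dm.directlvm_device set, the daemon provisions an LVM thin pool on the given device instead of falling back to loopback files.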

Root equivalent through other means than the docker group

I really wouldn't call myself good at docker so I don't feel confident enough to edit this myself. But as far as I've understood, the `root equivalent` warning in the Installation section should at least be added to the Remote API section and probably some others too. Or maybe not everywhere, but some sort of indication that the reader is playing with fire depending on how they configure docker. Powersource (talk) 06:38, 12 July 2017 (UTC)

The wiki instructs to *add* the user to the `docker` group to run docker as a regular user, and goes on to say that adding a regular user to the docker group would make them root equivalent. I think the two statements conflict. If the user is no longer *regular* after being added to the docker group, then it does not make sense to say you can run docker as a regular user.

MrHritik (talk) 08:05, 4 April 2020 (UTC)

There are multiple parts to Docker:

  • The Docker Daemon (docker.service), a server process that runs as root
  • The Docker CLI (docker command), a client process that runs as the user that invokes it
  • The Docker container processes, which may run as any user as defined by the USER directive in the Dockerfile used to build the image

When you run a docker run command, the client tells the server to spawn a process. Since the server runs as root, it can spawn container processes that also run as root.

Adding a user to the docker group gives them permission to run the docker CLI command. This user could then escalate to root easily:

Write a Dockerfile:

FROM debian:9
USER root

Build an image from the Dockerfile, and run it with `docker run -v /:/host/ myimage`. A new Docker container process is spawned which runs as root and has root access to the entire host filesystem. The user can now attack the system.
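The escalation described above can be sketched as a short command sequence (image and directory names are arbitrary, and a running Docker daemon is assumed):

```shell
# Build an image from the two-line Dockerfile above; "myimage" is an arbitrary tag.
docker build -t myimage .

# Bind-mount the host root filesystem into the container.
# Note the -v flag must come before the image name.
docker run --rm -it -v /:/host myimage

# Inside the container, the process runs as root and the entire host
# filesystem is writable under /host, e.g.:
#   chroot /host
```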

This is what User Remapping protects against: even if you run an image with USER root, instead of the Docker container running as root, it runs as a random UID and GID from the remap range. Dharmab (talk) 16:23, 4 April 2020 (UTC)
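A minimal sketch of enabling the user remapping mentioned above, via /etc/docker/daemon.json (the "default" value makes the daemon create and use a dockremap user; the subordinate ID ranges come from /etc/subuid and /etc/subgid):

```json
{
  "userns-remap": "default"
}
```

After restarting the daemon, container root maps to an unprivileged range on the host instead of UID 0.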

PS. The client/server relationship applies even if you don't enable the Remote API. When you run docker locally the server listens through a Unix domain socket. You can also use the Docker API libraries to write code that communicates with the socket directly instead of using the CLI. More accurately, the docker group allows a user to access this domain socket. Dharmab (talk) 16:26, 4 April 2020 (UTC)
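To make the socket relationship concrete, the same API the CLI uses can be driven directly (this assumes a running Docker daemon and a curl build with --unix-socket support):

```shell
# The socket is owned by root:docker, which is why docker group
# membership is all that is needed to drive the daemon.
ls -l /var/run/docker.sock

# Talk to the daemon's REST API directly, bypassing the docker CLI:
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```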

I am afraid you are confusing the `User` directive in systemd with the `USER` directive in docker. In the latter, the directive is NOT used to set the user running the container, but instead is the user used to run the service within the container; see the official Docker documentation for more information. Plus, by default the host FS is not mounted into the container, but it can be, and that is the whole point. Hence, your greater point is valid: having control over `docker.socket` is very much comparable to having root access; see this more in-depth discussion. -- Edh (talk) 22:20, 4 April 2020 (UTC)
Yes, I oversimplified the host and container user namespaces there, which I see now is confusing. The container is run by Docker in the host namespace and the process in the container is run in the container's namespace. Dharmab (talk) 14:57, 5 April 2020 (UTC)
I find your last edit to the user namespace section wrong. As far as I understand, the containers are not namespaced by default at all, so the UID/GID used inside the container corresponds exactly to the same UID/GID outside the container. Also, it does not matter if the same user/group name is assigned to each UID/GID inside and outside the container. The original wording was pretty clear and accurate, yours is not. -- Lahwaacz (talk) 17:59, 5 April 2020 (UTC)
Fixed. -- Lahwaacz (talk) 21:01, 2 October 2020 (UTC)

Drop-in snippets instead of /etc/docker/daemon.json. Why?

Is there a reason for promotion of the systemd drop-in snippets in the Wiki page?

Not sure, but the upstream documentation recommends using /etc/docker/daemon.json to set the storage engine. This approach seems more appropriate to me. --Michaelmcandrew (talk) 12:15, 18 December 2017 (UTC)
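For reference, the daemon.json approach being discussed is a one-key configuration file (overlay2 here is just an example value):

```json
{
  "storage-driver": "overlay2"
}
```

The daemon reads /etc/docker/daemon.json at startup, so a restart of docker.service is needed after editing it.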

Both approaches are equally valid and supported.

--Dharmab (talk)
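For comparison, the drop-in approach mentioned above would look something like this (the file name is arbitrary, and the ExecStart line must match the distribution's unit; this is a sketch, not a recommendation over daemon.json):

```
# /etc/systemd/system/docker.service.d/storage.conf (filename is arbitrary)
[Service]
# The empty ExecStart= clears the unit's original command line
# before replacing it with one that sets the storage driver.
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --storage-driver=overlay2
```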

There don't seem to be any relevant drop-in snippets on the page anymore, closing. -- Lahwaacz (talk) 17:25, 3 October 2020 (UTC)

no connectivity between containers

I experienced that Docker containers started with docker-compose cannot connect to each other (even on published ports). For me, the only thing that helped was disabling iptables filtering for bridges, which is not a good solution as it bypasses Docker's security (rendering the icc flag useless):

# echo 0 > /proc/sys/net/bridge/bridge-nf-call-iptables

Aatdark (talk) 13:52, 22 December 2017 (UTC)
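If one does accept the trade-off described above, the setting can be made persistent across reboots with a sysctl.d fragment (filename is arbitrary; note this disables bridge iptables filtering system-wide, not just for Docker):

```
# /etc/sysctl.d/99-bridge-nf.conf (filename is arbitrary)
# Disable iptables filtering of bridged traffic; weakens Docker's
# inter-container isolation, as noted in the comment above.
net.bridge.bridge-nf-call-iptables = 0
```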

Docker service failed to start "Error initializing network controller: list bridge addresses failed: no available network"

Good to add solution for:

dockerd[13737]: Error starting daemon: Error initializing network controller: list bridge addresses failed: no available network

which can be resolved with the following commands:

# create the docker0 bridge (172.17.0.1/16 is Docker's default bridge address)
sudo brctl addbr docker0
sudo ip addr add 172.17.0.1/16 dev docker0
sudo ip link set dev docker0 up
# confirm the bridge is up
ip addr show docker0
# restart the docker systemd service
sudo systemctl restart docker
# confirm the new outgoing NAT masquerade is set up
sudo iptables -t nat -L -n


Tested and works for me; solution found at [1].

—This unsigned comment is by Drathir (talk) 22:57, 9 January 2018‎. Please sign your posts with ~~~~!

Yes, this worked for me. Please go ahead and add this info to the page. axper (talk) 09:06, 16 August 2018 (UTC)

Add a section about podman

The use of the podman command is strongly encouraged by the Fedora community. Its main advantages are that it can manage images without any running daemon (no service in the background) and as an unprivileged user. I am still not sure whether podman should be added here or described on its own page.

Gabx (talk) 12:03, 23 April 2020 (UTC)

Podman is not Docker and has a separate page. -- Lahwaacz (talk) 21:03, 2 October 2020 (UTC)