[[ja:Docker]]
[[ru:Docker]]
[[zh-hans:Docker]]
{{Related articles start}}
{{Related|systemd-nspawn}}
{{Related|Linux Containers}}
{{Related|Vagrant}}
{{Related|Podman}}
{{Related articles end}}
[[Wikipedia:Docker (software)|Docker]] is a utility to pack, ship and run any application as a lightweight container.
== Installation ==

To pull Docker images and run Docker containers, you need the Docker Engine. The Docker Engine includes a daemon to manage the containers, as well as the {{ic|docker}} CLI frontend. [[Install]] the {{Pkg|docker}} package or, for the development version, the {{Aur|docker-git}} package. Next [[enable/start]] {{ic|docker.service}} or {{ic|docker.socket}}. Note that {{ic|docker.service}} starts the service on boot, whereas {{ic|docker.socket}} starts Docker on first usage, [https://github.com/moby/moby/issues/38373#issuecomment-447393517 which can decrease boot times]. Then verify Docker's status:

 # docker info

Note that starting the Docker service may fail if you have an active VPN connection due to IP conflicts between the VPN and Docker's bridge and overlay networks. If this is the case, try disconnecting the VPN before starting the Docker service. You may reconnect the VPN immediately afterwards. You can also try to deconflict the networks (see solutions [https://stackoverflow.com/questions/45692255/how-make-openvpn-work-with-docker] or [https://github.com/docker/compose/issues/4336#issuecomment-457326123]).

Next, verify that you can run containers. The following command downloads the latest [[#Arch Linux|Arch Linux image]] and uses it to run a Hello World program within a container:

 # docker run -it --rm archlinux bash -c "echo hello world"

If you want to be able to run the {{ic|docker}} CLI command as a non-root user, add your user to the {{ic|docker}} [[user group]], re-login, and restart {{ic|docker.service}}.
 
{{Warning|Anyone added to the {{ic|docker}} group is root equivalent because they can use the {{ic|docker run --privileged}} command to start containers with root privileges. For more information see [https://github.com/docker/docker/issues/9976] and [https://docs.docker.com/engine/security/security/].}}
 
=== Docker Compose ===
 
[https://docs.docker.com/compose/ Docker Compose] is an alternate CLI frontend for the Docker Engine, which specifies properties of containers using a {{ic|docker-compose.yml}} [[Wikipedia:YAML|YAML]] file rather than, for example, a script with {{ic|docker run}} options. This is useful for setting up recurring services that are used often and/or have complex configurations. To use it, [[install]] {{Pkg|docker-compose}}.
 
=== Docker Desktop ===
 
[https://www.docker.com/products/docker-desktop/ Docker Desktop] is a proprietary desktop application that runs the Docker Engine inside a Linux virtual machine. Additional features such as a Kubernetes cluster and a vulnerability scanner are included. This application is useful for software development teams who develop Docker containers using macOS and Windows. The Linux port of the application is relatively new, and complements Docker's CLI frontends [https://www.docker.com/blog/the-magic-of-docker-desktop-is-now-available-on-linux/].
 
An experimental package for Arch is provided directly by Docker; see [https://docs.docker.com/desktop/linux/install/archlinux/ the manual] for more information. Unfortunately, it contains files which conflict with the {{Pkg|docker-compose}} and {{Pkg|docker-buildx}} packages, so you will first need to remove them if installed. Alternatively, you can install the {{AUR|docker-desktop}} package, which does not conflict with existing packages.
 
Also, to run [https://www.docker.com/products/docker-desktop/ Docker Desktop] you will need to meet the [https://docs.docker.com/desktop/install/linux-install/ Linux system requirements], including virtualization support via [[KVM]]. To see a tray icon under GNOME, {{Pkg|gnome-shell-extension-appindicator}} will be needed.
 
Finally, file sharing support requires mapping user and group ids via {{ic|/etc/subuid}} and {{ic|/etc/subgid}}. See the [https://docs.docker.com/desktop/faqs/linuxfaqs/#how-do-i-enable-file-sharing Docker Desktop For Linux File Sharing instructions] for more details.
 
== Usage ==
 
Docker consists of multiple parts:
 
* The Docker daemon (sometimes also called the Docker Engine), which is a process which runs as {{ic|docker.service}}. It serves the Docker API and manages Docker containers.
* The {{ic|docker}} CLI command, which allows users to interact with the Docker API via the command line and control the Docker daemon.
* Docker containers, which are namespaced processes that are started and managed by the Docker daemon as requested through the Docker API.
 
Typically, users use Docker by running {{ic|docker}} CLI commands, which in turn request the Docker daemon to perform actions which in turn result in management of Docker containers. Understanding the relationship between the client ({{ic|docker}}), server ({{ic|docker.service}}) and containers is important for successfully administering Docker.
 
Note that if the Docker daemon stops or restarts, all currently running Docker containers are also stopped or restarted.
 
Also note that it is possible to send requests to the Docker API and control the Docker daemon without the use of the {{ic|docker}} CLI command. See [https://docs.docker.com/engine/api/ the Docker API developer documentation] for more information.
 
See [https://docs.docker.com/get-started/ the Docker Getting Started guide] for more usage documentation.


== Configuration ==

The Docker daemon can be configured either through a configuration file at {{ic|/etc/docker/daemon.json}} or by adding command line flags to the {{ic|docker.service}} systemd unit. According to the [https://docs.docker.com/config/daemon/#configure-the-docker-daemon Docker official documentation], the configuration file approach is preferred. If you wish to use the command line flags instead, use [[Systemd#Drop-in files|systemd drop-in files]] to override the {{ic|ExecStart}} directive in {{ic|docker.service}}. For more information about options in {{ic|daemon.json}} see the [https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-configuration-file dockerd documentation].


=== Storage driver ===

The [https://docs.docker.com/storage/storagedriver/select-storage-driver/ storage driver] controls how images and containers are stored and managed on your Docker host. The default {{ic|overlay2}} driver has good performance for most use cases.
 
Users of [[btrfs]] or [[ZFS]] may use the {{ic|btrfs}} or {{ic|zfs}} drivers, each of which takes advantage of the unique features of these filesystems. See the [https://docs.docker.com/storage/storagedriver/btrfs-driver/ btrfs driver] and [https://docs.docker.com/storage/storagedriver/zfs-driver/ zfs driver] documentation for more information and step-by-step instructions.
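
For example, to explicitly select a storage driver, set it in {{ic|/etc/docker/daemon.json}} (shown here with {{ic|btrfs}} as an illustration; substitute the driver appropriate for your filesystem):

{{hc|/etc/docker/daemon.json|2=
{
  "storage-driver": "btrfs"
}
}}

Restart {{ic|docker.service}} to apply changes.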
 
=== Daemon socket ===
 
{{Out of date|[https://docs.docker.com/engine/deprecated/#unauthenticated-tcp-connections Unauthenticated TCP connections were deprecated in Docker 26 and are scheduled for removal in Docker 27.]}}
 
By default, the Docker daemon serves the Docker API using a [[Wikipedia:Unix domain socket|Unix socket]] at {{ic|/var/run/docker.sock}}. This is an appropriate option for most use cases.
 
It is possible to configure the Daemon to additionally listen on a TCP socket, which can allow remote Docker API access from other computers. [https://docs.docker.com/engine/install/linux-postinstall/#allow-access-to-the-remote-api-through-a-firewall] This can be useful for allowing {{ic|docker}} commands on a host machine to access the Docker daemon on a Linux virtual machine, such as an Arch virtual machine on a Windows or macOS system.
 
{{Warning|The Docker API is unencrypted and unauthenticated by default. Remote TCP access to the Docker daemon is equivalent to unsecured remote root access unless [https://docs.docker.com/engine/security/protect-access/ additional protection via SSH or TLS is also enabled].}}
 
Note that the default {{ic|docker.service}} file sets the {{ic|-H}} flag, and Docker will not start if an option is present in both the flags and the {{ic|/etc/docker/daemon.json}} file. Therefore, the simplest way to change the socket settings is with a drop-in file, such as the following which adds a TCP socket on port 2376:
 
{{hc|/etc/systemd/system/docker.service.d/docker.conf|2=
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376
}}
 
[[Reload]] the systemd daemon and [[restart]] {{ic|docker.service}} to apply changes.
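
After the daemon is listening on TCP, you can point the CLI at the socket explicitly to check that the API is reachable (shown here from the same host; substitute the remote address as appropriate):

 $ docker -H tcp://127.0.0.1:2376 info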
 
=== HTTP Proxies ===
 
There are two parts to configuring Docker to use an HTTP proxy: Configuring the Docker daemon and configuring Docker containers.
 
==== Docker daemon proxy configuration ====
 
See [https://docs.docker.com/config/daemon/systemd/#httphttps-proxy Docker documentation on configuring Docker daemon to use HTTP proxies].
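
In short, the linked instructions amount to setting proxy [[environment variables]] for the daemon via a [[Systemd#Drop-in files|drop-in file]]. A minimal sketch, assuming a proxy at {{ic|proxy.example.com:3128}} (substitute your own):

{{hc|/etc/systemd/system/docker.service.d/http-proxy.conf|2=
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
}}

[[Reload]] the systemd daemon and restart {{ic|docker.service}}, then verify that the variables were picked up with {{ic|systemctl show docker --property Environment}}.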
 
==== Docker container proxy configuration ====
 
See [https://docs.docker.com/network/proxy/#configure-the-docker-client Docker documentation on configuring proxies] for information on how to automatically configure proxies for all containers created using the {{ic|docker}} CLI.
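
As a sketch, the {{ic|docker}} CLI reads proxy settings from {{ic|~/.docker/config.json}} and injects them into the containers it creates. The proxy address below is an example:

{{hc|~/.docker/config.json|2=
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128"
    }
  }
}
}}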
 
=== Configuring DNS ===
 
See [https://docs.docker.com/config/containers/container-networking/#dns-services Docker's DNS documentation] for the documented behavior of DNS within Docker containers and information on customizing DNS configuration. In most cases, the resolvers configured on the host are also configured in the container.
 
Most DNS resolvers hosted on {{ic|127.0.0.0/8}} are [https://github.com/moby/moby/issues/6388#issuecomment-76124221 not supported] due to conflicts between the container and host network namespaces. Such resolvers are [https://github.com/moby/libnetwork/blob/master/resolvconf/resolvconf.go removed from the container's /etc/resolv.conf]. If this would result in an empty {{ic|/etc/resolv.conf}}, Google DNS is used instead.
 
Additionally, a special case is handled if {{ic|127.0.0.53}} is the only configured nameserver. In this case, Docker assumes the resolver is [[systemd-resolved]] and uses the upstream DNS resolvers from {{ic|/run/systemd/resolve/resolv.conf}}.
 
If you are using a service such as [[dnsmasq]] to provide a local resolver, consider adding a virtual interface with a link local IP address in the {{ic|169.254.0.0/16}} block for dnsmasq to bind to instead of {{ic|127.0.0.1}} to avoid the network namespace conflict.
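
For example, to hand all containers a fixed set of upstream resolvers instead of the host's configuration, set the daemon's {{ic|dns}} option (the addresses below are illustrative):

{{hc|/etc/docker/daemon.json|2=
{
  "dns": ["192.168.1.1", "8.8.8.8"]
}
}}

Restart {{ic|docker.service}} to apply changes.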
 
=== Images location ===
 
By default, docker images are located at {{ic|/var/lib/docker}}. They can be moved to other partitions, e.g. if you wish to use a dedicated partition or disk for your images. In this example, we will move the images to {{ic|/mnt/docker}}.


As Arch linux ships new kernels, there is no point using the compatibility option. A good, modern choice is {{ic|overlay2}}.
First, [[stop]] {{ic|docker.service}}, which will also stop all currently running containers and unmount any running images. You may then move the images from {{ic|/var/lib/docker}} to the target destination, e.g. {{ic|cp -r /var/lib/docker /mnt/docker}}.


To see current storage driver, run {{ic|# docker info {{!}} head}}, modern docker installation should already use {{ic|overlay2}} by default.
Configure {{ic|data-root}} in {{ic|/etc/docker/daemon.json}}:


{{hc|/etc/docker/daemon.json|2=
{
  "data-root": "/mnt/docker"
}
}}


Restart {{ic|docker.service}} to apply changes.

=== Insecure registries ===

If you decide to use a self-signed certificate for your private registries, Docker will refuse to use it until you declare that you trust it. For example, to allow images from a registry hosted at {{ic|my.registry.example.com:8443}}, configure {{ic|insecure-registries}} in the {{ic|/etc/docker/daemon.json}} file:
{{hc|/etc/docker/daemon.json|2=
{
  "insecure-registries": [
    "my.registry.example.com:8443"
  ]
}
}}

Restart {{ic|docker.service}} to apply changes.
 
=== IPv6 ===


In order to enable IPv6 support in Docker, you will need to do a few things. See [https://github.com/docker/docker.github.io/blob/c0eb65aabe4de94d56bbc20249179f626df5e8c3/engine/userguide/networking/default_network/ipv6.md] and [https://github.com/moby/moby/issues/36954] for details.
 
Firstly, enable the {{ic|ipv6}} setting in {{ic|/etc/docker/daemon.json}} and set a specific IPv6 subnet. In this case, we will use the private {{ic|fd00::/80}} subnet. Make sure to use a subnet of at least 80 bits, as this allows a container's IPv6 address to end with the container's MAC address, which lets you mitigate NDP neighbor cache invalidation issues.
 
{{hc|/etc/docker/daemon.json|
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80"
}
}}
 
[[Restart]] {{ic|docker.service}} to apply changes.  


Finally, to let containers access the host network, you need to resolve routing issues arising from the usage of a private IPv6 subnet. Add the IPv6 NAT in order to actually get some traffic:

 # ip6tables -t nat -A POSTROUTING -s fd00::/80 ! -o docker0 -j MASQUERADE
Now Docker should be properly IPv6 enabled. To test it, you can run:


 # docker run curlimages/curl curl -v -6 archlinux.org

If you use [[firewalld]], you can add the rule like this:


 # firewall-cmd --zone=public --add-rich-rule='rule family="ipv6" destination not address="fd00::1/80" source address="fd00::/80" masquerade'


If you use [[ufw]], you need to first enable IPv6 forwarding following [[Uncomplicated Firewall#Forward policy]]. Next you need to edit {{ic|/etc/ufw/sysctl.conf}} and uncomment the following lines:


{{hc|head=/etc/ufw/sysctl.conf|output=
net/ipv6/conf/default/forwarding=1
net/ipv6/conf/all/forwarding=1
}}


Then you can add the iptables rule:

 # ip6tables -t nat -A POSTROUTING -s fd00::/80 ! -o docker0 -j MASQUERADE

It should be noted that for Docker containers created with ''docker-compose'', you may need to set {{ic|enable_ipv6: true}} in the {{ic|networks}} part for the corresponding network. Besides, you may need to configure the IPv6 subnet. See [https://docs.docker.com/compose/compose-file/compose-file-v2/#ipv4_address-ipv6_address] for details.
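
For reference, such a ''docker-compose'' network definition might look like the following sketch (the network name and subnet are illustrative; the subnet must not collide with the one configured for the daemon):

{{hc|docker-compose.yml|2=
networks:
  app_net:
    enable_ipv6: true
    ipam:
      config:
        - subnet: "fd00:2::/80"
}}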
=== User namespace isolation ===


By default, processes in Docker containers run within the same user namespace as the main {{ic|dockerd}} daemon, i.e. containers are not isolated by the {{man|7|user_namespaces}} feature. This allows the process within the container to access configured resources on the host according to [[Users and groups#Permissions and ownership]]. This maximizes compatibility, but poses a security risk if a container privilege escalation or breakout vulnerability is discovered that allows the container to access unintended resources on the host. (One such vulnerability was [https://seclists.org/oss-sec/2019/q1/119 published and patched in February 2019].)


By default, processes in Docker containers run within the same user namespace as the main {{ic|dockerd}} daemon, i.e. containers are not isolated by the {{man|7|user_namespaces}} feature. This allows the process within the container to access configured resources on the host according to [[Users and groups#Permissions and ownership]]. This maximizes compatibility, but poses a security risk if a container privilege escalation or breakout vulnerability is discovered that allows the container to access unintended resources on the host. (One such vulnerability was [https://seclists.org/oss-sec/2019/q1/119 published and patched in February 2019].)

The impact of such a vulnerability can be reduced by enabling [https://docs.docker.com/engine/security/userns-remap/ user namespace isolation]. This runs each container in a separate user namespace and maps the UIDs and GIDs inside that user namespace to a different (typically unprivileged) UID/GID range on the host.

{{Note|
* The main {{ic|dockerd}} daemon still runs as {{ic|root}} on the host. Running Docker in [https://docs.docker.com/engine/security/rootless/ rootless mode] is a different feature.
* Processes in the container are started as the user defined in the [https://docs.docker.com/engine/reference/builder/#user USER] directive in the Dockerfile used to build the image of the container.
* All containers are mapped into the same UID/GID range. This preserves the ability to share volumes between containers.
* Enabling user namespace isolation has [https://docs.docker.com/engine/security/userns-remap/#user-namespace-known-limitations several limitations].
* Enabling user namespace isolation effectively masks existing image and container layers, as well as other Docker objects in {{ic|/var/lib/docker/}}, because Docker needs to adjust the ownership of these resources. The upstream documentation recommends enabling this feature on a new Docker installation rather than an existing one.
}}


Configure {{ic|userns-remap}} in {{ic|/etc/docker/daemon.json}}. {{ic|default}} is a special value that will automatically create a user and group named {{ic|dockremap}} for use with remapping.

{{hc|/etc/docker/daemon.json|2=
{
  "userns-remap": "default"
}
}}


Configure {{ic|/etc/subuid}} and {{ic|/etc/subgid}} with a username/group name, starting UID/GID and UID/GID range size to allocate to the remap user and group. This example allocates a range of 65536 UIDs and GIDs starting at 165536 to the {{ic|dockremap}} user and group.

{{hc|/etc/subuid|dockremap:165536:65536}}

{{hc|/etc/subgid|dockremap:165536:65536}}

Restart {{ic|docker.service}} to apply changes.

After applying this change, all containers will run in an isolated user namespace by default. The remapping may be partially disabled on specific containers by passing the {{ic|1=--userns=host}} flag to the {{ic|docker}} command. See [https://docs.docker.com/engine/security/userns-remap/#disable-namespace-remapping-for-a-container] for details.


=== Rootless Docker daemon ===

{{Note|Docker rootless relies on the unprivileged user namespaces ({{ic|CONFIG_USER_NS_UNPRIVILEGED}}). This is enabled by default in {{Pkg|linux}}, {{Pkg|linux-lts}}, and {{Pkg|linux-zen}} kernels. Users of other kernels may need to enable it. This has some security implications, see [[Security#Sandboxing applications]] for details.}}

To run the Docker daemon itself as a regular user, [[install]] the {{aur|docker-rootless-extras}} package.


Configure {{ic|/etc/subuid}} and {{ic|/etc/subgid}} with a username/group name, starting UID/GID and UID/GID range size to allocate to the remap user and group.

{{hc|/etc/subuid|your_username:165536:65536}}


{{hc|/etc/subgid|your_username:165536:65536}}


[[Enable]] the {{ic|docker.socket}} [[user unit]]: this will result in Docker being started using systemd's socket activation.

Finally, set the Docker socket [[environment variable]]:


 $ export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock

=== Enable native overlay diff engine ===


{{Accuracy|This may not be necessary on your system. Though {{ic|1=metacopy=on redirect_dir=on}} is the default on Arch Linux kernels, some report those settings getting disabled during runtime.|section=Native overlay diff}}


By default, Docker cannot use the native overlay diff engine on Arch Linux, which makes building Docker images slow. If you frequently build images, configure the native diff engine as described in [https://mikeshade.com/posts/docker-native-overlay-diff/]:

{{hc|/etc/modprobe.d/disable-overlay-redirect-dir.conf|2=
options overlay metacopy=off redirect_dir=off
}}

Then [[stop]] {{ic|docker.service}} and reload the {{ic|overlay}} module as follows:

 # modprobe -r overlay
 # modprobe overlay

You can then [[start]] {{ic|docker.service}} again.

To verify, run {{ic|docker info}} and check that {{ic|Native Overlay Diff}} is {{ic|true}}.


== Images ==

=== Arch Linux ===
The following command pulls the [https://hub.docker.com/_/archlinux/ archlinux] x86_64 image. This is a stripped down version of Arch core without network, etc.
 
  # docker pull archlinux
 
See also [https://gitlab.archlinux.org/archlinux/archlinux-docker/blob/master/README.md README.md].
 
For a full Arch base, clone the repository from above and build your own image.
 
 $ git clone https://gitlab.archlinux.org/archlinux/archlinux-docker.git
 
Make sure that the {{Pkg|devtools}}, {{Pkg|fakechroot}} and {{Pkg|fakeroot}} packages are installed.


To build the base image:

 $ make image-base

=== Alpine Linux ===


[https://www.alpinelinux.org/ Alpine Linux] is a popular choice for small container images, especially for software compiled as static binaries. The following command pulls the latest Alpine Linux image:


  # docker pull alpine
 
Alpine Linux uses the [https://musl.libc.org/ musl] libc implementation instead of the [https://www.gnu.org/software/libc/ glibc] libc implementation used by most Linux distributions. Because Arch Linux uses glibc, there are a number of functional differences between an Arch Linux host and an Alpine Linux container that can impact the performance and correctness of software. A list of these differences is documented [https://wiki.musl-libc.org/functional-differences-from-glibc.html here].
 
Note that dynamically linked software built on Arch Linux (or any other system using glibc) may have bugs and performance problems when run on Alpine Linux (or any other system using a different libc). See [https://bugs.python.org/issue32307], [https://superuser.com/questions/1219609/why-is-the-alpine-docker-image-over-50-slower-than-the-ubuntu-image] and [https://pythonspeed.com/articles/alpine-docker-python] for examples.


=== Debian ===
 
The following command pulls the latest [https://hub.docker.com/_/debian debian] image:


  # docker pull debian


See the Docker Hub page for a full list of available tags, including both standard and slim versions for each Debian release.
 
=== Distroless ===


Google maintains [https://github.com/GoogleContainerTools/distroless distroless images] which are minimal images without OS components such as package managers or shells, resulting in very small images for packaging software.

See the GitHub README for a list of images and instructions on their use with various programming languages.
== Tips and tricks ==
 
=== Get the IP address of a running container ===
 
To grab the IP address of a running container:
 
{{hc|<nowiki>$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name OR id> </nowiki>|
172.17.0.37}}
 
For each running container, the name and corresponding IP address can be listed for use in {{ic|/etc/hosts}}:
 
{{bc|#!/usr/bin/env sh
<nowiki>for ID in $(docker ps -q); do
    IP=$(docker inspect --format="{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}" "$ID")
    NAME=$(docker ps | grep "$ID" | awk '{print $NF}')
    printf "%s %s\n" "$IP" "$NAME"
done</nowiki>}}
 
=== Run graphical programs inside a container ===
 
This section describes the necessary steps to allow graphical programs (including those that rely on OpenGL or Vulkan) to run on the host's X server.
 
First, the correct drivers, compatible with the '''host's''' graphics hardware, need to be installed inside the container. The installation procedure depends on the type of the container, but for containers based on Arch Linux images, refer to [[OpenGL#Installation]] and [[Vulkan#Installation]] for packages specific to your hardware.
 
Next, the container must be granted access to the host's X server. In a single-user environment, this can easily be done by running [[Xhost]] on the host system, which adds non-network local connections to the access control list:
 
$ xhost +local:
 
Lastly, the following parameters need to be passed to {{ic|docker run}}:
 
* {{ic|-e "DISPLAY{{=}}$DISPLAY"}} sets the environment variable {{ic|DISPLAY}} within the container to the host's display;
* {{ic|1=--mount type=bind,src=/tmp/.X11-unix,dst=/tmp/.X11-unix}} mounts the host's X server sockets inside the container under the same path;
* {{ic|--device{{=}}/dev/dri:/dev/dri}} gives the container access to [[wikipedia:Direct Rendering Infrastructure|Direct Rendering Infrastructure]] devices on the host.
 
To confirm that everything is set up correctly, run {{ic|glxgears}} from the package {{Pkg|mesa-utils}}, or {{ic|vkcube}} from the package {{Pkg|vulkan-tools}} in the container.
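
Putting the above together, a full invocation might look like this (using a hypothetical image name {{ic|my-gui-image}} that has ''glxgears'' installed):

 $ docker run -it --rm -e "DISPLAY=$DISPLAY" --mount type=bind,src=/tmp/.X11-unix,dst=/tmp/.X11-unix --device=/dev/dri:/dev/dri my-gui-image glxgears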
 
=== Start Docker Compose projects on boot ===
 
{{Accuracy|This is not necessary with {{ic|restart: always}} in the {{ic|compose.yml}}. [https://docs.docker.com/compose/compose-file/compose-file-v3/#restart]|section="Start Docker Compose projects on boot" Spurious?}}
 
First, create a template [[Systemd#Writing unit files|unit]] for Docker Compose which is parameterized by the name of the service (see {{man|5|systemd.service|SERVICE TEMPLATES}}):
 
{{hc|/etc/systemd/system/docker-compose@.service|2=
[Unit]
Description=%i service with docker compose
Requires=docker.service
After=docker.service
 
[Service]
WorkingDirectory=/opt/%i
ExecStartPre=-/usr/bin/docker compose pull
ExecStart=/usr/bin/docker compose up --remove-orphans
ExecStop=/usr/bin/docker compose down
ExecReload=/usr/bin/docker compose pull
ExecReload=/usr/bin/docker compose up --remove-orphans
 
[Install]
WantedBy=multi-user.target
}}
 
Then, for each service you would like to run, set up a directory with the Compose file and any other required files (such as {{ic|.env}} files) at {{ic|/opt/''project_name''}}. [https://refspecs.linuxfoundation.org/FHS_3.0/fhs/ch03s13.html]
 
Then, [[enable/start]] {{ic|docker-compose@''project_name''.service}}.
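For example, with a hypothetical project named {{ic|webapp}}, the setup amounts to:

```shell
# "webapp" is a hypothetical project name; adjust to your own.
install -d /opt/webapp
# Place the Compose file (and any .env files) in /opt/webapp, then:
systemctl enable --now docker-compose@webapp.service
```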
 
=== Using buildx for cross-compiling ===
 
The [https://docs.docker.com/build/architecture/#buildx buildx CLI plugin] makes use of the new [https://docs.docker.com/build/buildkit/ BuildKit building toolkit]. [[Install]] the {{Pkg|docker-buildx}} package. The buildx interface supports building multi-platform images, including architectures other than that of the host.
 
QEMU is required to cross-compile images. To set up the static build of QEMU within Docker, see the usage information for the [https://github.com/multiarch/qemu-user-static multiarch/qemu-user-static] image. Otherwise, to set up QEMU on the host system for use with Docker, see [[QEMU#Chrooting into arm/arm64 environment from x86_64]]. In either case, your system will be configured for user-mode emulation of the guest architecture.
 
{{hc|$ docker buildx ls|
NAME/NODE DRIVER/ENDPOINT STATUS  PLATFORMS
default * docker                 
  default default        running linux/amd64, linux/386, linux/arm64, linux/riscv64, linux/s390x, linux/arm/v7, linux/arm/v6
}}
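As a sketch, assuming the hypothetical image name {{ic|example/hello}}, a multi-platform build for both the native and an emulated architecture can then be invoked as:

```shell
# Requires QEMU user-mode emulation (see above) for the non-native platform.
docker buildx build --platform linux/amd64,linux/arm64 -t example/hello .
```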
 
=== Run GPU accelerated Docker containers with NVIDIA GPUs ===
 
Starting from Docker version 19.03, NVIDIA GPUs are [https://docs.docker.com/config/containers/resource_constraints/ natively supported as Docker devices]. [https://github.com/NVIDIA/nvidia-container-toolkit NVIDIA Container Toolkit] is the recommended way of running containers that leverage NVIDIA GPUs.
 
Install the {{Pkg|nvidia-container-toolkit}} package and [[restart]] docker. You can now run containers that make use of NVIDIA GPUs using the {{ic|--gpus}} option or by registering the NVIDIA container runtime.
 
==== With the --gpus option (recommended) ====
 
# docker run --gpus all nvidia/cuda:12.1.1-runtime-ubuntu22.04 nvidia-smi
 
Specify how many GPUs are enabled inside a container:
 
# docker run --gpus 2 nvidia/cuda:12.1.1-runtime-ubuntu22.04 nvidia-smi
 
Specify which GPUs to use:
 
  # docker run --gpus '"device=1,2"' nvidia/cuda:12.1.1-runtime-ubuntu22.04 nvidia-smi
 
or
 
  # docker run --gpus '"device=UUID-ABCDEF,1"' nvidia/cuda:12.1.1-runtime-ubuntu22.04 nvidia-smi
 
For more information see [https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/index.html the documentation] and [https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html install guide].
 
{{Accuracy|1=More information on when the following error happens is needed. It should work, see [https://aur.archlinux.org/cgit/aur.git/tree/nvidia-container-toolkit.install?h=nvidia-container-toolkit]{{Dead link|2023|04|23|status=404}}.|section=GPU accelerated Docker Nvidia}}
 
If, when using the above commands, you receive an error such as {{ic|Failed to initialize NVML: Unknown Error}}, you can try being more specific in specifying the GPU:
 
  # docker run --gpus all --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm --device /dev/nvidia0:/dev/nvidia0 nvidia/cuda:12.1.1-runtime-ubuntu22.04 nvidia-smi
 
Specify a capability (graphics, compute, ...) for the container (though this is rarely used this way):
 
  # docker run --gpus all,capabilities=utility nvidia/cuda:12.1.1-runtime-ubuntu22.04 nvidia-smi
 
==== With NVIDIA container runtime ====
 
Register the NVIDIA runtime by editing {{ic|/etc/docker/daemon.json}}:
 
{{hc|/etc/docker/daemon.json|2=
{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}
}}
 
and then [[restart]] docker.
 
The runtime can also be registered via a command line option to ''dockerd'':
 
# /usr/bin/dockerd --add-runtime=nvidia=/usr/bin/nvidia-container-runtime
 
Afterwards, GPU-accelerated containers can be started with:
 
  # docker run --runtime=nvidia nvidia/cuda:9.0-base nvidia-smi
 
See also [https://github.com/NVIDIA/nvidia-container-toolkit/tree/main/cmd/nvidia-container-runtime README.md].
 
==== Arch Linux image with CUDA ====
 
You can use the following {{ic|Dockerfile}} to build a custom Arch Linux image with CUDA. It uses the [https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md Dockerfile frontend syntax 1.2] to cache pacman packages on the host. The {{ic|1=DOCKER_BUILDKIT=1}} [[environment variable]] must be set on the client before building the Docker image.
 
{{hc|Dockerfile|<nowiki>
# syntax = docker/dockerfile:1.2
 
FROM archlinux
 
# install packages
RUN --mount=type=cache,sharing=locked,target=/var/cache/pacman \
    pacman -Syu --noconfirm --needed base base-devel cuda
 
# configure nvidia container runtime
# https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
</nowiki>}}
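The image can then be built as usual; {{ic|arch-cuda}} is a hypothetical tag name:

```shell
# DOCKER_BUILDKIT=1 enables BuildKit, required by the syntax directive above.
DOCKER_BUILDKIT=1 docker build -t arch-cuda .
```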


== Remove Docker and images ==

 # docker kill <CONTAINER ID>

Delete containers listed by ID:

 # docker rm <CONTAINER ID>

 # docker images

Delete images by ID:

 # docker rmi <IMAGE ID>

Delete all images, containers, volumes, and networks that are not associated with a container (dangling):

 # docker system prune

To additionally remove any stopped containers and all unused images (not just dangling ones), add the {{ic|-a}} flag to the command:

 # docker system prune -a

Delete all Docker data (purge directory):

 # rm -R /var/lib/docker

== Troubleshooting ==

=== docker0 Bridge gets no IP / no internet access in containers when using systemd-networkd ===

Docker attempts to enable IP forwarding globally, but by default [[systemd-networkd]] overrides the global sysctl setting for each defined network profile. Set {{ic|1=IPForward=yes}} in the network profile. See [[Internet sharing#Enable packet forwarding]] for details.

When [[systemd-networkd]] tries to manage the network interfaces created by Docker, e.g. when you configured {{ic|1=Name=*}} or {{ic|1=Type=ether}} in the {{ic|Match}} section, this can lead to connectivity issues. The problem should be solved by matching interfaces more specifically, i.e. avoid using {{ic|1=Name=*}}, {{ic|1=Type=ether}} or any other wildcard that matches an interface managed by Docker. Verify that {{ic|networkctl list}} reports {{ic|unmanaged}} in the SETUP column for all networks created by Docker.

{{Note|
* You may need to [[restart]] {{ic|docker.service}} each time you [[restart]] {{ic|systemd-networkd.service}} or {{ic|iptables.service}}.
* Also be aware that [[nftables]] may block docker connections by default. Use {{ic|nft list ruleset}} to check for blocking rules. {{ic|nft flush chain inet filter forward}} removes all forwarding rules temporarily. Edit {{ic|/etc/nftables.conf}} to make changes permanent. Remember to [[restart]] {{ic|nftables.service}} to reload rules from the configuration file. See [https://github.com/moby/moby/issues/26824] for details about nftables support in Docker.
}}

=== Default number of allowed processes/threads too low ===

If you see errors like the following:

 fork failed: Resource temporarily unavailable

then you might need to adjust the number of processes allowed by systemd. [[Edit]] the {{ic|docker.service}} with the following snippet:

{{hc|# systemctl edit docker.service|2=
[Service]
TasksMax=infinity
}}

For more background, look for {{ic|DefaultLimitNPROC}} at {{man|5|systemd-system.conf|OPTIONS}}, and for {{ic|TasksMax}} at {{man|5|systemd.resource-control|OPTIONS}}.

=== Error initializing graphdriver: devmapper ===

=== Failed to create some/path/to/file: No space left on device ===

If you are getting an error message like this:

* XFS quota mount options ({{ic|uquota}}, {{ic|gquota}}, {{ic|prjquota}}, etc.) fail during re-mount of the file system. To enable quota for the root file system, the mount option must be passed to the initramfs as a [[kernel parameter]] {{ic|1=rootflags=}}. Subsequently, it should not be listed among the mount options in {{ic|/etc/fstab}} for the root ({{ic|/}}) filesystem.

{{Note|XFS quota differs in some respects from the standard Linux [[Disk quota]]; [https://inai.de/linux/adm_quota] may be worth reading.}}
 
=== Docker-machine fails to create virtual machines using the virtualbox driver ===
 
In case ''docker-machine'' fails to create the VMs using the virtualbox driver with the following error:
 
VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory
 
Simply reload the VirtualBox kernel modules via CLI with {{ic|vboxreload}}.
 
=== Starting Docker breaks KVM bridged networking ===
 
The issue is that Docker's scripts add some iptables rules to block forwarding on interfaces other than its own. This is a [https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=865975 known issue].
 
Adjust the solutions below to replace br0 with your own bridge name.
 
The quickest fix (though it turns off all of Docker's self-added iptables adjustments, which you may not want):
 
{{hc|/etc/docker/daemon.json|2=
{
  "iptables": false
}
}}
 
If there is already a network bridge configured for KVM, this may be fixable by telling Docker to use it. See [https://muthii.com/blog/?p=540], where the Docker configuration is modified as follows:
 
{{hc|/etc/docker/daemon.json|2=
{
  "bridge": "br0"
}
}}
 
If the above does not work, or you prefer to solve the issue through iptables directly, or through a manager like UFW, add this:
 
{{bc|iptables -I FORWARD -i br0 -o br0 -j ACCEPT}}
 
Even more detailed solutions are [https://serverfault.com/questions/963759/docker-breaks-libvirt-bridge-network here].
 
=== Image pulls from Docker Hub are rate limited ===
 
Beginning on November 1st 2020, rate limiting is enabled for downloads from Docker Hub from anonymous and free accounts. See the [https://docs.docker.com/docker-hub/download-rate-limit/ rate limit documentation] for more information.
 
Unauthenticated rate limits are tracked by source IP. Authenticated rate limits are tracked by account.
 
If you need to exceed the rate limits, you can either [https://www.docker.com/pricing sign up for a paid plan] or mirror the images you need to a different image registry. You can [https://docs.docker.com/registry/ host your own registry] or use a cloud hosted registry such as [https://aws.amazon.com/ecr/ Amazon ECR], [https://cloud.google.com/container-registry/ Google Container Registry], [https://azure.microsoft.com/en-us/services/container-registry/ Azure Container Registry] or [https://quay.io/ Quay Container Registry].
 
To mirror an image, use the {{ic|pull}}, {{ic|tag}} and {{ic|push}} subcommands of the Docker CLI. For example, to mirror the {{ic|1.19.3}} tag of the [[Nginx]] image to a registry hosted at {{ic|cr.example.com}}:
 
$ docker pull nginx:1.19.3
$ docker tag nginx:1.19.3 cr.example.com/nginx:1.19.3
$ docker push cr.example.com/nginx:1.19.3
 
You can then pull or run the image from the mirror:
 
$ docker pull cr.example.com/nginx:1.19.3
$ docker run cr.example.com/nginx:1.19.3
 
=== iptables (legacy): unknown option "--dport" ===
 
{{Accuracy|[[Nftables#Working with Docker]] advises to not use {{Pkg|iptables-nft}}.}}
 
If you see this error when running a container, install {{Pkg|iptables-nft}} instead of {{Pkg|iptables}} (legacy) and reboot. [https://bbs.archlinux.org/viewtopic.php?id=256709]
 
=== "Your password will be stored unencrypted" when running docker login ===
 
[https://docs.docker.com/engine/reference/commandline/login/#credentials-store By default] Docker will try to use the {{ic|pass}} or {{ic|secretservice}} binaries to store your registry passwords. If they are not found, it will store them in plain text (base64-encoded) in {{ic|$HOME/.docker/config.json}} and print the following message after successfully logging in:
 
 WARNING! Your password will be stored unencrypted in /home/''username''/.docker/config.json.
 
If you are using a password manager that implements the [https://specifications.freedesktop.org/secret-service/latest/ Secret Service Freedesktop DBUS API], like KDE's {{Pkg|kwallet}} or GNOME's {{Pkg|gnome-keyring}}, you can install the {{AUR|docker-credential-secretservice}} package to store your passwords in them.
 
=== "Could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network" ===
 
Sometimes, if you use a lot of Docker projects (e.g. with docker-compose), you can run out of available IPs for Docker containers, triggering the following error:
 
Could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network
 
As found on [https://github.com/docker/docs/issues/8663 this Docker issue], the defaults are:
 
{| class="wikitable"
! Type !! Default Size !! Default Pool
|-
| local || /16 || 172.17.0.0/12
|-
| local* || /20 || 192.168.0.0/16
|}
 
This can be fixed by increasing the Docker IP space: configure {{ic|default-address-pools}} in {{ic|/etc/docker/daemon.json}}, increasing the {{ic|size}} value from 16 to 24 on the first IP range while keeping its base and the second base unaltered to avoid IP collisions on the local network:
 
{{hc|/etc/docker/daemon.json|2=
{
  ...
  "default-address-pools" : [
    {
      "base" : "172.17.0.0/12",
      "size" : 24
    },
    {
      "base" : "192.168.0.0/16",
      "size" : 24
    }
  ]
}
}}
 
Restart {{ic|docker.service}} to apply changes.
 
More details and technical explanations can be found in the following article: [https://straz.to/2021-09-08-docker-address-pools/ The definitive guide to docker's default-address-pools option].
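The effect of the {{ic|size}} change can be verified with shell arithmetic: a pool provides 2^(size − base prefix length) networks, so shrinking each network from /16 to /24 greatly increases how many networks fit in the 172.17.0.0/12 pool:

```shell
# Networks available in 172.17.0.0/12 with the default size of 16:
echo $((1 << (16 - 12)))   # 16
# Networks available with "size": 24 as configured above:
echo $((1 << (24 - 12)))   # 4096
```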
 
=== Slow golang compilation ===
 
Due to the default ulimit configuration, building a docker image and its dependencies with makepkg can be very slow (stuck at the "Entering fakeroot environment..." step).
 
It is related to [https://github.com/moby/moby/issues/45436] and [https://github.com/containerd/containerd/pull/7566].
 
You can add {{ic|1=--ulimit "nofile=1024:524288"}} to your ''docker build'' options, or create/edit the daemon configuration:
 
{{hc|/etc/docker/daemon.json|
{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Soft": 1024,
      "Hard": 524288
    }
  }
}
}}


== See also ==
== See also ==
Line 284: Line 679:
* [https://www.docker.com Official website]
* [https://www.docker.com Official website]
* [https://docs.docker.com/engine/installation/linux/archlinux/ Arch Linux on docs.docker.com]
* [https://docs.docker.com/engine/installation/linux/archlinux/ Arch Linux on docs.docker.com]
* [http://opensource.com/business/14/7/docker-security-selinux Are Docker containers really secure?] — opensource.com
* [https://opensource.com/business/14/7/docker-security-selinux Are Docker containers really secure?] — opensource.com
* [https://awesome-docker.netlify.com/ Awesome Docker]
* [https://www.trendmicro.com/en_us/research/19/l/why-running-a-privileged-container-in-docker-is-a-bad-idea.html Why A Privileged Container in Docker Is a Bad Idea]

Latest revision as of 18:22, 29 March 2024

Docker is a utility to pack, ship and run any application as a lightweight container.

Installation

To pull Docker images and run Docker containers, you need the Docker Engine. The Docker Engine includes a daemon to manage the containers, as well as the docker CLI frontend. Install the docker package or, for the development version, the docker-gitAUR package. Next enable/start docker.service or docker.socket. Note that docker.service starts the service on boot, whereas docker.socket starts docker on first usage which can decrease boot times. Then verify docker's status:

# docker info

Note that starting the docker service may fail if you have an active VPN connection due to IP conflicts between the VPN and Docker's bridge and overlay networks. If this is the case, try disconnecting the VPN before starting the docker service. You may reconnect the VPN immediately afterwards. You can also try to deconflict the networks (see solutions [1] or [2]).

Next, verify that you can run containers. The following command downloads the latest Arch Linux image and uses it to run a Hello World program within a container:

# docker run -it --rm archlinux bash -c "echo hello world"

If you want to be able to run the docker CLI command as a non-root user, add your user to the docker user group, re-login, and restart docker.service.

Warning: Anyone added to the docker group is root equivalent because they can use the docker run --privileged command to start containers with root privileges. For more information see [3] and [4].

Docker Compose

Docker Compose is an alternate CLI frontend for the Docker Engine, which specifies properties of containers using a docker-compose.yml YAML file rather than, for example, a script with docker run options. This is useful for setting up recurring services that are used often and/or have complex configurations. To use it, install docker-compose.

Docker Desktop

Docker Desktop is a proprietary desktop application that runs the Docker Engine inside a Linux virtual machine. Additional features such as a Kubernetes cluster and a vulnerability scanner are included. This application is useful for software development teams who develop Docker containers using macOS and Windows. The Linux port of the application is relatively new, and complements Docker's CLI frontends [5].

An experimental package for Arch is provided directly by Docker; see the manual for more information. Unfortunately, it contains files which conflict with the docker-compose and docker-buildx packages, so you will first need to remove them if installed. Alternatively, you can install the docker-desktopAUR package, which does not conflict with existing packages.

Also, to run Docker Desktop you will need to meet the Linux system requirements, including virtualization support via KVM. To see a tray icon under GNOME, gnome-shell-extension-appindicator will be needed.

Finally, file sharing support requires mapping user and group ids via /etc/subuid and /etc/subgid. See the Docker Desktop For Linux File Sharing instructions for more details.

Usage

Docker consists of multiple parts:

  • The Docker daemon (sometimes also called the Docker Engine), which is a process which runs as docker.service. It serves the Docker API and manages Docker containers.
  • The docker CLI command, which allows users to interact with the Docker API via the command line and control the Docker daemon.
  • Docker containers, which are namespaced processes that are started and managed by the Docker daemon as requested through the Docker API.

Typically, users use Docker by running docker CLI commands, which in turn request the Docker daemon to perform actions which in turn result in management of Docker containers. Understanding the relationship between the client (docker), server (docker.service) and containers is important to successfully administering Docker.

Note that if the Docker daemon stops or restarts, all currently running Docker containers are also stopped or restarted.

Also note that it is possible to send requests to the Docker API and control the Docker daemon without the use of the docker CLI command. See the Docker API developer documentation for more information.

See the Docker Getting Started guide for more usage documentation.

Configuration

The Docker daemon can be configured either through a configuration file at /etc/docker/daemon.json or by adding command line flags to the docker.service systemd unit. According to the Docker official documentation, the configuration file approach is preferred. If you wish to use the command line flags instead, use systemd drop-in files to override the ExecStart directive in docker.service.

For more information about options in daemon.json see dockerd documentation.

Storage driver

The storage driver controls how images and containers are stored and managed on your Docker host. The default overlay2 driver has good performance for most use cases.

Users of btrfs or ZFS may use the btrfs or zfs drivers, each of which take advantage of the unique features of these filesystems. See the btrfs driver and zfs driver documentation for more information and step-by-step instructions.

Daemon socket

By default, the Docker daemon serves the Docker API using a Unix socket at /var/run/docker.sock. This is an appropriate option for most use cases.

It is possible to configure the Daemon to additionally listen on a TCP socket, which can allow remote Docker API access from other computers. [6] This can be useful for allowing docker commands on a host machine to access the Docker daemon on a Linux virtual machine, such as an Arch virtual machine on a Windows or macOS system.

Warning: The Docker API is unencrypted and unauthenticated by default. Remote TCP access to the Docker daemon is equivalent to unsecured remote root access unless additional protection using SSH or TLS is also enabled.

Note that the default docker.service file sets the -H flag by default, and Docker will not start if an option is present in both the flags and /etc/docker/daemon.json file. Therefore, the simplest way to change the socket settings is with a drop-in file, such as the following which adds a TCP socket on port 2376:

/etc/systemd/system/docker.service.d/docker.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376

Reload the systemd daemon and restart docker.service to apply changes.

HTTP Proxies

There are two parts to configuring Docker to use an HTTP proxy: Configuring the Docker daemon and configuring Docker containers.

Docker daemon proxy configuration

See Docker documentation on configuring Docker daemon to use HTTP proxies.

Docker container proxy configuration

See Docker documentation on configuring proxies for information on how to automatically configure proxies for all containers created using the docker CLI.

Configuring DNS

See Docker's DNS documentation for the documented behavior of DNS within Docker containers and information on customizing DNS configuration. In most cases, the resolvers configured on the host are also configured in the container.

Most DNS resolvers hosted on 127.0.0.0/8 are not supported due to conflicts between the container and host network namespaces. Such resolvers are removed from the container's /etc/resolv.conf. If this would result in an empty /etc/resolv.conf, Google DNS is used instead.

Additionally, a special case is handled if 127.0.0.53 is the only configured nameserver. In this case, Docker assumes the resolver is systemd-resolved and uses the upstream DNS resolvers from /run/systemd/resolve/resolv.conf.

If you are using a service such as dnsmasq to provide a local resolver, consider adding a virtual interface with a link local IP address in the 169.254.0.0/16 block for dnsmasq to bind to instead of 127.0.0.1 to avoid the network namespace conflict.

Images location

By default, docker images are located at /var/lib/docker. They can be moved to other partitions, e.g. if you wish to use a dedicated partition or disk for your images. In this example, we will move the images to /mnt/docker.

First, stop docker.service, which will also stop all currently running containers and unmount any running images. You may then move the images from /var/lib/docker to the target destination, e.g. cp -r /var/lib/docker /mnt/docker.

Configure data-root in /etc/docker/daemon.json:

/etc/docker/daemon.json
{
  "data-root": "/mnt/docker"
}

Restart docker.service to apply changes.

Insecure registries

If you decide to use a self signed certificate for your private registries, Docker will refuse to use it until you declare that you trust it. For example, to allow images from a registry hosted at myregistry.example.com:8443, configure insecure-registries in the /etc/docker/daemon.json file:

/etc/docker/daemon.json
{
  "insecure-registries": [
    "my.registry.example.com:8443"
  ]
}

Restart docker.service to apply changes.

IPv6

In order to enable IPv6 support in Docker, you will need to do a few things. See [7] and [8] for details.

Firstly, enable the ipv6 setting in /etc/docker/daemon.json and set a specific IPv6 subnet. In this case, we will use the private fd00::/80 subnet. Make sure to use a subnet with a prefix length of at least 80 bits, as this allows a container's IPv6 address to end with the container's MAC address, which mitigates NDP neighbor cache invalidation issues.

/etc/docker/daemon.json
{
  "ipv6": true,
  "fixed-cidr-v6": "fd00::/80"
}

Restart docker.service to apply changes.

Finally, to let containers access the host network, you need to resolve routing issues arising from the usage of a private IPv6 subnet. Add the IPv6 NAT in order to actually get some traffic:

# ip6tables -t nat -A POSTROUTING -s fd00::/80 ! -o docker0 -j MASQUERADE

Now Docker should be properly IPv6 enabled. To test it, you can run:

# docker run curlimages/curl curl -v -6 archlinux.org

If you use firewalld, you can add the rule like this:

# firewall-cmd --zone=public --add-rich-rule='rule family="ipv6" destination not address="fd00::1/80" source address="fd00::/80" masquerade'

If you use ufw, you need to first enable ipv6 forwarding following Uncomplicated Firewall#Forward policy. Next you need to edit /etc/ufw/sysctl.conf and uncomment the following lines:

/etc/ufw/sysctl.conf
net/ipv6/conf/default/forwarding=1
net/ipv6/conf/all/forwarding=1

Then you can add the iptables rule:

# ip6tables -t nat -A POSTROUTING -s fd00::/80 ! -o docker0 -j MASQUERADE

It should be noted that, for docker containers created with docker-compose, you may need to set enable_ipv6: true in the networks part for the corresponding network. Besides, you may need to configure the IPv6 subnet. See [9] for details.

User namespace isolation

By default, processes in Docker containers run within the same user namespace as the main dockerd daemon, i.e. containers are not isolated by the user_namespaces(7) feature. This allows the process within the container to access configured resources on the host according to Users and groups#Permissions and ownership. This maximizes compatibility, but poses a security risk if a container privilege escalation or breakout vulnerability is discovered that allows the container to access unintended resources on the host. (One such vulnerability was published and patched in February 2019.)

The impact of such a vulnerability can be reduced by enabling user namespace isolation. This runs each container in a separate user namespace and maps the UIDs and GIDs inside that user namespace to a different (typically unprivileged) UID/GID range on the host.

Note:
  • The main dockerd daemon still runs as root on the host. Running Docker in rootless mode is a different feature.
  • Processes in the container are started as the user defined in the USER directive in the Dockerfile used to build the image of the container.
  • All containers are mapped into the same UID/GID range. This preserves the ability to share volumes between containers.
  • Enabling user namespace isolation has several limitations.
  • Enabling user namespace isolation effectively masks existing image and container layers, as well as other Docker objects in /var/lib/docker/, because Docker needs to adjust the ownership of these resources. The upstream documentation recommends to enable this feature on a new Docker installation rather than an existing one.

Configure userns-remap in /etc/docker/daemon.json. default is a special value that will automatically create a user and group named dockremap for use with remapping.

/etc/docker/daemon.json
{
  "userns-remap": "default"
}

Configure /etc/subuid and /etc/subgid with a username/group name, starting UID/GID and UID/GID range size to allocate to the remap user and group. This example allocates a range of 65536 UIDs and GIDs starting at 165536 to the dockremap user and group.

/etc/subuid
dockremap:165536:65536
/etc/subgid
dockremap:165536:65536

Restart docker.service to apply changes.

After applying this change, all containers will run in an isolated user namespace by default. The remapping may be partially disabled on specific containers passing the --userns=host flag to the docker command. See [10] for details.
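To illustrate the remapping with the ranges above: a UID inside the container is simply offset by the start of the allocated host range, so container root does not coincide with host root:

```shell
# With dockremap:165536:65536, container UID 0 (root) runs as host UID 165536,
# and container UID 1000 as host UID 166536.
echo $((165536 + 0))      # 165536
echo $((165536 + 1000))   # 166536
```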

Rootless Docker daemon

Note: Docker rootless relies on the unprivileged user namespaces (CONFIG_USER_NS_UNPRIVILEGED). This is enabled by default in linux, linux-lts, and linux-zen kernels. Users of other kernels may need to enable it. This has some security implications, see Security#Sandboxing applications for details.

To run the Docker daemon itself as a regular user, install the docker-rootless-extrasAUR package.

Configure /etc/subuid and /etc/subgid with a username/group name, starting UID/GID and UID/GID range size to allocate to the remap user and group.

/etc/subuid
your_username:165536:65536
/etc/subgid
your_username:165536:65536

Enable the docker.socket user unit: this will result in docker being started using systemd's socket activation.

Finally, set the Docker socket environment variable:

$ export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock
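To check that the rootless socket actually exists where DOCKER_HOST points (the user docker.socket unit creates docker.sock under $XDG_RUNTIME_DIR), a small sketch:

```shell
# Sketch: derive the socket path from DOCKER_HOST and check that it exists.
export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/docker.sock"
SOCK="${DOCKER_HOST#unix://}"    # strip the unix:// scheme
if [ -S "$SOCK" ]; then
    echo "rootless Docker socket found at $SOCK"
else
    echo "no socket at $SOCK - is the docker.socket user unit enabled?"
fi
```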

Enable native overlay diff engine

The factual accuracy of this article or section is disputed.

Reason: This may not be necessary on your system. Though metacopy=on redirect_dir=on is the default on Arch Linux kernels, some report those settings getting disabled during runtime. (Discuss in Talk:Docker#Native overlay diff)

By default, Docker cannot use the native overlay diff engine on Arch Linux, which makes building Docker images slow. If you frequently build images, configure the native diff engine as described in [11]:

/etc/modprobe.d/disable-overlay-redirect-dir.conf
options overlay metacopy=off redirect_dir=off

Then stop docker.service and reload the overlay module:

# modprobe -r overlay
# modprobe overlay

You can then start docker.service again.

To verify, run docker info and check that Native Overlay Diff is true.

Images

Arch Linux

The following command pulls the archlinux x86_64 image. This is a stripped-down version of Arch without network configuration and other components that are unnecessary in containers.

# docker pull archlinux

See also README.md.

For a full Arch base, clone the repository from above and build your own image.

$ git clone https://gitlab.archlinux.org/archlinux/archlinux-docker.git

Make sure that the devtools, fakechroot and fakeroot packages are installed.

To build the base image:

$ make image-base

Alpine Linux

Alpine Linux is a popular choice for small container images, especially for software compiled as static binaries. The following command pulls the latest Alpine Linux image:

# docker pull alpine

Alpine Linux uses musl instead of glibc, the C standard library implementation used by most Linux distributions. Because Arch Linux uses glibc, there are a number of functional differences between an Arch Linux host and an Alpine Linux container that can impact the performance and correctness of software. A list of these differences is documented here.

Note that dynamically linked software built on Arch Linux (or any other system using glibc) may have bugs and performance problems when run on Alpine Linux (or any other system using a different libc). See [12], [13] and [14] for examples.

Debian

The following command pulls the latest debian image:

# docker pull debian

See the Docker Hub page for a full list of available tags, including both standard and slim versions for each Debian release.

Distroless

Google maintains distroless images which are minimal images without OS components such as package managers or shells, resulting in very small images for packaging software.

See the GitHub README for a list of images and instructions on their use with various programming languages.

Tips and tricks

Get the IP address of a running container

To grab the IP address of a running container:

$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name OR id> 
172.17.0.37

For each running container, the name and corresponding IP address can be listed for use in /etc/hosts:

#!/usr/bin/env sh
for ID in $(docker ps -q); do
    IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$ID")
    NAME=$(docker inspect --format='{{.Name}}' "$ID" | sed 's|^/||')
    printf "%s %s\n" "$IP" "$NAME"
done

Run graphical programs inside a container

This section describes the necessary steps to allow graphical programs (including those that rely on OpenGL or Vulkan) to run on the host's X server.

First, the correct drivers, compatible with the host's graphics hardware, need to be installed inside the container. The installation procedure depends on the type of the container, but for containers based on Arch Linux images, refer to OpenGL#Installation and Vulkan#Installation for packages specific to your hardware.

Next, the container must be granted access to the host's X server. In a single-user environment, this can easily be done by running xhost on the host system, which adds non-network local connections to the access control list:

$ xhost +local:

Lastly, the following parameters need to be passed to docker run:

  • -e "DISPLAY=$DISPLAY" sets the environment variable DISPLAY within the container to the host's display;
  • --mount type=bind,src=/tmp/.X11-unix,dst=/tmp/.X11-unix mounts the host's X server sockets inside the container under the same path;
  • --device=/dev/dri:/dev/dri gives the container access to Direct Rendering Infrastructure devices on the host.

To confirm that everything is set up correctly, run glxgears from the package mesa-utils, or vkcube from the package vulkan-tools in the container.

Start Docker Compose projects on boot

The factual accuracy of this article or section is disputed.

Reason: This is not necessary with restart: always in the compose.yml. [15] (Discuss in Talk:Docker#"Start Docker Compose projects on boot" Spurious?)

First, create a template unit for Docker Compose which is parameterized by the name of the service (see systemd.service(5) § SERVICE TEMPLATES):

/etc/systemd/system/docker-compose@.service
[Unit]
Description=%i service with docker compose
Requires=docker.service
After=docker.service

[Service]
WorkingDirectory=/opt/%i
ExecStartPre=-/usr/bin/docker compose pull
ExecStart=/usr/bin/docker compose up --remove-orphans
ExecStop=/usr/bin/docker compose down
ExecReload=/usr/bin/docker compose pull
ExecReload=/usr/bin/docker compose up --remove-orphans

[Install]
WantedBy=multi-user.target

Then, for each service you would like to run, set up a directory with the Compose file and any other required files (such as .env files) at /opt/project_name. [16]

Then, enable/start docker-compose@project_name.service.
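As a hypothetical example, a project named web would have its Compose file at /opt/web (the nginx image and port mapping below are purely illustrative) and would be started with docker-compose@web.service:

/opt/web/compose.yml
```yaml
services:
  proxy:
    image: nginx:1.25
    ports:
      - "8080:80"
```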

Using buildx for cross-compiling

The buildx CLI plugin makes use of the new BuildKit building toolkit. Install the docker-buildx package. The buildx interface supports building multi-platform images, including architectures other than that of the host.

QEMU is required to cross-compile images. To set up the static build of QEMU within Docker, see the usage information for the multiarch/qemu-user-static image. Otherwise, to set up QEMU on the host system for use with Docker, see QEMU#Chrooting into arm/arm64 environment from x86_64. In either case, your system will be configured for user-mode emulation of the guest architecture.

$ docker buildx ls
NAME/NODE DRIVER/ENDPOINT STATUS  PLATFORMS
default * docker                  
  default default         running linux/amd64, linux/386, linux/arm64, linux/riscv64, linux/s390x, linux/arm/v7, linux/arm/v6

Run GPU accelerated Docker containers with NVIDIA GPUs

Starting from Docker version 19.03, NVIDIA GPUs are natively supported as Docker devices. NVIDIA Container Toolkit is the recommended way of running containers that leverage NVIDIA GPUs.

Install the nvidia-container-toolkit package and restart docker.service. You can now run containers that make use of NVIDIA GPUs using the --gpus option or by registering the NVIDIA container runtime.

With the --gpus option (recommended)

# docker run --gpus all nvidia/cuda:12.1.1-runtime-ubuntu22.04 nvidia-smi

Specify how many GPUs are enabled inside a container:

# docker run --gpus 2 nvidia/cuda:12.1.1-runtime-ubuntu22.04 nvidia-smi

Specify which GPUs to use:

# docker run --gpus '"device=1,2"' nvidia/cuda:12.1.1-runtime-ubuntu22.04 nvidia-smi

or

# docker run --gpus '"device=UUID-ABCDEF,1"' nvidia/cuda:12.1.1-runtime-ubuntu22.04 nvidia-smi

For more information see the documentation and install guide.

The factual accuracy of this article or section is disputed.

Reason: More information on when the following error happens is needed. It should work, see [17][dead link 2023-04-23 ⓘ]. (Discuss in Talk:Docker#GPU accelerated Docker Nvidia)

If, when using the above commands, you receive an error such as Failed to initialize NVML: Unknown Error, try specifying the GPU devices explicitly:

# docker run --gpus all --device /dev/nvidiactl:/dev/nvidiactl --device /dev/nvidia-uvm:/dev/nvidia-uvm --device /dev/nvidia0:/dev/nvidia0 nvidia/cuda:12.1.1-runtime-ubuntu22.04 nvidia-smi

Specify a capability (graphics, compute, ...) for the container (though this form is rarely used):

# docker run --gpus all,capabilities=utility nvidia/cuda:12.1.1-runtime-ubuntu22.04 nvidia-smi

With NVIDIA container runtime

Register the NVIDIA runtime by editing /etc/docker/daemon.json

/etc/docker/daemon.json
{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}

and then restart docker.

The runtime can also be registered via a command line option to dockerd:

# /usr/bin/dockerd --add-runtime=nvidia=/usr/bin/nvidia-container-runtime

Afterwards GPU accelerated containers can be started with

# docker run --runtime=nvidia nvidia/cuda:9.0-base nvidia-smi

See also README.md.

Arch Linux image with CUDA

You can use the following Dockerfile to build a custom Arch Linux image with CUDA. It uses the Dockerfile frontend syntax 1.2 to cache pacman packages on the host. The DOCKER_BUILDKIT=1 environment variable must be set on the client before building the Docker image.

Dockerfile
# syntax = docker/dockerfile:1.2

FROM archlinux

# install packages
RUN --mount=type=cache,sharing=locked,target=/var/cache/pacman \
    pacman -Syu --noconfirm --needed base base-devel cuda

# configure nvidia container runtime
# https://github.com/NVIDIA/nvidia-container-runtime#environment-variables-oci-spec
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility

Remove Docker and images

To remove Docker entirely, follow the steps below:

Note: Do not just copy and paste these commands; make sure you understand what each one does.

Check for running containers:

# docker ps

List all containers on the host, including stopped ones, for deletion:

# docker ps -a

Stop a running container:

# docker stop <CONTAINER ID>

Kill a container that does not stop:

# docker kill <CONTAINER ID>

Delete containers listed by ID:

# docker rm <CONTAINER ID>

List all Docker images:

# docker images

Delete images by ID:

# docker rmi <IMAGE ID>

Delete all images, containers, volumes, and networks that are not associated with a container (dangling):

# docker system prune

To additionally remove any stopped containers and all unused images (not just dangling ones), add the -a flag to the command:

# docker system prune -a

After stopping docker.service, delete all Docker data (purge directory):

# rm -r /var/lib/docker

Troubleshooting

docker0 Bridge gets no IP / no internet access in containers when using systemd-networkd

Docker attempts to enable IP forwarding globally, but by default systemd-networkd overrides the global sysctl setting for each defined network profile. Set IPForward=yes in the network profile. See Internet sharing#Enable packet forwarding for details.
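A hypothetical wired profile with forwarding enabled might look like this (the interface name enp1s0 is an assumption; adjust it to your hardware):

/etc/systemd/network/20-wired.network
```ini
[Match]
Name=enp1s0

[Network]
DHCP=yes
IPForward=yes
```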

When systemd-networkd tries to manage the network interfaces created by Docker, e.g. when you configured Name=* or Type=ether in the Match section, this can lead to connectivity issues. The problem should be solved by matching interfaces more specifically, i.e. avoid using Name=* or Type=ether or other wildcard that matches an interface managed by Docker. Verify that networkctl list reports unmanaged in the SETUP column for all networks created by Docker.
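If narrowing your own match rules is not practical, the Docker-created interfaces can instead be explicitly marked as unmanaged (the name globs below are illustrative; br-* covers bridges created for Compose networks):

/etc/systemd/network/90-docker-unmanaged.network
```ini
[Match]
Name=docker0 br-* veth*

[Link]
Unmanaged=yes
```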

Note:
  • You may need to restart docker.service each time you restart systemd-networkd.service or iptables.service.
  • Also be aware that nftables may block docker connections by default. Use nft list ruleset to check for blocking rules. nft flush chain inet filter forward removes all forwarding rules temporarily. Edit /etc/nftables.conf to make changes permanent. Remember to restart nftables.service to reload rules from the configuration file. See [18] for details about nftables support in Docker.

Default number of allowed processes/threads too low

If you run into error messages like

# e.g. Java
java.lang.OutOfMemoryError: unable to create new native thread
# e.g. C, bash, ...
fork failed: Resource temporarily unavailable

then you might need to adjust the limit on tasks allowed by systemd. Add a drop-in snippet for docker.service with the following content:

[Service]
TasksMax=infinity

For more background, see DefaultLimitNPROC in systemd-system.conf(5) § OPTIONS and TasksMax in systemd.resource-control(5) § OPTIONS.

Error initializing graphdriver: devmapper

If systemctl fails to start docker and provides an error:

Error starting daemon: error initializing graphdriver: devmapper: Device docker-8:2-915035-pool is not a thin pool

Then, try the following steps to resolve the error. Stop the service, back up /var/lib/docker/ (if desired), remove the contents of /var/lib/docker/, and try to start the service. See the open GitHub issue for details.

Failed to create some/path/to/file: No space left on device

If you are getting an error message like this:

ERROR: Failed to create some/path/to/file: No space left on device

when building or running a Docker image, even though you do have enough disk space available, make sure:

  • Tmpfs is disabled or has enough memory allocation. Docker might be trying to write files into /tmp but fails due to restrictions in memory usage and not disk space.
  • If you are using XFS, you might want to remove the noquota mount option from the relevant entries in /etc/fstab (usually where /tmp and/or /var/lib/docker reside). Refer to Disk quota for more information, especially if you plan on using and resizing overlay2 Docker storage driver.
  • XFS quota mount options (uquota, gquota, prjquota, etc.) fail during re-mount of the file system. To enable quota for root file system, the mount option must be passed to initramfs as a kernel parameter rootflags=. Subsequently, it should not be listed among mount options in /etc/fstab for the root (/) filesystem.
Note: There are some differences of XFS Quota compared to standard Linux Disk quota, [19] may be worth reading.

Docker-machine fails to create virtual machines using the virtualbox driver

If docker-machine fails to create the VMs using the virtualbox driver, with the following error:

VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory

Reload the VirtualBox kernel modules with vboxreload.

Starting Docker breaks KVM bridged networking

The issue is that Docker's scripts add some iptables rules to block forwarding on interfaces other than its own. This is a known issue.

Adjust the solutions below to replace br0 with your own bridge name.

The quickest fix is to disable Docker's iptables management entirely (but this turns off all of Docker's self-added iptables adjustments, which you may not want):

/etc/docker/daemon.json
{
  "iptables": false
}

If there is already a network bridge configured for KVM, this may be fixed by telling Docker to use it. See [20], where the Docker configuration is modified as follows:

/etc/docker/daemon.json
{
  "bridge": "br0"
}

If the above does not work, or you prefer to solve the issue through iptables directly, or through a manager like UFW, add this:

iptables -I FORWARD -i br0 -o br0 -j ACCEPT

Even more detailed solutions are here.

Image pulls from Docker Hub are rate limited

Beginning on November 1st 2020, rate limiting is enabled for downloads from Docker Hub by anonymous and free accounts. See the rate limit documentation for more information.

Unauthenticated rate limits are tracked by source IP. Authenticated rate limits are tracked by account.

If you need to exceed the rate limits, you can either sign up for a paid plan or mirror the images you need to a different image registry. You can host your own registry or use a cloud hosted registry such as Amazon ECR, Google Container Registry, Azure Container Registry or Quay Container Registry.

To mirror an image, use the pull, tag and push subcommands of the Docker CLI. For example, to mirror the 1.19.3 tag of the Nginx image to a registry hosted at cr.example.com:

$ docker pull nginx:1.19.3
$ docker tag nginx:1.19.3 cr.example.com/nginx:1.19.3
$ docker push cr.example.com/nginx:1.19.3

You can then pull or run the image from the mirror:

$ docker pull cr.example.com/nginx:1.19.3
$ docker run cr.example.com/nginx:1.19.3

iptables (legacy): unknown option "--dport"

The factual accuracy of this article or section is disputed.

Reason: Nftables#Working with Docker advises to not use iptables-nft. (Discuss in Talk:Docker)

If you see this error when running a container, install iptables-nft instead of iptables (legacy) and reboot[21].

"Your password will be stored unencrypted" when running docker login

By default Docker will try to use the pass or secretservice binaries to store your registry passwords. If they are not found, it will store them in plain text (base64-encoded) in $HOME/.docker/config.json and print the following message after successfully logging in:

$ WARNING! Your password will be stored unencrypted in /home/username/.docker/config.json.

If you are using a password manager that implements the Secret Service Freedesktop DBUS API, like KDE's kwallet or GNOME's gnome-keyring, you can install the docker-credential-secretserviceAUR package to store your passwords in them.

"Could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network"

If you use many Docker projects (e.g. with docker-compose), you may run out of available IPs for Docker containers, triggering the error:

Could not find an available, non-overlapping IPv4 address pool among the defaults to assign to the network

As found on this Docker issue, the defaults are:

Type     Default Size    Default Pool
local    /16             172.17.0.0/12
local*   /20             192.168.0.0/16

This can be fixed by increasing the Docker IP space: configure default-address-pools in /etc/docker/daemon.json, increasing the size value from 16 to 24 on the first IP range while keeping the second one unaltered to avoid IP collisions with the local network:

/etc/docker/daemon.json
{
  ...
  "default-address-pools" : [
    {
      "base" : "172.17.0.0/12",
      "size" : 24
    },
    {
      "base" : "192.168.0.0/16",
      "size" : 24
    }
  ]
}

Restart docker.service to apply changes.
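The trade-off is network count versus network size: each pool entry yields 2^(size - base_prefix) subnets of 2^(32 - size) addresses each. A quick sketch of the arithmetic for the first pool above:

```shell
# Sketch: subnets and addresses per subnet for a default-address-pool
# entry with a /12 base and size 24.
BASE_PREFIX=12
SIZE=24
SUBNETS=$((1 << (SIZE - BASE_PREFIX)))   # number of /24 networks in the /12 base
ADDRS=$((1 << (32 - SIZE)))              # addresses per /24 network
echo "$SUBNETS subnets of $ADDRS addresses"
# prints: 4096 subnets of 256 addresses
```

Compared to the default size of 16, this trades fewer addresses per network for many more networks.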

More details and technical explanations can be found in the following article: The definitive guide to docker's default-address-pools option.

Slow golang compilation

Due to Docker's default ulimit configuration, building a Docker image and its dependencies with makepkg can be very slow (stuck at the "Entering fakeroot environment..." step).

It is related to [22] and [23].

You can add --ulimit "nofile=1024:524288" to your docker build command, or create/edit:

/etc/docker/daemon.json
{
  "default-ulimits": {
    "nofile": {
      "Name": "nofile",
      "Soft": 1024,
      "Hard": 524288
    }
  }
}
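The Soft/Hard values above have the same semantics as the shell's own nofile limits (RLIMIT_NOFILE); you can inspect the corresponding limits of any shell, on the host or inside a container, with the ulimit builtin:

```shell
# Inspect the open-file limits of the current shell.
ulimit -Sn    # soft limit (what processes get by default)
ulimit -Hn    # hard limit (ceiling a process may raise its soft limit to)
```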

See also