Docker

From ArchWiki

Latest revision as of 20:36, 29 July 2019

Docker is a utility to pack, ship and run any application as a lightweight container.

Installation

Install the docker package or, for the development version, the docker-git package from the AUR. Next, start and enable docker.service and verify operation:

# docker info

Note that starting the docker service may fail if you have an active VPN connection due to IP conflicts between the VPN and Docker's bridge and overlay networks. If this is the case, try disconnecting the VPN before starting the docker service. You may reconnect the VPN immediately afterwards. You can also try to deconflict the networks.

If you want to be able to run docker as a regular user, add your user to the docker user group.
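
One way to do this, assuming the user name is archie (a placeholder); the change takes effect after logging out and back in, or immediately in a new shell started with newgrp:

# usermod -aG docker archie
$ newgrp docker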

Warning: Anyone added to the docker group is root equivalent. More information: https://github.com/docker/docker/issues/9976 and https://docs.docker.com/engine/security/security/.

Configuration

Storage driver

The docker storage driver (or graph driver) has a huge impact on performance. Its job is to store layers of container images efficiently, that is, when several images share a layer, only one copy of that layer uses disk space. The compatibility option, devicemapper, offers suboptimal performance, which is outright terrible on rotating disks. Additionally, devicemapper is not recommended for production use.

As Arch Linux ships new kernels, there is no point in using the compatibility option. A good, modern choice is overlay2.

To see the current storage driver, run # docker info | head; modern docker installations should already use overlay2 by default.

To set your own choice of storage driver, edit /etc/docker/daemon.json (create it if it does not exist):

/etc/docker/daemon.json
{
  "storage-driver": "overlay2"
}

Afterwards, restart docker.
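
For example, to apply the change and confirm which driver is in use:

# systemctl restart docker.service
# docker info | head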

Further information on options is available in the user guide. For more information about options in daemon.json, see the dockerd documentation.

Remote API

To open the Remote API to port 4243 manually, run:

# /usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock

The -H tcp://0.0.0.0:4243 part opens the Remote API.

The -H unix:///var/run/docker.sock part keeps the local socket available for access from the host machine's terminal.
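
To verify that the Remote API responds, you can query it over HTTP; a minimal check, assuming the daemon was started with the options above and that port 4243 is reachable locally:

$ curl http://localhost:4243/version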

Remote API with systemd

To start the remote API with the docker daemon, create a Drop-in snippet with the following content:

/etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:4243 -H unix:///var/run/docker.sock

Daemon socket configuration

The docker daemon listens on a Unix socket by default. To listen on a specified port instead, create a Drop-in snippet with the following content:

/etc/systemd/system/docker.socket.d/socket.conf
[Socket]
ListenStream=0.0.0.0:2375

Proxies

Proxy configuration is broken down into two parts: first, the host configuration of the Docker daemon; second, the configuration required for your containers to see your proxy.

Proxy configuration

Create a Drop-in snippet with the following content:

/etc/systemd/system/docker.service.d/proxy.conf
[Service]
Environment="HTTP_PROXY=192.168.1.1:8080"
Environment="HTTPS_PROXY=192.168.1.1:8080"
Note: This assumes 192.168.1.1 is your proxy server; do not use 127.0.0.1.

Verify that the configuration has been loaded:

# systemctl show docker --property Environment
Environment=HTTP_PROXY=192.168.1.1:8080 HTTPS_PROXY=192.168.1.1:8080

Container configuration

The settings in the docker.service file will not translate into containers. To achieve this you must set ENV variables in your Dockerfile thus:

FROM archlinux/base
ENV http_proxy="http://192.168.1.1:3128"
ENV https_proxy="https://192.168.1.1:3128"

Docker provides detailed information on configuration via ENV within a Dockerfile.

Configuring DNS

By default, docker will make resolv.conf in the container match /etc/resolv.conf on the host machine, filtering out local addresses (e.g. 127.0.0.1). If this yields an empty file, then Google DNS servers are used. If you are using a service like dnsmasq to provide name resolution, you may need to add an entry to the /etc/resolv.conf for docker's network interface so that it is not filtered out.
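
Alternatively, you can pin the DNS servers that containers use instead of relying on the copied resolv.conf; a minimal sketch, assuming 192.168.1.1 is the address your dnsmasq instance listens on:

/etc/docker/daemon.json
{
  "dns": ["192.168.1.1"]
}

If daemon.json already exists (e.g. for the storage driver), merge the keys into the same file, then restart docker.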

Running Docker with a manually-defined network on systemd-networkd

If you manually configure your network using systemd-networkd version 220 or higher, containers you start with Docker may be unable to access your network. Beginning with version 220, the forwarding setting for a given network (net.ipv4.conf.<interface>.forwarding) defaults to off. This setting prevents IP forwarding. It also conflicts with Docker which enables the net.ipv4.conf.all.forwarding setting within a container.

A workaround is to edit the <interface>.network file in /etc/systemd/network/, adding IPForward=kernel on the Docker host:

/etc/systemd/network/<interface>.network
[Network]
...
IPForward=kernel
...

This configuration allows IP forwarding from the container as expected.

Images location

By default, docker images are located at /var/lib/docker. They can be moved to a different partition. First, stop docker.service.

If you have been running containers, make sure everything under /var/lib/docker is fully unmounted. Once that is done, you may move the data from /var/lib/docker to the target destination.
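
A minimal sketch of the move, assuming the new location is /mnt/data/docker (a placeholder path):

# systemctl stop docker.service
# mv /var/lib/docker /mnt/data/docker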

Then add a Drop-in snippet for the docker.service, adding the --data-root parameter to the ExecStart:

/etc/systemd/system/docker.service.d/docker-storage.conf
[Service]
ExecStart= 
ExecStart=/usr/bin/dockerd --data-root=/path/to/new/location/docker -H fd://

Insecure registries

If you decide to use a self-signed certificate for your private registry, Docker will refuse to use it until you declare that you trust it. Add a Drop-in snippet for the docker.service, adding the --insecure-registry parameter to dockerd:

/etc/systemd/system/docker.service.d/override.conf
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --insecure-registry my.registry.name:5000

Images

Arch Linux

The following command pulls the archlinux/base x86_64 image. This is a stripped-down version of Arch core (no networking, etc.).

# docker pull archlinux/base

See also README.md.

For a full Arch base, clone the repo from above and build your own image.

$ git clone https://github.com/archlinux/archlinux-docker.git

Edit the packages file so it only contains 'base'. Then run:

# make docker-image

Debian

The following command pulls the debian x86_64 image.

# docker pull debian

Manually

Build Debian image with debootstrap:

# mkdir jessie-chroot
# debootstrap jessie ./jessie-chroot http://http.debian.net/debian/
# cd jessie-chroot
# tar cpf - . | docker import - debian
# docker run -t -i --rm debian /bin/bash

Remove Docker and images

If you want to remove Docker entirely, you can do so by following the steps below:

Note: Do not just copy and paste these commands without making sure you know what you are doing.

Check for running containers:

# docker ps

List all containers on the host (running and stopped) for deletion:

# docker ps -a

Stop a running container:

# docker stop <CONTAINER ID>

Kill any still-running containers:

# docker kill <CONTAINER ID>

Delete all containers listed by ID:

# docker rm <CONTAINER ID>

List all Docker images:

# docker images

Delete all images by ID:

# docker rmi <IMAGE ID>

Delete all stopped containers, unused networks, and dangling images (objects not associated with a container); unused volumes are only removed if you also pass the --volumes flag:

# docker system prune

To additionally remove all unused images (not just dangling ones), add the -a flag to the command:

# docker system prune -a

Delete all Docker data (purge directory):

Note: The accuracy of the following step is disputed: running # rm -R /var/lib/docker will leave behind the btrfs subvolumes of removed containers.
# rm -R /var/lib/docker

Run GPU accelerated Docker containers with NVIDIA GPUs

With NVIDIA Container Toolkit (recommended)

Starting from Docker version 19.03, NVIDIA GPUs are natively supported as Docker devices. NVIDIA Container Toolkit is the recommended way of running containers that leverage NVIDIA GPUs.

Install the nvidia-container-toolkit package from the AUR. Next, restart docker. You can now run containers that make use of NVIDIA GPUs using the --gpus option:

# docker run --gpus all nvidia/cuda:9.0-base nvidia-smi

Specify how many GPUs are enabled inside a container:

# docker run --gpus 2 nvidia/cuda:9.0-base nvidia-smi

Specify which GPUs to use:

# docker run --gpus '"device=1,2"' nvidia/cuda:9.0-base nvidia-smi

or

# docker run --gpus '"device=UUID-ABCDEF,1"' nvidia/cuda:9.0-base nvidia-smi

Specify a capability (graphics, compute, ...) for the container (though this is rarely if ever used this way):

# docker run --gpus all,capabilities=utility nvidia/cuda:9.0-base nvidia-smi

For more information see README.md and Wiki.

With NVIDIA Container Runtime

Install the nvidia-container-runtime package from the AUR. Next, register the NVIDIA runtime by editing /etc/docker/daemon.json:

/etc/docker/daemon.json
{
  "runtimes": {
    "nvidia": {
      "path": "/usr/bin/nvidia-container-runtime",
      "runtimeArgs": []
    }
  }
}

and then restart docker.

The runtime can also be registered via a command line option to dockerd:

# /usr/bin/dockerd --add-runtime=nvidia=/usr/bin/nvidia-container-runtime

Afterwards, GPU-accelerated containers can be started with:

# docker run --runtime=nvidia nvidia/cuda:9.0-base nvidia-smi

or (requires Docker version 19.03 or higher):

# docker run --gpus all nvidia/cuda:9.0-base nvidia-smi

See also README.md.

With nvidia-docker (deprecated)

nvidia-docker is a wrapper around NVIDIA Container Runtime which registers the NVIDIA runtime by default and provides the nvidia-docker command.

To use nvidia-docker, install the nvidia-docker package from the AUR and then restart docker. Containers with NVIDIA GPU support can then be run using any of the following methods:

# docker run --runtime=nvidia nvidia/cuda:9.0-base nvidia-smi
# nvidia-docker run nvidia/cuda:9.0-base nvidia-smi

or (requires Docker version 19.03 or higher):

# docker run --gpus all nvidia/cuda:9.0-base nvidia-smi
Note: nvidia-docker is a legacy method for running NVIDIA GPU accelerated containers used prior to Docker 19.03 and has been deprecated. If you are using Docker version 19.03 or higher, it is recommended to use NVIDIA Container Toolkit instead.

Useful tips

To grab the IP address of a running container:

$ docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container-name OR id> 
172.17.0.37

For each running container, the name and corresponding IP address can be listed for use in /etc/hosts:

#!/usr/bin/env sh
# Print "<IP> <name>" for every running container, suitable for /etc/hosts
for ID in $(docker ps -q); do
    IP=$(docker inspect --format='{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "$ID")
    NAME=$(docker inspect --format='{{.Name}}' "$ID" | sed 's|^/||')
    printf "%s %s\n" "$IP" "$NAME"
done

Troubleshooting

docker0 Bridge gets no IP / no internet access in containers

Docker enables IP forwarding by itself, but by default systemd-networkd overrides the respective sysctl setting. Set IPForward=yes in the network profile. See Internet sharing#Enable packet forwarding for details.
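
For example, a sketch mirroring the network file shown earlier, but with IPForward=yes:

/etc/systemd/network/<interface>.network
[Network]
...
IPForward=yes
...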

Note:
  • You may need to restart docker.service each time you restart systemd-networkd.service or iptables.service.
  • Also be aware that nftables may block docker connections by default. Use nft list ruleset to check for blocking rules. nft flush chain inet filter forward removes all forwarding rules temporarily. Edit /etc/nftables.conf to make changes permanent. Remember to restart nftables.service to reload rules from the config file.

Default number of allowed processes/threads too low

If you run into error messages like

# e.g. Java
java.lang.OutOfMemoryError: unable to create new native thread
# e.g. C, bash, ...
fork failed: Resource temporarily unavailable

then you might need to adjust the number of processes allowed by systemd. The default is 500 (see system.conf), which is pretty small for running several docker containers. Edit the docker.service with the following snippet:

# systemctl edit docker.service
[Service]
TasksMax=infinity

Error initializing graphdriver: devmapper

If systemctl fails to start docker and provides an error:

Error starting daemon: error initializing graphdriver: devmapper: Device docker-8:2-915035-pool is not a thin pool

Then, try the following steps to resolve the error. Stop the service, back up /var/lib/docker/ (if desired), remove the contents of /var/lib/docker/, and try to start the service. See the open GitHub issue for details.
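
A sketch of those steps (the backup path is a placeholder; note that this discards all existing images and containers):

# systemctl stop docker.service
# cp -a /var/lib/docker /var/lib/docker.bak
# rm -rf /var/lib/docker/*
# systemctl start docker.service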

Failed to create some/path/to/file: No space left on device

If you are getting an error message like this:

ERROR: Failed to create some/path/to/file: No space left on device

when building or running a Docker image, even though you do have enough disk space available, make sure:

  • Tmpfs is disabled or has enough memory allocation. Docker might be trying to write files into /tmp but fails due to restrictions in memory usage and not disk space.
  • If you are using XFS, you might want to remove the noquota mount option from the relevant entries in /etc/fstab (usually where /tmp and/or /var/lib/docker reside). Refer to Disk quota for more information, especially if you plan on using and resizing the overlay2 Docker storage driver.
  • XFS quota mount options (uquota, gquota, prjquota, etc.) fail during re-mount of the file system. To enable quota for the root file system, the mount option must be passed to the initramfs as the rootflags= kernel parameter. Subsequently, it should not be listed among the mount options in /etc/fstab for the root (/) filesystem (see the sketch after the note below).
Note: XFS quota differs in some ways from the standard Linux Disk quota; the article at http://inai.de/linux/adm_quota may be worth reading.
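
For example, to enable XFS project quota on the root file system (a sketch), append the option to the kernel command line rather than listing it in /etc/fstab:

rootflags=prjquota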

Invalid cross-device link in kernel 4.19.1

If commands like dpkg fail to run in docker, e.g.:

dpkg: error: error creating new backup file '/var/lib/dpkg/status-old': Invalid cross-device link

Either add an overlay.metacopy=N kernel parameter or downgrade to a 4.18.x kernel until this issue is resolved. More information is available in the Arch forum thread.

CPUACCT missing in docker with Linux-ck

In newer versions of Linux-ck (some users experienced this with 4.19; with 4.20 it appears to be general), a change to MuQSS disables the CONFIG_CGROUP_CPUACCT kernel option, which makes some usage of docker (run or build) produce the following error:

$ docker run --rm hello-world
docker: Error response from daemon: unable to find "cpuacct" in controller set: unknown.

This error does not seem to affect the docker daemon, just containers. Read more on Linux-ck#CPUACCT missing in docker.

Docker-machine fails to create virtual machines using the virtualbox driver

In case docker-machine fails to create the VMs using the virtualbox driver with the following error:

VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory

Simply reload VirtualBox via the CLI with vboxreload.

Starting Docker breaks KVM bridged networking

This is a known issue. You can use the following workaround:

/etc/docker/daemon.json
{
  "iptables": false
}

See also

  • Official website: https://www.docker.com
  • Arch Linux on docs.docker.com: https://docs.docker.com/engine/installation/linux/archlinux/
  • Are Docker containers really secure? (opensource.com): https://opensource.com/business/14/7/docker-security-selinux
  • Awesome Docker: https://awesome-docker.netlify.com/