From ArchWiki

CRI-O is an OCI-based implementation of the Kubernetes Container Runtime Interface. As such, it is one of the container runtimes that can be used with a node of a Kubernetes cluster.

Installation

Install the cri-o package.

The package will set the system up to load the overlay and br_netfilter modules and set the following sysctl options:

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1

To use CRI-O without a reboot, make sure to load the modules and configure the sysctl values accordingly.
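This can be done by hand like so (a sketch, assuming the modules-load.d and sysctl.d snippets shipped by the package are already installed):

```shell
# Load the required kernel modules now rather than at the next boot.
modprobe overlay
modprobe br_netfilter

# Re-apply all sysctl configuration files, including those shipped by cri-o.
sysctl --system
```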

Configuration

CRI-O is configured via /etc/crio/crio.conf or via drop-in configuration files in /etc/crio/crio.conf.d/.
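As an illustrative sketch of the drop-in mechanism (the filename is arbitrary, and the option shown is only an example; see crio.conf(5) for the available options), a file raising the log level might look like:

```toml
# Illustrative drop-in, e.g. /etc/crio/crio.conf.d/01-log-level.conf.
# The log_level option is assumed to live in the [crio.runtime] table
# as documented in crio.conf(5).
[crio.runtime]
log_level = "debug"
```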

Networking

Plugin installation

CRI-O can make use of container networking as provided by cni-plugins, or plugins installed with in-cluster deployments such as weave, flannel, calico, etc.

Plugin directories

Arch installs the plugins provided by cni-plugins to both /usr/lib/cni and /opt/cni/bin, but most other plugins (e.g. in-cluster deployments, kubelet managed plugins, etc.) by default only install to /opt/cni/bin. CRI-O is only configured to look for plugins in /usr/lib/cni, and as a result, any plugins in /opt/cni/bin are unavailable without some configuration changes.

This may present itself as a non-working network and an entry in the CRI-O logs similar to the following error:

Error validating CNI config file /etc/cni/net.d/<plugin-config-file>.conf: [failed to find plugin "<plugin>" in path [/usr/lib/cni/]]

There are two ways to resolve this: either change each of the other systems to install their plugins to /usr/lib/cni, or configure CRI-O to search /opt/cni/bin instead. The latter can be achieved with a drop-in configuration file:

[crio.network]
plugin_dirs = [
  "/opt/cni/bin/",
]

As this is an array, you can also set both or any other directories here as possible plugin locations.
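For instance, a drop-in covering both directories might look like this (a sketch; the trailing slashes follow the error message above):

```toml
[crio.network]
plugin_dirs = [
  "/usr/lib/cni/",
  "/opt/cni/bin/",
]
```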

Plugin configuration

Copy one of the examples from /usr/share/doc/cri-o/examples/cni/ to /etc/cni/net.d and modify it as needed.
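As a sketch of what such a file might contain, here is a minimal bridge configuration in the style of the shipped examples (the network name, bridge interface and subnet are placeholders to adapt):

```json
{
  "cniVersion": "1.0.0",
  "name": "crio",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "routes": [{ "dst": "0.0.0.0/0" }],
    "ranges": [[{ "subnet": "10.85.0.0/16" }]]
  }
}
```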

Storage

By default, CRI-O makes use of the overlay driver as its storage_driver for the container storage in /var/lib/containers/storage/. However, it can also be configured to use Btrfs or ZFS natively by changing the driver in containers-storage.conf(5):

[storage]
driver = "btrfs"

Container runtime

The cri-o package depends on the oci-runtime virtual package, which by default is satisfied by crun, selected through lexicographic ordering of the providers.

However, CRI-O makes use of the runc container runtime by default. Either install the runc package explicitly, or configure crun as the container runtime by adding the following drop-in configuration file:

[crio.runtime]
default_runtime = "crun"

[crio.runtime.runtimes.crun]
runtime_path = "/usr/bin/crun"
runtime_type = "oci"
runtime_root = "/run/crun"

Usage

Start and enable the crio.service systemd unit.
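For example:

```shell
systemctl enable --now crio.service
```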


Use crio status to inspect the state of the running CRI-O instance:

# crio status info
cgroup driver: systemd
storage driver: overlay
storage graph root: /var/lib/containers/storage
storage image:
default GID mappings (format <container>:<host>:<size>):
default UID mappings (format <container>:<host>:<size>):


The full configuration in use can be displayed with:

# crio status config

Now install the crictl package to interact with CRI-O from the command line. For example:

# source <(crictl completion bash)
# crictl pull
# crictl pull
# crictl images
# curl -O
# curl -O
# crictl run container-config.yaml podsandbox-config.yaml
# crictl logs $(crictl ps --last 1 --output yaml | yq -r .containers[0].id)
# crictl exec -it $(crictl ps --last 1 --output yaml | yq -r .containers[0].id) /bin/sh
# crictl rm -af
# crictl rmp -af
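If yq is unavailable, the last container's ID can likely be obtained from crictl directly (a sketch, assuming the --quiet and --latest flags of crictl ps, which mirror their docker ps counterparts):

```shell
# Print only the ID of the most recently created container,
# then fetch its logs.
crictl logs "$(crictl ps --quiet --latest)"
```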

Note that Docker Hub is not hard-coded as a default registry, so specify the container registry explicitly when pulling images.
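For example, with a fully qualified image reference (the image is chosen purely for illustration):

```shell
crictl pull docker.io/library/alpine:latest
```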
