From ArchWiki

CRI-O is an OCI-based implementation of the Kubernetes Container Runtime Interface.

As such, it is one of the container runtimes that can be used with a node of a Kubernetes cluster.

Installation
Install the cri-o package.

The package will set the system up to load the overlay and br_netfilter modules and set the following sysctl options:

 net.bridge.bridge-nf-call-iptables = 1
 net.bridge.bridge-nf-call-ip6tables = 1
 net.ipv4.ip_forward = 1

To use CRI-O without a reboot, make sure to load the modules and configure the sysctl values accordingly.
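A sketch of doing this by hand (the module and sysctl names are those listed above; all commands need root):

```shell
# Load the kernel modules required by CRI-O
modprobe overlay
modprobe br_netfilter

# Apply the sysctl settings without rebooting
# (the bridge-nf-call-* keys only exist once br_netfilter is loaded)
sysctl net.bridge.bridge-nf-call-iptables=1
sysctl net.bridge.bridge-nf-call-ip6tables=1
sysctl net.ipv4.ip_forward=1
```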

Configuration
CRI-O is configured via /etc/crio/crio.conf or via drop-in configuration files in /etc/crio/crio.conf.d/.


Plugin Installation

CRI-O can make use of container networking as provided by cni-plugins, or plugins installed with in-cluster deployments such as weave, flannel, calico, etc.

Warning: Arch installs the plugins provided by cni-plugins to both /usr/lib/cni and /opt/cni/bin, but most other plugins (e.g. in-cluster deployments, kubelet managed plugins, etc) by default only install to the second directory.

CRI-O is only configured to look for plugins in the first directory, and as a result, any plugins in the second directory are unavailable without some configuration changes.

This may present itself as a non-working network, and the CRI-O logs will show an error like the following:

Error validating CNI config file /etc/cni/net.d/<plugin-config-file>.conf: [failed to find plugin "<plugin>" in path [/usr/lib/cni/]]

There are two ways to resolve this: either change the other systems to install their plugins to /usr/lib/cni, or configure CRI-O to also search /opt/cni/bin. The latter can be achieved with a drop-in configuration file in /etc/crio/crio.conf.d/:

 [crio.network]
 plugin_dirs = [
     "/opt/cni/bin/",
 ]

As this is an array, you can also set both or any other directories here as possible plugin locations.
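For example, a drop-in file such as /etc/crio/crio.conf.d/10-plugin-dirs.conf (the filename here is a free choice) could list both directories:

```toml
[crio.network]
plugin_dirs = [
    "/opt/cni/bin/",
    "/usr/lib/cni/",
]
```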

Plugin Configuration

Copy one of the examples from /usr/share/doc/cri-o/examples/cni/ to /etc/cni/net.d and modify it as needed.
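For example, starting from the shipped bridge example (the example filename is an assumption; adjust the copy as needed):

```shell
# Copy the bridge example into the CNI configuration directory (as root)
cp /usr/share/doc/cri-o/examples/cni/10-crio-bridge.conf /etc/cni/net.d/

# Then edit /etc/cni/net.d/10-crio-bridge.conf, e.g. to change the subnets
```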

Warning: The cri-o package installs the 10-crio-bridge.conf and 99-loopback.conf examples to /etc/cni/net.d by default (as 100-crio-bridge.conf and 199-crio-loopback.conf respectively). This may conflict with Kubernetes cluster network fabrics (weave, flannel, calico, etc) and require manual deletion to resolve this (e.g. #2411 #2885).

Storage
By default, CRI-O makes use of the overlay driver as its storage_driver for the container storage in /var/lib/containers/storage/. However, it can also be configured to use btrfs or ZFS natively by changing the driver in /etc/containers/storage.conf:

 # sed -i 's/driver = ""/driver = "btrfs"/' /etc/containers/storage.conf

Container Runtime
CRI-O makes use of the runc container runtime by default. To use crun as the container runtime instead, add the following drop-in configuration file to /etc/crio/crio.conf.d/:

 [crio.runtime]
 default_runtime = "crun"

 [crio.runtime.runtimes.crun]
 runtime_path = "/usr/bin/crun"
 runtime_type = "oci"
 runtime_root = "/run/crun"

Usage
Start and enable the crio.service systemd unit.
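For example:

```shell
# Enable the service at boot and start it right away (as root)
systemctl enable --now crio.service

# Confirm that it is running
systemctl status crio.service
```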


Use crio-status to inspect the runtime:

 # crio-status info
 cgroup driver: systemd
 storage driver: vfs
 storage root: /var/lib/containers/storage
 default GID mappings (format <container>:<host>:<size>):
 default UID mappings (format <container>:<host>:<size>):


 # crio-status config

Install the crictl package to interact with CRI-O directly. For example:

 source <(crictl completion bash)
 crictl pull docker.io/library/busybox
 crictl images
 curl -O <url-of-container-config.yaml>
 curl -O <url-of-podsandbox-config.yaml>
 crictl run container-config.yaml podsandbox-config.yaml
 crictl logs $(crictl ps --last 1 --output yaml | yq -r .containers[0].id)
 crictl exec -it $(crictl ps --last 1 --output yaml | yq -r .containers[0].id) /bin/sh
 crictl rm -af
 crictl rmp -af

Note that Docker Hub is not hard-coded as a default registry, so the container registry must be specified explicitly when pulling an image.

See also