As such it is one of the container runtimes that can be used with a node of a Kubernetes cluster.
Install the package.
The package will set the system up to load the br_netfilter module and set the following sysctl options:

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
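Should the module not be loaded automatically on your system, a modules-load.d(5) fragment can load it at boot. This is only a sketch; the filename below is a suggestion, not something installed by the package:

```ini
# /etc/modules-load.d/br_netfilter.conf (hypothetical filename)
# Load the bridge netfilter module so the bridge-nf-call sysctls exist.
br_netfilter
```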
CRI-O is configured via /etc/crio/crio.conf or via drop-in configuration files.
CRI-O can make use of container networking as provided by plugins installed to /usr/lib/cni, or by plugins installed with in-cluster deployments (such as weave, flannel, calico, etc.) to /opt/cni/bin. Most other plugins (e.g. in-cluster deployments, kubelet managed plugins, etc.) by default only install to the second directory, but CRI-O is only configured to look for plugins in the first directory. As a result, any plugins in the second directory are unavailable without some configuration changes.
This may present itself as a non-working network, and the CRI-O logs will show an error like the following:
Error validating CNI config file /etc/cni/net.d/<plugin-config-file>.conf: [failed to find plugin "<plugin>" in path [/usr/lib/cni/]]
There are two solutions available to resolve this: either change each of the other systems to use
/usr/lib/cni instead, or update CRI-O to also look in /opt/cni/bin.
The second solution can be achieved with a drop-in configuration file:
[crio.network]
plugin_dirs = [
  "/opt/cni/bin/",
]
As this is an array, you can set both directories, or any other directories, as possible plugin locations.
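For example, a drop-in that keeps both directories as plugin locations might look like this (a sketch; adjust the list to your setup):

```toml
[crio.network]
# Search both the in-cluster plugin directory and the packaged one.
plugin_dirs = [
  "/opt/cni/bin/",
  "/usr/lib/cni/",
]
```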
Copy one of the examples from
/etc/cni/net.d and modify it as needed.
Example configurations are installed to /etc/cni/net.d by default (including 199-crio-loopback.conf). These may conflict with Kubernetes cluster network fabrics (weave, flannel, calico, etc.) and may require manual deletion to resolve (e.g. #2411 #2885).
By default CRI-O makes use of the overlay driver as its storage_driver for the container storage in /var/lib/containers/storage/. However, it can also be configured to use Btrfs or ZFS natively by changing the driver in /etc/containers/storage.conf, e.g.:

sed -i 's/driver = ""/driver = "btrfs"/' /etc/containers/storage.conf
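The same substitution works for ZFS. As a sketch, the edit can be tried safely on a scratch copy first; the paths below are a temporary sandbox, not the real /etc/containers/storage.conf:

```shell
# Work on a scratch copy instead of the real /etc/containers/storage.conf.
tmp=$(mktemp)
printf '[storage]\ndriver = ""\n' > "$tmp"

# Switch the (empty) default driver to zfs, as done for btrfs above.
sed -i 's/driver = ""/driver = "zfs"/' "$tmp"

grep 'driver' "$tmp"    # driver = "zfs"
rm -f "$tmp"
```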
The resulting configuration can be checked with crio-status like this:
# crio-status info
cgroup driver: systemd
storage driver: vfs
storage root: /var/lib/containers/storage
default GID mappings (format <container>:<host>:<size>):
  0:0:4294967295
default UID mappings (format <container>:<host>:<size>):
  0:0:4294967295
# crio-status config ...
Now install crictl, and see e.g. https://kubernetes.io/docs/tasks/debug-application-cluster/crictl/ or https://github.com/kubernetes-sigs/cri-tools/blob/master/docs/crictl.md, or simply:
source <(crictl completion bash)
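crictl also needs to know where the CRI-O socket is. If it is not picked up automatically, it can be set in /etc/crictl.yaml; the socket path below is CRI-O's usual default, but verify it on your system:

```yaml
runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
```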
crictl pull index.docker.io/busybox
crictl pull quay.io/prometheus/busybox
crictl images
curl -O https://raw.githubusercontent.com/kubernetes-sigs/cri-tools/master/docs/examples/podsandbox-config.yaml
curl -O https://raw.githubusercontent.com/kubernetes-sigs/cri-tools/master/docs/examples/container-config.yaml
crictl run container-config.yaml podsandbox-config.yaml
crictl logs $(crictl ps --last 1 --output yaml | yq -r '.containers[0].id')
crictl exec -it $(crictl ps --last 1 --output yaml | yq -r '.containers[0].id') /bin/sh
crictl rm -af
crictl rmp -af
Note that Docker Hub is not hard-coded as a default registry, so specify the container registry explicitly. (See also https://github.com/kubernetes-sigs/cri-tools/pull/718.)