Install Arch Linux on ZFS
This article details the steps required to install Arch Linux onto a ZFS root filesystem.
Since ZFS kernel modules are out-of-tree (i.e. not included in the mainline kernel) and Arch Linux is a rolling release distribution, there will often be brief periods when the kernel-specific packages in the external repository are not in sync with those in the Arch repositories. This can sometimes result in the ZFS modules (DKMS packages) failing to compile with the latest kernel. If you always want to use the most recent kernel packages, installing Arch on ZFS might not be ideal.
See ZFS#Installation for possible solutions.
Acquire installation medium
To install Arch Linux on ZFS, you need an installation medium that includes the ZFS modules. The easiest way is to use an unofficial ISO that ships them (assuming you trust such ISOs). You can also use ISOs from other distributions that support ZFS, such as Ubuntu or NixOS, or create a custom image (see below).
Use an unofficial archiso that includes ZFS modules
An unofficial archiso exists that can be used directly, without the need to manually create an entire image or add ZFS modules once booted. Do note however that it includes only the linux-lts kernel and zfs-linux-lts module.
See r-maerz/archlinux-lts-zfs.
Use ISOs from other distros
You could also choose a distribution whose ISO has ZFS modules built in, since most distributions package arch-install-scripts. For example, both Ubuntu and NixOS ISOs should work. Just remember to change or skip some steps of the installation guide, such as network configuration, as needed.
Embedding ZFS module into custom archiso
To build a custom archiso, see ZFS#Create an Archiso image with ZFS support.
Select boot method
Since the initrd tool and boot loader you choose will affect later steps of the installation process, you should decide which combination to use before proceeding with the installation.
Initrd tools
By default, neither dracut nor mkinitcpio supports booting from a ZFS root, since they do not include the necessary kernel modules and userspace tools in the initrd. You will need to use dracut modules or mkinitcpio hooks to build initrds that can boot from a ZFS root. The initrd tool you choose will in turn affect the syntax of the kernel parameters used to specify the ZFS root.
Here are the options:
zfs hook
The zfs hook is the only option when using the default busybox-based initrd. To configure the zfs hook, simply add zfs before the filesystems hook in your mkinitcpio.conf(5).
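For example, starting from the default HOOKS array (a sketch; your array may differ depending on your setup):
HOOKS=(base udev autodetect microcode modconf kms keyboard keymap consolefont block zfs filesystems fsck)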
The possible kernel parameter syntaxes are:
- root=zfs, which determines the root filesystem using the bootfs property
- root=ZFS=<pool/dataset>, which uses a pool or a dataset as root. When a pool is specified, the root filesystem is determined based on the mountpoint property
- zfs=auto: same effect as root=zfs
- zfs=<pool/dataset>: same effect as root=ZFS=<pool/dataset>
Additionally, the following kernel parameters can be set to adjust the behavior of the initrd:
- zfs_force=1 makes the zpool import command use the -f flag
- zfs_wait=<seconds> waits for the devices to show up before running zpool import
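For example, assuming a hypothetical pool named rpool whose dataset rpool/root holds the root filesystem, the kernel command line could contain:
root=ZFS=rpool/root rw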
sd-zfs hook
The zfs hook is not compatible with systemd-based initrds. Instead, you should use the sd-zfs hook.
There are two choices: one shipped with zfs-utils-poscat from archlinuxcn and one shipped with mkinitcpio-sd-zfsAUR[broken link: package not found]. The former is actively maintained while the latter seems to be abandoned.
zfs-utils-poscat
To configure this hook, simply add it anywhere in the HOOKS array of your mkinitcpio.conf. A typical configuration could look like this:
HOOKS=(systemd sd-zfs autodetect microcode modconf kms keyboard sd-vconsole block filesystems fsck)
The supported cmdline formats are:
- root=zfs, which imports all pools in the initrd, searches for the first pool with the bootfs property set, and then mounts its bootfs as root.
- root=zfs:poolname, which imports only the specified pool and then mounts the pool's bootfs as root.
- root=zfs:poolname/dataset, which imports only the specified pool and then mounts the specified dataset as root.
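For example, assuming the same hypothetical rpool/root dataset as above, the kernel command line could contain:
root=zfs:rpool/root rw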
mkinitcpio-sd-zfs
Refer to the GitHub repository for documentation on configuration.
zfs module
If you would like to use dracut for the initrd instead, you should use the zfs dracut module shipped with zfs-utilsAUR. See the documentation at https://openzfs.github.io/openzfs-docs/man/master/7/dracut.zfs.7.html for how to configure the zfs module.
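For example, a minimal drop-in file such as /etc/dracut.conf.d/zfs.conf (the file name is an arbitrary choice) can explicitly pull the module into the generated initrd:
add_dracutmodules+=" zfs "
The dracut.zfs(7) man page linked above documents the kernel command line formats understood by the module.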
Boot loaders
Since importing the ZFS pools, mounting the root filesystem and pivot_root-ing into the new root are all handled by the UKI or vmlinuz+initrd, there are no restrictions on which boot loader you can use. Indeed, even an EFI boot stub should suffice, provided that the kernel parameters are configured properly for the tool you used to build your initrd (see #Initrd tools above).
Using GRUB2
GRUB is able to read ZFS filesystems, provided that the pools are created with only a limited set of features enabled (see ZFS#GRUB-compatible pool creation). It is therefore possible to place the UKI or initrd on the ZFS root when using GRUB.
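For example, recent OpenZFS releases ship a grub2 compatibility feature set that can be enabled at pool creation time (a sketch showing only the relevant property; combine it with the other options you need and see ZFS#GRUB-compatible pool creation for details):
# zpool create -o compatibility=grub2 <pool> <device>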
Partition the destination drive
Partitioning is done similarly to other filesystems. See Partitioning or the installation guide for what layout to use and how to partition disks.
Layout supporting full system rollback
To be able to use ZFS to snapshot everything you need to rebuild the UKI or vmlinuz+initrd (so that you can roll back your full system state), you can use the following partition layout:
- Do not mount anything on /boot; this way vmlinuz is placed on your root, which is a ZFS filesystem.
- If you use a UKI, mount the ESP on /efi and point the UKI target to /efi/EFI/Linux/<name of image>.efi.
- If you use vmlinuz+initrd, mount the ESP (UEFI) or the boot partition (BIOS) on /efi and point the initrd target to /efi/<name of initrd>.img. Set up a pacman hook that automatically copies vmlinuz from /boot/vmlinuz-* to /efi/ (see the sketch after this list).
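A sketch of such a pacman hook, assuming the linux-lts kernel (adjust Target and the file names to your kernel), saved as e.g. /etc/pacman.d/hooks/95-copy-vmlinuz.hook:
[Trigger]
Operation = Install
Operation = Upgrade
Type = Package
Target = linux-lts

[Action]
Description = Copying vmlinuz to /efi...
When = PostTransaction
Exec = /usr/bin/cp /boot/vmlinuz-linux-lts /efi/vmlinuz-linux-lts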
To perform a rollback, simply roll back your ZFS root filesystem and either regenerate your UKI, or regenerate the initrd and then copy vmlinuz from /boot/vmlinuz-* to /efi/ manually.
Set up the ZFS filesystem
Enable ZED on live CD
Since this guide assumes the usage of zfs-mount-generator(8), we need to generate the zfs-list cache before first booting into our system. This requires:
- Enabling zfs-zed.service on the live CD.
- Creating empty files at /etc/zfs/zfs-list.cache/<poolname> for every pool you intend to create, using touch (see the example commands after this list).
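For example, assuming you intend to create a single pool named rpool as in the example below:
# systemctl enable --now zfs-zed.service
# mkdir -p /etc/zfs/zfs-list.cache
# touch /etc/zfs/zfs-list.cache/rpool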
Create the root pool
See ZFS#Creating ZFS pools for detailed info. As an example, the following command creates a root pool named rpool on the partition /dev/nvme0n1p2 and sets the altroot property to /mnt:
# zpool create \
    -O acltype=posixacl \
    -O relatime=on \
    -O dnodesize=auto \
    -O normalization=formD \
    -O compression=zstd \
    -O mountpoint=/ \
    -R /mnt \
    rpool /dev/nvme0n1p2
Note: The altroot property, which is set via the -R flag during pool creation or import, temporarily adds a prefix to the mount points to avoid shadowing the live CD environment.
Create filesystems
See ZFS#Creating datasets for detailed info. Here are some considerations when choosing your dataset options and layouts:
- Most properties are inherited from parent dataset by child unless explicitly overridden.
- The default value of the mountpoint property of the child is <mountpoint of parent>/<name of child>.
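For example, continuing with the rpool created above, a child dataset for /home only needs a name; its mountpoint is inherited from the pool's root filesystem:
# zfs create rpool/home
Since rpool has mountpoint=/, rpool/home is automatically mounted at /home (at /mnt/home while the altroot is in effect).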
Install and configure Arch Linux
Follow the installation guide from Installation guide#Installation up to, but not including, the reboot. You should probably use the linux-lts kernel instead of linux.
Install ZFS
Follow ZFS#Installation to install ZFS.
Configure ZFS
Follow ZFS#Configuration to configure the ZFS-related services. Note however:
- We are in a chroot, so do not try to start systemd services; just enable them (as sketched below).
- Skip all steps in ZFS#zfs-mount-generator except enabling zfs-zed.service. We will populate the cache later.
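One possible set of services to enable inside the chroot (a sketch; check ZFS#Configuration for what your setup actually needs):
# systemctl enable zfs-zed.service zfs-import-cache.service zfs-import.target zfs.target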
Set up the initrd
See #Initrd tools to configure the initrd generator of your choice. Do not forget to regenerate the initrd, either via mkinitcpio -P or dracut --regenerate-all.
Populate the zfs-list cache
Now exit the chroot. Copy /etc/zfs/zfs-list.cache to /mnt/etc/zfs/zfs-list.cache:
# cp -r /etc/zfs/zfs-list.cache /mnt/etc/zfs/
This provides the cache needed by zfs-mount-generator.
Unmount, export and reboot
Unmount all mounted filesystems (assuming the altroot is /mnt):
# umount -R /mnt
Export all pools:
# zpool export -a
Reboot:
# reboot