Talk:Install Arch Linux on ZFS


replace writehostid with zgenhostid?

For a few versions now, ZFS has shipped zgenhostid, a tool that writes a hostid to /etc/hostid. Should we recommend using it instead of the provided C code?
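For reference, a minimal sketch of what the recommendation would look like (assumes the ZFS userland tools are installed; not runnable without them):

```shell
# Write the hostid of the running system to /etc/hostid in the binary
# format the SPL expects:
zgenhostid $(hostid)
# Or write an explicit value, overwriting any existing file (-f):
zgenhostid -f 0x00bab10c
```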

Charlesmilette (talk) 22:32, 2 January 2018 (UTC)

base metapackage change and ALEZ script

With the changes to `base`, and looking at ALEZ, it seems ALEZ may not be up to date with the metapackage change, which could potentially lead to an unbootable system. Is there any confirmation that ALEZ still works?

Johncrist1988 (talk) 04:34, 16 October 2019 (UTC)

It should work as is, since the current release from last month predates the base change; however, I will update the script accordingly and release a new ISO.

Johnramsden (talk) 05:07, 16 October 2019 (UTC)

Do not create the encrypted zroot as the base pool, but as a child dataset

If you follow the suggestions on encryption, you end up unable to use ZFS's send/recv for all child datasets without the drawback that an encryption key needs to be loaded for each dataset (when they are sent one by one). The reason is that the dataset zroot is created by the zpool creation and is therefore read-only. When you want to receive a (recursive) raw stream (which is IMO the only option), it cannot be received onto the existing zroot (even with the recv -F option). Since the rootfs is not suggested to reside on zroot anyway, but on zroot/ROOT/default, I think a layout with an unencrypted zroot pool and an encrypted filesystem residing in it would be the best option; that subtree can then be sent away in order to clone a complete system (e.g. a desktop setup).
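To make the suggestion concrete, a rough sketch (device path, pool and dataset names are all illustrative, not taken from the article):

```shell
# Unencrypted pool; the pool's root dataset stays unencrypted and unmounted.
zpool create -o ashift=12 -m none zroot /dev/disk/by-id/...
# A single encrypted child; everything below it inherits the key.
zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt \
    -o mountpoint=none zroot/root
zfs create -o mountpoint=/ -o canmount=noauto zroot/root/default
# A raw, recursive send now replicates the whole encrypted subtree in one
# command, without loading a key per dataset:
zfs send -Rw zroot/root@backup | zfs recv -F otherpool/root
```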

Einsiedlerkrebs (talk) 13:25, 14 November 2022 (UTC)

Update partition scheme for separate boot pool, root pool and swap partition

As the current situation stands, GRUB severely lags behind OpenZFS development. Many features need to be disabled on the pool which GRUB will boot from.

The official OpenZFS guide has adopted the approach of creating a separate boot pool (mounted at /boot) and root pool (mounted at /) since at least the Debian Stretch days.

A separate swap partition is also recommended over swap on zvol to avoid deadlocks.
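Sketched out, the proposed scheme looks roughly like this (partition numbers and the feature restriction are my assumptions; OpenZFS 2.1+ ships a grub2 compatibility file):

```shell
# Boot pool restricted to features GRUB can read (OpenZFS >= 2.1):
zpool create -o compatibility=grub2 -m /boot bpool /dev/sdX2
# Root pool with all features enabled:
zpool create -m none rpool /dev/sdX3
# Plain swap partition instead of swap on a zvol (avoids deadlocks):
mkswap /dev/sdX4
swapon /dev/sdX4
```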

I've completed a guide for Root on ZFS here, containing the above fixes.

One might ask: why not just mount the ESP at /boot? Here's my argument: with /boot on a ZFS pool, it is possible to select an initramfs from a ZFS snapshot in GRUB, which enables recovery from a faulty initramfs. If it is stored on the ESP, there is no way to rescue a system with a bad initramfs, or to do a full system rollback to a previous state, as the contents of /boot are not protected by ZFS.

rozb3-pac (AUR) relies on separate boot and root pools to perform a full system rollback.

M0p (talk) 16:38, 24 December 2020 (UTC)

Add bootloader method ZfsBootMenu


Since I broke my GRUB compatibility by accidentally enabling a ZFS feature which is not supported by GRUB, I looked for alternative methods to boot a ZFS root pool. I found zfsbootmenu, which is IMHO the best way to boot ZFS (EFI only?):

  • Just drop the zfsbootmenu.EFI file from the release into an EFI subdirectory on your EFI partition and configure your NVRAM with efibootmgr to use that file
  • You then get a menu which finds all pools and boot environments, provides an emergency shell (with ZFS tools) and allows cloning/booting older snapshots and much more
  • It works with probably every ZFS feature, because it boots a complete kernel with ZOL enabled.

I am new to writing wiki entries and I don't have the rights to modify this article (which is probably a good thing), but maybe one of you Arch ZFS enthusiasts can look into zfsbootmenu and confirm that it is what we are all looking for.

More detailed instructions (may be used to include in the wiki):

Install ZFS Boot Menu

#mount your EFI system partition
mount /dev/sdXY /efi
#create a subfolder for ZBM
mkdir -p /efi/EFI/zbm
#download the latest release (URL missing in the original; see the ZFSBootMenu release page)
wget -O /efi/EFI/zbm/zfsbootmenu.EFI
#add an entry to your boot menu
efibootmgr --disk /dev/sdX --part Y --create --label "ZFSBootMenu" --loader '\EFI\zbm\zfsbootmenu.EFI' --unicode "spl_hostid=$(hostid) zbm.timeout=3 zbm.prefer=zroot zbm.import_policy=hostid" --verbose

You can probably also just name the file /efi/EFI/BOOT/BOOTX64.EFI on the EFI partition of a USB stick in order to make it bootable without a custom NVRAM entry.

Configure ZFS Boot Menu

In order to keep your current kernel parameters, you should set them as the ZFS property **org.zfsbootmenu:commandline** on the bootfs, but exclude the root and bootfs parameters:

#lookup your current commandline
cat /proc/cmdline
-> zfs=zroot/ROOT/arch noresume init_on_alloc=0 rw spl.spl_hostid=0xdeadbeef
#set the property without the zfs parameter
zfs set org.zfsbootmenu:commandline="noresume init_on_alloc=0 rw spl.spl_hostid=0xdeadbeef" zroot/ROOT


  • If your ZFS root is read-only after boot, make sure you have the parameter **rw** in your command line (you can set this interactively in the menu for one boot, but you should configure the org.zfsbootmenu:commandline property as mentioned above)
  • If you are booting into an old snapshot and the system does not boot at all, you probably have automatic mounting enabled, which mounts your current rootfs over your snapshotted rootfs. In order to avoid that, make sure you set the ZFS canmount property to noauto on the root filesystem (which is mounted by the kernel anyway)

—This unsigned comment is by Thomas.oster (talk) 08:38, 16 July 2021 (UTC). Please sign your posts with ~~~~!

Thomas, I think you don't have rights to edit the article because you still don't have the 20 edits needed to become an autoconfirmed user. Regarding `zfsbootmenu`, that's very interesting and helpful indeed, thank you! I don't yet have that much experience with EFI, otherwise I would have helped myself. My initial impression is that a well-tested AUR package with, among other things, proper initcpio support may be rather helpful here; otherwise some less experienced or overly enthusiastic users might get themselves in trouble (many of us have at one point or another been guilty of "boldly testing in production", though fewer will probably openly admit to it :D).
— Kerberizer • T/C 10:47, 16 July 2021 (UTC)
I could provide an AUR package, but it is not so easy, because the EFI directory is not mounted on every system, and not at the same mountpoint. Also, an AUR package writing to the NVRAM on installation does not feel right. However, if some people using ZFS root and EFI could just test it by booting from a USB stick containing ZFS Boot Menu, we could add the proper process to the wiki as an alternative to GRUB and establish a standard way with defined names and mountpoints.
Thomas.oster (talk) 06:35, 29 September 2021 (UTC)
Thomas, I think that an AUR package with just the EFI binary might indeed be okay as a start, together, as you suggest, with an additional information on the wiki on how to actually use it (not unlike packages like edk2-shell).
There is something that bothers me a little bit, though, in particular if ZBM's EFI binary is used. Their latest release, for example, is built with Linux 5.12.14 and ZFS 2.1.0. The kernel version is fine for the task, I guess, but the ZFS version apparently might lag a bit behind upstream OpenZFS. Perhaps the ZBM developers will be diligent enough to publish new releases as new ZFS versions come out (latest is 2.1.1, but let's say it's okay to skip it, as it is only a patch-level release). Still, that's something that one should perhaps be mindful of; even more so if using the zfs*-git packages. Of course, problems would arise only if new and incompatible ZFS features are introduced, but in the end that's what has also been breaking Grub at different times.
This actually is the reason why for my own systems -- I've been resisting UEFI for years, but recently realized it's high time to abandon BIOS/CSM -- I've chosen an extremely simplistic and hopefully foolproof approach: systemd-boot, an EFI partition on each disk in the root pool, and a path-activated service that copies the kernel, initramfs, firmware, EFI shell, etc. to each EFI partition (mostly following some of the suggestions in systemd-boot). But, of course, I don't use boot environments, which is the main point of ZBM, and which I'm sure many people find useful or even necessary for their own tasks. On a side note, the boot process in Linux probably needs a general refresh, especially in terms of security -- it was interesting to read Poettering's opinion these days. That's quite a different topic, of course, yet I've also been thinking how ZFS would fit in such changes, which might happen one day whether we like them or not, much like systemd did. </offtopic>
As for possible testing, I might be able to do some, but since ZBM, sadly, seems to not fit that well my needs, I hope other people, who would actually use it, will step in.
— Kerberizer • T/C 11:46, 29 September 2021 (UTC)
Kerberizer You are right about the ZFS version problem. That's why I created a Docker image which builds the ZBM EFI binary. But this still depends on the packages of Void Linux. They promised they will be nearly instantly up to date, but it would of course be better if that AUR package built an EFI binary from the currently installed Arch packages (similar to DKMS). I personally copy my kernel and initramfs to the EFI partition automatically and boot it directly as an EFI stub, but if I ever need to roll back or temporarily boot an old snapshot, I can boot ZFS Boot Menu and just select the snapshot, which always contains the kernel and initramfs that were present at that time.
Thomas.oster (talk) 12:00, 29 September 2021 (UTC)
Thomas, ah, it's obvious how I still lack experience with UEFI and forget that it's much easier to boot different things—from UEFI itself, from an EFI shell, etc. :D One maybe stupid question comes to mind: how much additional value does ZBM have over a live ISO with ZFS? I suppose booting from a snapshot is certainly one thing that you cannot directly achieve with an ISO? (Well, you could of course roll back the snapshot and then boot, but that's more time-consuming and less flexible.)
— Kerberizer • T/C 10:54, 5 October 2021 (UTC)
Kerberizer the first thing is that you don't need a separate boot medium. I guess for the ISO you have to use a CD or USB stick. Also, last time I had problems, I had to download the Arch ISO, set up the network and build the ZFS modules in the live system before I could import my pool. With a pre-built ISO this might not be so hard. However, the ZFS boot menu is always there (similar to GRUB), and besides a complete shell with zfs/zpool commands it has a nice overview of snapshots. You could also have multiple boot environments on different ZFS filesystems on the same pool, so you could boot either Arch or Debian or whatever. I don't think it is an alternative to a rescue ISO, but an alternative to GRUB. If your system is EFI capable (which it should be), you can just try it out by dropping the EFI file onto any FAT32-formatted partition and selecting it in the boot-EFI-file menu, or launching it via the EFI shell. And last but not least: in the live ISO you can mount/import your ZFS; in ZFS Boot Menu you can boot it (it uses kexec to boot another kernel from a running Linux kernel).
Thomas.oster (talk) 19:43, 5 October 2021 (UTC)
Kerberizer and Thomas.oster FYI I've just added two zfsbootmenu packages to the AUR and added instructions to the manual; one package includes the tools to generate a ZBM image, and the other installs the prebuilt EFI binaries. Following your discussion, I decided to add some logic to the EFI binaries package so that it detects the ESP partition and installs the EFI binaries there. Gardar (talk) 17:42, 4 August 2022 (UTC)

Note that the AUR packages are different from the archzfs repo's

When going through these instructions, I used the AUR package instead of the archzfs repo, which led to the inability to boot into my encrypted root. I found out in the end that this was due to a difference between the AUR package's and the archzfs repo package's initcpio hooks. This should be noted.

See the difference in zfs-utils here: AUR vs. archzfs

—This unsigned comment is by Dongcarl (talk) 2022-09-08T01:50:28. Please sign your posts with ~~~~!

Can you say specifically what's different and why it didn't work? I also used the AUR package `zfs-dkms`, and I added a note that you need to add 'zfs' to MODULES. Is that true in general, or just for the AUR versions? Za3k (talk) 21:14, 2 August 2023 (UTC)
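For context, the note in question amounts to something like this fragment of /etc/mkinitcpio.conf (the hook order here is illustrative, and whether MODULES=(zfs) is required in general or only with the AUR packages is exactly the open question):

```shell
# /etc/mkinitcpio.conf (fragment, illustrative)
MODULES=(zfs)
HOOKS=(base udev autodetect modconf block keyboard zfs filesystems)
```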

Make dataset scheme one simple minimal path

Right now there are about three sets of advice for how to create datasets and mount your root (legacy mount, automount, etc.). Let's reduce it to just the first one presented, and add the rest as options at the end. The dataset explanations get confusing. Za3k (talk) 21:17, 2 August 2023 (UTC)

RAID configuration for root?

To enable less downtime in case of a disk failure, a RAIDZ configuration might be desirable for the root and other filesystems needed for operation. Perhaps the example should include a raidz configuration? Wild Penguin (talk) 19:35, 9 November 2023 (UTC)
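Something like the following could serve as that example (a sketch; the disk paths are placeholders, and raidz1 over three disks is just one reasonable shape, a mirror being another common choice):

```shell
# Illustrative raidz1 root pool across three disks; survives one disk failure.
zpool create -m none zroot raidz1 \
    /dev/disk/by-id/ata-disk1 /dev/disk/by-id/ata-disk2 /dev/disk/by-id/ata-disk3
```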

About Troubleshooting:

- I was able to boot, but the system was read-only. I believe this may still be the same problem as in section 9.1. The section is not clear on whether one should remove spl.spl_hostid, if it was set?

- Also, it seems like zgenhostid $(hostid) generates only one digit. I believe this might not be correct? Wild Penguin (talk) 20:35, 9 November 2023 (UTC)

My first problem was really stupid - I forgot to add rw to the kernel parameters :--DD. I will add a small friendly notice to the article later, just for user-friendliness.
As for the hostid file, I misinterpreted it. It contains the hostid as a binary number, so it is not readable with cat. Sorry for the noise! Wild Penguin (talk) 14:39, 10 November 2023 (UTC)
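For anyone else tripped up by this: /etc/hostid stores the 32-bit hostid as four raw bytes, which is why cat shows nothing readable, but od can decode it. A sketch using a throwaway file instead of the real /etc/hostid:

```shell
# Write 0xdeadbeef as four little-endian bytes, the way zgenhostid would,
# then decode the file as a single 32-bit hex word:
printf '\xef\xbe\xad\xde' > /tmp/hostid.example
od -A none -t x4 /tmp/hostid.example
```

On the real file, `od -A none -t x4 /etc/hostid` should print the same value that `hostid` reports (on a little-endian machine).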

Why these fs options when creating the pool?

I believe a few clarifications could be useful. First, when creating the pool, why are so many -O zfsprops given, why are the defaults not desirable, and if/when would a user wish to change these? Looking at the zfsprops man page, it is not at all clear. From later on, it seems there is a good reason to set posixacl (along with xattr=sa), but what about relatime, dnodesize, normalization=formD? Optimally, a link to some reference would be welcome instead of explaining it in the page. Wild Penguin (talk) 21:58, 7 December 2023 (UTC)
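For the record, the creation line in question looks roughly like this, with one-line annotations I distilled from the zfsprops man page (treat them as a starting point, not authoritative rationale):

```shell
# Illustrative pool creation with the properties the article sets:
zpool create -m none \
    -O acltype=posixacl \
    -O xattr=sa \
    -O relatime=on \
    -O dnodesize=auto \
    -O normalization=formD \
    zroot /dev/disk/by-id/...
# acltype=posixacl     POSIX ACLs, needed e.g. by systemd-journald
# xattr=sa             store xattrs/ACLs in the dnode instead of hidden files (faster)
# relatime=on          relaxed atime updates, as on most Linux filesystems
# dnodesize=auto       larger dnodes so SA xattrs fit inline (pairs with xattr=sa)
# normalization=formD  Unicode-normalize filenames (implies utf8only=on)
```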

Why so many datasets on the root partition?

Another thing causing confusion for a ZFS newbie is: why do we need so many datasets? On a "regular" filesystem, one would usually not create that many partitions or separate mounts. What are the downsides of skipping their creation? Optimally, a link to some reference would be welcome instead of explaining it in the page. Wild Penguin (talk) 21:58, 7 December 2023 (UTC)
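A sketch of the usual motivation (names illustrative): datasets are cheap, and each one gets its own snapshots, rollbacks, send/recv and properties, so anything you want to survive an OS rollback, or tune differently, goes in its own dataset:

```shell
# Container for boot environments; never mounted itself.
zfs create -o mountpoint=none zroot/ROOT
# The OS lives here; rolling this back does not touch /home.
zfs create -o mountpoint=/ -o canmount=noauto zroot/ROOT/default
# Data kept independent of the OS:
zfs create -o mountpoint=none zroot/data
zfs create -o mountpoint=/home zroot/data/home
```

The downside of skipping the extra datasets is not that the system fails to work, but that a rollback of / then drags /home (and logs, caches, etc.) back in time with it.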