Installing Arch Linux on ZFS

This article details the steps required to install Arch Linux onto a ZFS root filesystem.

Installation

See ZFS#Installation for installing the ZFS packages. If installing Arch Linux onto ZFS from the archiso, it would be easier to use the archzfs repository.
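
For example, a minimal repository entry for /etc/pacman.conf in the live environment might look like the following sketch; the server URL is the one published by the archzfs project, so verify it (and the repository's signing key) against the archzfs page before use:

/etc/pacman.conf
[archzfs]
Server = http://archzfs.com/$repo/x86_64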

Embedding archzfs into archiso

See ZFS#Embed the archzfs packages into an archiso.

Partition the destination drive

Review Partitioning for information on determining the partition table type to use for ZFS. ZFS supports GPT and MBR partition tables.

ZFS manages its own partitions, so only a basic partition table scheme is required. The partition that will contain the ZFS filesystem should be of the type bf00, or "Solaris Root".

Drives larger than 2 TB require a GPT partition table. GRUB on BIOS/GPT configurations requires a small (1-2 MiB) BIOS boot partition to embed its boot code image.

Depending upon your machine's firmware and your choice of boot mode, booting may or may not require an EFI partition. On a BIOS machine (or a UEFI machine booting in legacy mode), an EFI partition is not required. Consult Boot loaders for more information.
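
To check which mode you are currently booted in, look for the EFI variables directory; it exists only when the system was booted in UEFI mode:

# ls /sys/firmware/efi/efivars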

Partition scheme

Here is an example of a basic partition scheme that could be employed for your ZFS root install on a BIOS/MBR installation using GRUB:

Part     Size   Type
----     ----   -------------------------
   1     XXXG   Solaris Root (bf00)

Using GRUB on a BIOS machine (or a UEFI machine in legacy boot mode) with a GPT partition table:

Part     Size   Type
----     ----   -------------------------
   1       2M   BIOS boot partition (ef02)
   2     XXXG   Solaris Root (bf00)

Another example, this time using a UEFI-specific bootloader (such as rEFInd) and GPT:

Part     Size   Type
----     ----   -------------------------
   1     100M   EFI boot partition (ef00)
   2     XXXG   Solaris Root (bf00)

ZFS does not support swap files. If you require a swap partition, see ZFS#Swap volume for creating a swap ZVOL.
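
As a sketch of what the linked section describes, a swap ZVOL could be created roughly like this, assuming an 8 GiB volume on the zroot pool created later in this article:

# zfs create -V 8G -b $(getconf PAGESIZE) -o primarycache=metadata -o sync=always zroot/swap
# mkswap /dev/zvol/zroot/swap
# swapon /dev/zvol/zroot/swap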

Tip: Bootloaders with support for ZFS are described in #Install and configure the bootloader.
Warning: Several GRUB bugs (bug #42861, zfsonlinux/grub/issues/5) complicate installing it on ZFS partitions; see #Install and configure the bootloader for a workaround.

Example parted commands

Here are some example commands to partition a drive for the second scenario above, i.e. using BIOS/legacy boot mode with a GPT partition table and a (slightly more than) 1 MB BIOS boot partition for GRUB:

# parted /dev/sdx
(parted) mklabel gpt
(parted) mkpart non-fs 0% 2
(parted) mkpart primary 2 100%
(parted) set 1 bios_grub on
(parted) set 2 boot on
(parted) quit

You can achieve the above in a single command like so:

parted --script /dev/sdx mklabel gpt mkpart non-fs 0% 2 mkpart primary 2 100% set 1 bios_grub on set 2 boot on

If you are creating an EFI partition, then it should have the boot flag set instead of the root partition.
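
For the third scenario above (a UEFI bootloader with a 100M EFI partition), an equivalent one-liner might look like the following sketch; on GPT, parted's boot flag marks the EFI system partition:

# parted --script /dev/sdx mklabel gpt mkpart ESP fat32 1MiB 101MiB set 1 boot on mkpart primary 101MiB 100%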

Format the destination disk

If you have opted for a boot partition or any other non-ZFS system partitions, format them now. Do not do anything to the Solaris partition or to the BIOS boot partition: ZFS will manage the first, and your bootloader the second.
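
For example, assuming the boot or EFI partition is /dev/sdx1:

# mkfs.ext4 /dev/sdx1        (for an ext4 /boot partition)
# mkfs.fat -F 32 /dev/sdx1   (for an EFI system partition)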

Set up the ZFS filesystem

First, make sure the ZFS modules are loaded:

# modprobe zfs

Create the root zpool

# zpool create -f zroot /dev/disk/by-id/id-to-partition-partx
Warning:
  • Always use id names when working with ZFS, otherwise import errors will occur.
  • The zpool command will normally activate all features. See ZFS#GRUB-compatible pool creation when using GRUB.
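
To find the id name of the partition, list the symbolic links and pick the entry pointing at your ZFS partition:

# ls -l /dev/disk/by-id/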

Create your datasets

Instead of using conventional disk partitions, ZFS has the concept of datasets to manage your storage. Unlike disk partitions, datasets have no fixed size and allow for different attributes, such as compression, to be applied per dataset. Normal ZFS datasets are mounted automatically by ZFS whilst legacy datasets are required to be mounted using fstab or with the traditional mount command.

One of the most useful features of ZFS is boot environments. Boot environments allow you to create a bootable snapshot of your system that you can revert to at any time, instantly, by simply rebooting into it. This can make system updates much safer and is also incredibly useful for developing and testing software. To be able to use beadm to manage boot environments, your datasets must be configured properly. The key points are that you split your data directories (such as /home) into datasets distinct from your system datasets, and that you do not place data in the root of the pool, as it cannot be moved afterwards.

You should always create a dataset for at least your root filesystem, and in nearly all cases you will also want /home in a separate dataset. You may decide you want your logs to persist across boot environments. If you are running any software that stores data outside of /home (as is the case for database servers), you should structure your datasets so that the data directories of that software are separated from the root dataset.

With these example commands, we will create a basic boot environment compatible configuration comprising just root and /home datasets, with lz4 compression to save space and improve IO performance:

# zfs create -o mountpoint=none zroot/data
# zfs create -o mountpoint=none zroot/ROOT
# zfs create -o compression=lz4 -o mountpoint=/ zroot/ROOT/default
# zfs create -o compression=lz4 -o mountpoint=/home zroot/data/home
Note: You will need to enable ACL support on the pool that will house /var/log/journal, i.e. zfs set acltype=posixacl .... See Systemd#systemd-tmpfiles-setup.service fails to start at boot for more information.
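
For example, if the journal will live under the root dataset created above, the property can be set on the pool's root dataset so that all child datasets inherit it:

# zfs set acltype=posixacl zroot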

Configure the root filesystem

If you have just created your zpool, it will be mounted in a directory at the root of your tree named after the pool (i.e. /zroot). If the following set commands fail, you may need to unmount any ZFS filesystems first:

# zfs umount -a

Now set the mount points of the datasets:

# zfs set mountpoint=/ zroot/ROOT/default
# zfs set mountpoint=legacy zroot/data/home
Note: /etc/fstab mounts occur before ZFS mounts, so do not use ZFS mountpoints on directories whose subdirectories are configured to be mounted by /etc/fstab.

and put them in /etc/fstab:

/etc/fstab
# <file system>       <dir>    <type>   <options>           <dump>  <pass>
zroot/ROOT/default    /        zfs      defaults,noatime    0       0
zroot/data/home       /home    zfs      defaults,noatime    0       0

All legacy datasets must be listed in /etc/fstab or they will not be mounted at boot.

Set the bootfs property on the descendant root filesystem so the boot loader knows where to find the operating system.

# zpool set bootfs=zroot/ROOT/default zroot

Export the pool:

# zpool export zroot
Warning: Do not skip this, otherwise you will be required to use -f when importing your pools. This unloads the imported pool.
Note: This might fail if you added a swap partition. You need to turn it off with the swapoff command.

Finally, re-import the pool:

# zpool import -d /dev/disk/by-id -R /mnt zroot
Note: -d is not the actual device id, but the /dev/disk/by-id directory containing the symbolic links.

If this command fails and you are asked to import your pool via its numeric ID, run zpool import to find out the ID of your pool then use a command such as: zpool import 9876543212345678910 -R /mnt zroot

If there is an error in this step, you can export the pool to redo the command. The ZFS filesystem is now ready to use.

Be sure to bring the zpool.cache file into your new system. This is required later for the ZFS daemon to start.

# cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache

If you do not have /etc/zfs/zpool.cache, create it:

# zpool set cachefile=/etc/zfs/zpool.cache zroot

Install and configure Arch Linux

Proceed with the following steps using the Installation guide. It will be noted where special consideration must be taken for ZFS on Linux.

  • First mount any legacy or non-ZFS boot or system partitions using the mount command.
  • Install the base system.
  • The procedure described in Installation guide#Fstab is usually overkill for ZFS. ZFS usually auto-mounts its own partitions, so we do not need ZFS partitions in the fstab file, unless the user made legacy datasets of system directories. To generate the fstab for the remaining filesystems, use:
# genfstab -U -p /mnt >> /mnt/etc/fstab
  • Edit the /etc/fstab:
Note:
  • If you chose to create legacy datasets for system directories, keep them in this fstab!
  • Comment out all non-legacy datasets apart from the root dataset, the swap file and the boot/EFI partition. It is a convention to replace the swap's uuid with /dev/zvol/zroot/swap.
  • You need to add the Arch ZFS repository to /etc/pacman.conf, sign its key and install zfs-linux (or zfs-linux-lts if you are running the LTS kernel) within the arch-chroot before you can update the ramdisk with ZFS support (see the sketch after this list).
  • When creating the initial ramdisk, first edit /etc/mkinitcpio.conf and add zfs before filesystems. Also, move the keyboard hook before zfs so you can type in the console if something goes wrong. You may also remove fsck (if you are not using Ext3 or Ext4). Your HOOKS line should look something like this:
HOOKS="base udev autodetect modconf block keyboard zfs filesystems"

When using systemd in the initrd, you need to install mkinitcpio-sd-zfs from the AUR and add the sd-zfs hook after the systemd hook instead of the zfs hook. Keep in mind that this hook uses different kernel parameters than the default zfs hook; more information can be found at the project page (https://github.com/dasJ/sd-zfs).
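
Under that setup the HOOKS line might look like this (a sketch; check the hook ordering against the project's documentation):

HOOKS="base systemd autodetect modconf block keyboard sd-zfs filesystems"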

Note:
  • If you are using a separate dataset for /usr and have followed the instructions below, you must make sure you have the usr hook enabled after zfs, or your system will not boot.

Install and configure the bootloader

Using GRUB with BIOS and EFI motherboards

Install GRUB onto your disk as instructed here: GRUB#BIOS systems or GRUB#UEFI systems. The GRUB manual provides detailed information on manually configuring the software which you can supplement with GRUB and GRUB/Tips and tricks.

error: failed to get canonical path of

grub-mkconfig fails to properly generate entries for systems hosted on ZFS.

# grub-mkconfig -o /boot/grub/grub.cfg
/usr/bin/grub-probe: error: failed to get canonical path of `/dev/bus-Your_Disk_ID-part#'
grub-install: error: failed to get canonical path of `/dev/bus-Your_Disk_ID-part#'

To work around this you must set this environment variable: ZPOOL_VDEV_NAME_PATH=1. For example:

# ZPOOL_VDEV_NAME_PATH=1 grub-mkconfig -o /boot/grub/grub.cfg
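
Since grub-install fails with the same error, the variable applies there too; for example, on a BIOS system (assuming the disk is /dev/sdx):

# ZPOOL_VDEV_NAME_PATH=1 grub-install --target=i386-pc /dev/sdx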


Booting your kernel and initrd from ZFS

You may skip this section if you have your kernel and initrd on a separate /boot partition using something like ext4 or vfat.

Otherwise, GRUB needs to load your kernel and initrd from a ZFS dataset, and the kernel and initrd paths have to be in the following format:

/dataset/@/actual/path  

Example with Arch installed on the root dataset:

/boot/grub/grub.cfg
set timeout=5
set default=0

menuentry "Arch Linux" {
    search -u UUID
    linux /@/boot/vmlinuz-linux zfs=zroot rw
    initrd /@/boot/initramfs-linux.img
}

Example with Arch installed on a nested dataset:

/boot/grub/grub.cfg
set timeout=5
set default=0

menuentry "Arch Linux" {
    search -u UUID
    linux /ROOT/default/@/boot/vmlinuz-linux zfs=zroot/ROOT/default rw 
    initrd /ROOT/default/@/boot/initramfs-linux.img
}

Example with a separate non-ZFS /boot partition and Arch installed on a nested dataset:

/boot/grub/grub.cfg
set timeout=5
set default=0

menuentry "Arch Linux" {
    search -u UUID
    linux /vmlinuz-linux zfs=zroot/ROOT/default rw
    initrd /initramfs-linux.img
}

Using rEFInd with UEFI motherboards

Use EFISTUB and rEFInd for the UEFI boot loader. The kernel parameters in refind_linux.conf for ZFS should include zfs=bootfs or zfs=zroot so the system can boot from ZFS. The root and rootfstype parameters are not needed.
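
A minimal refind_linux.conf sketch, assuming rEFInd autodetects the kernel and the pool's bootfs property is set as described earlier:

refind_linux.conf
"Boot default"  "zfs=bootfs rw"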

Unmount and restart

We are almost done!

# exit
# umount /mnt/boot (if you have a legacy boot partition)
# zfs umount -a
# zpool export zroot

Now reboot.

Warning: If you do not properly export the zpool, the pool will refuse to import in the ramdisk environment and you will be stuck at the busybox terminal.

After the first boot

If everything went fine up to this point, your system will boot. Once. For your system to be able to reboot without issues, you need to enable the zfs.target to auto mount the pools and set the hostid.

For each pool you want automatically mounted execute:

# zpool set cachefile=/etc/zfs/zpool.cache <pool>

Enable the target with systemd:

# systemctl enable zfs.target

When running ZFS on root, the machine's hostid will not be available at the time of mounting the root filesystem. There are two solutions to this. You can either place your spl hostid in the kernel parameters in your boot loader, for example by adding spl.spl_hostid=0x00bab10c (to get your number, use the hostid command).

The other, and suggested, solution is to make sure that there is a hostid in /etc/hostid, and then regenerate the initramfs image which will copy the hostid into the initramfs image. To write the hostid file safely you need to use a small C program:

#include <stdio.h>
#include <errno.h>
#include <unistd.h>

int main() {
    int res;
    res = sethostid(gethostid());
    if (res != 0) {
        switch (errno) {
            case EACCES:
                fprintf(stderr, "Error! No permission to write the"
                                " file used to store the host ID.\n"
                                "Are you root?\n");
                break;
            case EPERM:
                fprintf(stderr, "Error! The calling process's effective"
                                " user or group ID is not the same as"
                                " its corresponding real ID.\n");
                break;
            default:
                fprintf(stderr, "Unknown error.\n");
        }
        return 1;
    }
    return 0;
}

Copy it, save it as writehostid.c and compile it with gcc -o writehostid writehostid.c. Finally, execute it and regenerate the initramfs image:

# ./writehostid
# mkinitcpio -p linux

You can now delete the two files writehostid.c and writehostid. Your system should work and reboot properly now.

Native encryption

Warning: Encryption does not exist in a stable release yet, so do this at your own risk; it might break.

To use native ZFS encryption, you will need a recent enough zfs package, such as zfs-linux-git from the AUR at version 0.7.0.r26 or newer, and embed it into the archiso. Then just follow the normal procedure shown before, with the exception that you add the following parameters when creating the datasets:

# zfs create -o encryption=on -o keyformat=passphrase -o mountpoint=none zroot/ROOT
# zfs create -o encryption=on -o keyformat=passphrase -o mountpoint=none zroot/data

If you want a single passphrase for both your root and home partition, encrypt only one dataset instead:

# zfs create -o encryption=on -o keyformat=passphrase -o mountpoint=none zroot/encr
# zfs create -o mountpoint=none zroot/encr/ROOT
# zfs create -o mountpoint=none zroot/encr/data

When importing the pool, use -l to decrypt all datasets:

# zpool import -d /dev/disk/by-id -R /mnt -l zroot

On reboot, you will be asked for your passphrase.
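
If a pool was imported without -l, the keys can be loaded and the datasets mounted afterwards; a sketch, assuming an encryption-capable build and the single encrypted dataset zroot/encr from above:

# zfs load-key zroot/encr
# zfs mount -a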

See also

  • HOWTO install Ubuntu to a Native ZFS Root Filesystem: https://github.com/dajhorn/pkg-zfs/wiki/HOWTO-install-Ubuntu-to-a-Native-ZFS-Root-Filesystem
  • ZFS cheatsheet: http://lildude.co.uk/zfs-cheatsheet
  • Funtoo ZFS install guide: http://www.funtoo.org/wiki/ZFS_Install_Guide