Installing Arch Linux on ZFS
 
[[Category:Getting and installing Arch]]
[[ja:ZFS に Arch Linux をインストール]]
{{Related articles start}}
{{Related|ZFS}}
{{Related|Experimenting with ZFS}}
{{Related|ZFS on FUSE}}
{{Related articles end}}
This article details the steps required to install Arch Linux onto a ZFS root filesystem.
  
== Installation ==

See [[ZFS#Installation]] for installing the ZFS packages. If installing Arch Linux onto ZFS from the archiso, it would be easier to use the [[Unofficial user repositories#archzfs|archzfs]] repository.

=== Embedding archzfs into archiso ===

See the [[ZFS#Embed_the_archzfs_packages_into_an_archiso|ZFS]] article.

=== Arch ZFS installation scripts ===

Manually installing Arch using ZFS is quite an involved undertaking, but thankfully there are scripts to simplify the process, such as [https://github.com/danboid/ALEZ ALEZ] and [https://bitbucket.org/avi9526/install-raidz/src install-raidz].

== Partition the destination drive ==
  
Review [[Partitioning]] for information on determining the partition table type to use for ZFS. ZFS supports GPT and MBR partition tables.

ZFS manages its own partitions, so only a basic partition table scheme is required. The partition that will contain the ZFS filesystem should be of the type {{ic|bf00}}, or "Solaris Root".

When using GRUB as your bootloader with an MBR partition table, there is no need for a BIOS boot partition. Drives larger than 2TB require a GPT partition table, and you should use {{Pkg|parted}} to create the partitions for GPT. BIOS/GPT and UEFI/GPT configurations require a small (1-2MB) BIOS boot partition to store the bootloader. If you are using a UEFI-only bootloader, you should use GPT.

Depending upon your choice of bootloader, you may or may not require an EFI partition. GRUB installed on a BIOS machine (or a UEFI machine booting in legacy mode), using either MBR or GPT, does not require an EFI partition. Consult [[Boot loaders]] for more information.

=== Partition scheme ===

Here is an example of a basic partition scheme that could be employed for your ZFS root install on a BIOS/MBR installation using GRUB:

{{bc|<nowiki>
Part    Size  Type
----    ----  -------------------------
  1    XXXG  Solaris Root (bf00)</nowiki>
}}
  
Using GRUB on a BIOS machine (or a UEFI machine in legacy boot mode) but with a GPT partition table:

{{bc|<nowiki>
Part    Size  Type
----    ----  -------------------------
  1      2M  BIOS boot partition (ef02)
  2    XXXG  Solaris Root (bf00)</nowiki>
}}
  
Another example, this time using a UEFI-specific bootloader (such as [[rEFInd]]) and GPT:

{{bc|<nowiki>
Part    Size  Type
----    ----  -------------------------
  1      2M  BIOS boot partition (ef02)
  2    100M  EFI boot partition (ef00)
  3    XXXG  Solaris Root (bf00)</nowiki>
}}
  
ZFS does not support swap files. If you require a swap partition, see [[ZFS#Swap volume]] for creating a swap ZVOL.
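
As an illustration of what [[ZFS#Swap volume]] describes, a swap ZVOL can be created and enabled roughly as follows (the {{ic|zroot}} pool name and the 8G size are examples; the block size should match the system page size, which is 4K on x86_64):

 # zfs create -V 8G -b 4K zroot/swap
 # mkswap /dev/zvol/zroot/swap
 # swapon /dev/zvol/zroot/swap

To make it permanent, a corresponding line such as {{ic|/dev/zvol/zroot/swap none swap defaults 0 0}} must later be added to {{ic|fstab}}.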
  
{{Tip|Bootloaders with support for ZFS are described in [[#Install and configure the bootloader]].}}
{{Warning|Several GRUB bugs ([https://savannah.gnu.org/bugs/?42861 bug #42861], [https://github.com/zfsonlinux/grub/issues/5 zfsonlinux/grub/issues/5]) complicate installing GRUB on ZFS partitions; see [[#Install and configure the bootloader]] for a workaround.}}
  
=== Example parted commands ===

Here are some example commands to partition a drive for the second scenario above, i.e. using BIOS/legacy boot mode with a GPT partition table and a (slightly more than) 1MB BIOS boot partition for GRUB:

 # parted /dev/sdx
 (parted) mklabel gpt
 (parted) mkpart non-fs 0% 2
 (parted) mkpart primary 2 100%
 (parted) set 1 bios_grub on
 (parted) set 2 boot on
 (parted) quit
  
You can achieve the above in a single command like so:

 # parted --script /dev/sdx mklabel gpt mkpart non-fs 0% 2 mkpart primary 2 100% set 1 bios_grub on set 2 boot on

If you are creating an EFI partition, then that partition should have the boot flag set instead of the root partition.
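
For the third (UEFI) scenario above, a similar single command might look like the following (a sketch only; the 2M and 102M boundaries mirror the example scheme, and note that the boot flag now goes on the EFI partition):

 # parted --script /dev/sdx mklabel gpt mkpart non-fs 0% 2 mkpart efi fat32 2 102 mkpart primary 102 100% set 1 bios_grub on set 2 boot on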
  
== Format the destination disk ==

If you have opted for a boot partition, as well as any other non-ZFS system partitions, then format them. Do not do anything to the Solaris partition nor to the BIOS boot partition. ZFS will manage the first, and your bootloader the second.

== Setup the ZFS filesystem ==
 
First, make sure the ZFS modules are loaded:

 # modprobe zfs
  
=== Create the root zpool ===

 # zpool create -f zroot /dev/disk/by-id/''id-to-partition-partx''

{{Warning|
* Always use id names when working with ZFS, otherwise import errors will occur.
* The zpool command will normally activate all features. See [[ZFS#GRUB-compatible pool creation]] when using [[GRUB]].}}
  
=== Create your datasets ===

Instead of using conventional disk partitions, ZFS has the concept of datasets to manage your storage. Unlike disk partitions, datasets have no fixed size and allow for different attributes, such as compression, to be applied per dataset. Normal ZFS datasets are mounted automatically by ZFS, whilst legacy datasets are required to be mounted using fstab or with the traditional mount command.

One of the most useful features of ZFS is boot environments. Boot environments allow you to create a bootable snapshot of your system that you can revert to at any time instantly, by simply rebooting into that boot environment. This can make doing system updates much safer, and is also incredibly useful for developing and testing software. In order to be able to use [https://github.com/b333z/beadm beadm] to manage boot environments, your datasets must be configured properly. Key to this is that you split your data directories (such as {{ic|/home}}) into datasets distinct from your system datasets, and that you do not place data in the root of the pool, as this cannot be moved afterwards.

You should always create a dataset for at least your root filesystem, and in nearly all cases you will also want {{ic|/home}} to be in a separate dataset. You may decide you want your logs to persist over boot environments. If you are running any software that stores data outside of {{ic|/home}} (such as database servers), you should structure your datasets so that the data directories of that software are separated out from the root dataset.

With these example commands, we will create a basic boot environment compatible configuration comprising just root and {{ic|/home}} datasets, with lz4 compression to save space and improve IO performance:

 # zfs create -o mountpoint=none zroot/data
 # zfs create -o mountpoint=none zroot/ROOT
 # zfs create -o compression=lz4 -o mountpoint=/ zroot/ROOT/default
 # zfs create -o compression=lz4 -o mountpoint=/home zroot/data/home
  
=== Configure the root filesystem ===

If you have just created your zpool, it will be mounted in a directory at the root of your tree named after the pool (i.e. {{ic|/zroot}}). If the following set commands fail, you may need to unmount any ZFS filesystems first:

 # zfs umount -a

Now set the mount points of the datasets:

 # zfs set mountpoint=/ zroot/ROOT/default
 # zfs set mountpoint=legacy zroot/data/home

{{Note|{{ic|/etc/fstab}} mounts occur before ZFS mounts, so do not use ZFS mountpoints on directories with subfolders configured to be mounted by {{ic|/etc/fstab}}.}}

Put the legacy datasets in {{ic|/etc/fstab}}:

{{hc|/etc/fstab|
# <file system>       <dir>        <type>    <options>              <dump> <pass>
zroot/ROOT/default / zfs defaults,noatime 0 0
zroot/data/home /home zfs defaults,noatime 0 0}}

All legacy datasets must be listed in {{ic|/etc/fstab}}, or they will not be mounted at boot.
 
Set the bootfs property on the descendant root filesystem so the boot loader knows where to find the operating system:

 # zpool set bootfs=zroot/ROOT/default zroot
  
 
Export the pool:

 # zpool export zroot

{{Warning|Do not skip this, otherwise you will be required to use {{ic|-f}} when importing your pools. This unloads the imported pool.}}

{{Note|This might fail if you added a swap partition. You need to turn it off with the ''swapoff'' command.}}

Finally, re-import the pool:

 # zpool import -d /dev/disk/by-id -R /mnt zroot

{{Note|{{ic|-d}} is not the actual device id, but the {{ic|/dev/disk/by-id}} directory containing the symbolic links. If this command fails and you are asked to import your pool via its numeric ID, run {{ic|zpool import}} to find out the ID of your pool, then use a command such as {{ic|zpool import 9876543212345678910 -R /mnt zroot}}.}}

If there is an error in this step, you can export the pool and redo the command. The ZFS filesystem is now ready to use.
  
Be sure to bring the {{ic|zpool.cache}} file into your new system. This is required later for the ZFS daemon to start:

 # cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache

If you do not have {{ic|/etc/zfs/zpool.cache}}, create it:

 # zpool set cachefile=/etc/zfs/zpool.cache zroot
  
== Install and configure Arch Linux ==

Follow the steps in the [[Installation guide]]. It is noted below where special consideration must be taken for ZFSonLinux.

* First mount any legacy or non-ZFS boot or system partitions using the mount command.

* Install the base system.

* The procedure described in [[Installation guide#Fstab]] is usually overkill for ZFS. ZFS usually auto mounts its own partitions, so we do not need ZFS partitions in the {{ic|fstab}} file, unless the user made legacy datasets of system directories. To generate the {{ic|fstab}} for filesystems, use:

 # genfstab -U -p /mnt >> /mnt/etc/fstab

* Edit the {{ic|/etc/fstab}}:

{{Note|
* If you chose to create legacy datasets for system directories, keep them in this {{ic|fstab}}!
* Comment out all non-legacy datasets, apart from the root dataset, the swap file and the boot/EFI partition. It is a convention to replace the swap's uuid with {{ic|/dev/zvol/zroot/swap}}.
}}
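
As a hypothetical example, assuming the datasets created earlier plus a swap ZVOL named {{ic|zroot/swap}}, the edited {{ic|fstab}} could end up looking like this (the root dataset and the legacy {{ic|/home}} dataset are kept, and the swap entry uses the ZVOL path instead of a uuid):

{{hc|/etc/fstab|
zroot/ROOT/default / zfs defaults,noatime 0 0
zroot/data/home /home zfs defaults,noatime 0 0
/dev/zvol/zroot/swap none swap defaults 0 0}}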
  
* You need to add the [[Unofficial_user_repositories#archzfs|Arch ZFS]] repository to {{ic|/etc/pacman.conf}}, sign its key and [[install]] '''zfs-linux''' (or '''zfs-linux-lts''' if you are running the LTS kernel) within the arch-chroot before you can update the ramdisk with ZFS support.

* When creating the initial ramdisk, first edit {{ic|/etc/mkinitcpio.conf}} and add {{ic|zfs}} before {{ic|filesystems}}. Also, move the {{ic|keyboard}} hook before {{ic|zfs}} so you can type in the console if something goes wrong. You may also remove {{ic|fsck}} (if you are not using Ext3 or Ext4). Your {{ic|HOOKS}} line should look something like this:

 HOOKS="base udev autodetect modconf block keyboard zfs filesystems"
  
When using systemd in the initrd, you need to install {{AUR|mkinitcpio-sd-zfs}} and add the {{ic|sd-zfs}} hook after the {{ic|systemd}} hook instead of the {{ic|zfs}} hook. Keep in mind that this hook uses different kernel parameters than the default {{ic|zfs}} hook; more information can be found at the [https://github.com/dasJ/sd-zfs project page].

{{Note|If you are using a separate dataset for {{ic|/usr}} and have followed the instructions below, you must make sure you have the {{ic|usr}} hook enabled after {{ic|zfs}}, or your system will not boot.}}

* [[Regenerate the initramfs]].
  
== Install and configure the bootloader ==

=== For BIOS motherboards ===

Follow [[GRUB#BIOS systems]] to install GRUB onto your disk. {{ic|grub-mkconfig}} does not properly detect the ZFS filesystem, so it is necessary to edit {{ic|grub.cfg}} manually:
 
{{hc|/boot/grub/grub.cfg|<nowiki>
# (0) Arch Linux
menuentry "Arch Linux" {
    linux /vmlinuz-linux zfs=zroot rw
    initrd /initramfs-linux.img
}
</nowiki>}}

If you did not create a separate /boot partition, kernel and initrd paths have to be in the following format:

 /dataset/@/actual/path

Example with Arch installed on the main dataset (not recommended, as this will not allow for boot environments):

    linux /@/boot/vmlinuz-linux zfs=zroot rw
    initrd /@/boot/initramfs-linux.img

Example with Arch installed on a separate dataset zroot/ROOT/default:

    linux /ROOT/default/@/boot/vmlinuz-linux zfs=zroot/ROOT/default rw
    initrd /ROOT/default/@/boot/initramfs-linux.img

When you come to installing GRUB, you are likely to get an error like:

 Failed to get canonical path of /dev/ata-yourdriveid-partx

Until this gets fixed, the easiest workaround is to create a symbolic link from the regular Linux device name of the partition to the device name GRUB is looking for:

 # ln -s /dev/sdax /dev/ata-yourdriveid-partx

=== For UEFI motherboards ===

Use {{ic|EFISTUB}} and {{ic|rEFInd}} for the UEFI boot loader. The kernel parameters in {{ic|refind_linux.conf}} for ZFS should include {{ic|1=zfs=bootfs}} or {{ic|1=zfs=zroot}} so the system can boot from ZFS. The {{ic|root}} and {{ic|rootfstype}} parameters are not needed.
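
For instance, a minimal {{ic|refind_linux.conf}} entry for the {{ic|zroot}} pool could look like this (an illustrative sketch; the label text is arbitrary):

 "Boot ZFS default" "zfs=zroot rw"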

== Unmount and restart ==

We are almost done!

 # exit
 # umount /mnt/boot  (if you have a legacy boot partition)
 # zfs umount -a
 # zpool export zroot

Now reboot.

{{Warning|If you do not properly export the zpool, the pool will refuse to import in the ramdisk environment and you will be stuck at the busybox terminal.}}
  
== After the first boot ==

If everything went fine up to this point, your system will boot. Once. For your system to be able to reboot without issues, you need to enable the {{ic|zfs.target}} to auto mount the pools and set the hostid.

For each pool you want automatically mounted, execute:

 # zpool set cachefile=/etc/zfs/zpool.cache <pool>

Enable the target with [[systemd]]:

 # systemctl enable zfs.target
  
When running ZFS on root, the machine's hostid will not be available at the time of mounting the root filesystem. There are two solutions to this. You can either place your spl hostid in the [[kernel parameters]] in your boot loader, for example by adding {{ic|<nowiki>spl.spl_hostid=0x00bab10c</nowiki>}} (to get your number, use the {{ic|hostid}} command).
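
To print the exact parameter value for your machine, you can combine this with the {{ic|hostid}} output (an illustrative one-liner; {{ic|hostid}} prints eight hexadecimal digits):

 $ printf 'spl.spl_hostid=0x%s\n' "$(hostid)"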
  
The other, and suggested, solution is to make sure that there is a hostid in {{ic|/etc/hostid}}, and then regenerate the initramfs image, which will copy the hostid into it. To write the hostid file safely, you need to use a small C program:
  

 #include <stdio.h>
 #include <errno.h>
 #include <unistd.h>
 
 int main() {
     int res;
     res = sethostid(gethostid());
     if (res != 0) {
         switch (errno) {
             case EACCES:
                 fprintf(stderr, "Error! No permission to write the"
                                 " file used to store the host ID.\n"
                                 "Are you root?\n");
                 break;
             case EPERM:
                 fprintf(stderr, "Error! The calling process's effective"
                                 " user or group ID is not the same as"
                                 " its corresponding real ID.\n");
                 break;
             default:
                 fprintf(stderr, "Unknown error.\n");
         }
         return 1;
     }
     return 0;
 }
  
Copy it, save it as {{ic|writehostid.c}} and compile it with {{ic|gcc -o writehostid writehostid.c}}; finally, execute it and regenerate the initramfs image:

 # ./writehostid
 # mkinitcpio -p linux

You can now delete the two files {{ic|writehostid.c}} and {{ic|writehostid}}. Your system should work and reboot properly now.
  
== See also ==

* [https://github.com/dajhorn/pkg-zfs/wiki/HOWTO-install-Ubuntu-to-a-Native-ZFS-Root-Filesystem HOWTO install Ubuntu to a Native ZFS Root]
* [http://lildude.co.uk/zfs-cheatsheet ZFS cheatsheet]
* [http://www.funtoo.org/wiki/ZFS_Install_Guide Funtoo ZFS install guide]

Latest revision as of 10:10, 8 November 2016

This article details the steps required to install Arch Linux onto a ZFS root filesystem.

Installation

See ZFS#Installation for installing the ZFS packages. If installing Arch Linux onto ZFS from the archiso, it would be easier to use the archzfs repository.

Embedding archzfs into archiso

See ZFS article.

Arch ZFS installation scripts

Manually installing Arch using ZFS is quite an involved undertaking but thankfully there are scripts to simplify the process such as ALEZ and install-raidz.

Partition the destination drive

Review Partitioning for information on determining the partition table type to use for ZFS. ZFS supports GPT and MBR partition tables.

ZFS manages its own partitions, so only a basic partition table scheme is required. The partition that will contain the ZFS filesystem should be of the type bf00, or "Solaris Root".

When using GRUB as your bootloader with an MBR partition table there is no need for a BIOS boot partition. Drives larger than 2TB require a GPT partition table and you should use parted to create the partitions for GPT. BIOS/GPT and UEFI/GPT configurations require a small (1/2MB) BIOS boot partition to store the bootloader. If you are using a UEFI-only bootloader you should use GPT.

Depending upon your choice of bootloader you may or may not require an EFI partition. GRUB, when installed on a BIOS machine (or a UEFI machine booting in legacy mode) using either MBR or GPT doesn't require an EFI partition. Consult Boot loaders for more info.

Partition scheme

Here is an example of a basic partition scheme that could be employed for your ZFS root install on a BIOS/MBR installation using GRUB:

Part     Size   Type
----     ----   -------------------------
   1     XXXG   Solaris Root (bf00)

Using GRUB on a BIOS (or UEFI machine in legacy boot mode) machine but using a GPT partition table:

Part     Size   Type
----     ----   -------------------------
   1       2M   BIOS boot partition (ef02)
   2     XXXG   Solaris Root (bf00)

Another example, this time using a UEFI-specific bootloader (such as rEFInd) and GPT:

Part     Size   Type
----     ----   -------------------------
   1       2M   BIOS boot partition (ef02)
   2     100M   EFI boot partition (ef00)
   3     XXXG   Solaris Root (bf00)

ZFS does not support swap files. If you require a swap partition, see ZFS#Swap volume for creating a swap ZVOL.

Tip: Bootloaders with support for ZFS are described in #Install and configure the bootloader.
Warning: Several GRUB bugs (bug #42861, zfsonlinux/grub/issues/5) complicate installing it on ZFS partitions, see #Install and configure the bootloader for a workaround

Example parted commands

Here are some example commands to partition a drive for the second scenario above ie using BIOS/legacy boot mode with a GPT partition table and a (slighty more than) 1MB BIOS boot partition for GRUB:

# parted /dev/sdx
(parted)mklabel gpt
(parted)mkpart non-fs 0% 2
(parted)mkpart primary 2 100%
(parted)set 1 bios_grub on
(parted)set 2 boot on
(parted)quit

You can achieve the above in a single command like so:

parted --script /dev/sdx mklabel gpt mkpart non-fs 0% 2 mkpart primary 2 100% set 1 bios_grub on set 2 boot on

If you are creating an EFI partition then that should have the boot flag set instead of the root partition.

Format the destination disk

If you opted for a boot partition or any other non-ZFS system partitions, format them now. Do not touch the Solaris partition or the BIOS boot partition: ZFS will manage the first, and your boot loader the second.

Setup the ZFS filesystem

First, make sure the ZFS modules are loaded,

# modprobe zfs

Create the root zpool

# zpool create -f zroot /dev/disk/by-id/id-to-partition-partx
Warning:
  • Always use id names when working with ZFS, otherwise import errors will occur.
  • The zpool command will normally activate all features. See ZFS#GRUB-compatible pool creation when using GRUB.
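
To find the persistent id name to use, list the by-id symbolic links and pick the entry that points at your target partition. The names below are illustrative placeholders, not real device ids:

# ls -l /dev/disk/by-id/
lrwxrwxrwx 1 root root 10 Jan  1 00:00 ata-yourdriveid-part2 -> ../../sdx2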

Create your datasets

Instead of using conventional disk partitions, ZFS has the concept of datasets to manage your storage. Unlike disk partitions, datasets have no fixed size and allow for different attributes, such as compression, to be applied per dataset. Normal ZFS datasets are mounted automatically by ZFS whilst legacy datasets are required to be mounted using fstab or with the traditional mount command.

One of the most useful features of ZFS is boot environments. A boot environment is a bootable snapshot of your system that you can revert to at any time simply by rebooting into it. This makes system updates much safer and is also extremely useful for developing and testing software. To be able to manage boot environments with beadm, your datasets must be configured properly. The key points are to split your data directories (such as /home) into datasets distinct from your system datasets, and not to place data in the root of the pool, as it cannot be moved afterwards.

You should always create a dataset for at least your root filesystem, and in nearly all cases you will also want /home in a separate dataset. You may decide you want your logs to persist across boot environments. If you run any software that stores data outside of /home (such as database servers), structure your datasets so that the data directories of that software are separated from the root dataset.

With these example commands, we will create a basic boot environment-compatible configuration comprising just root and /home datasets, with lz4 compression to save space and improve IO performance:

# zfs create -o mountpoint=none zroot/data
# zfs create -o mountpoint=none zroot/ROOT
# zfs create -o compression=lz4 -o mountpoint=/ zroot/ROOT/default
# zfs create -o compression=lz4 -o mountpoint=/home zroot/data/home
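
Before proceeding, you can verify that the datasets and their properties look as intended. With the example commands above, the output should resemble the following sketch (exact columns and defaults depend on your zfs version):

# zfs list -o name,mountpoint,compression -r zroot
NAME                 MOUNTPOINT  COMPRESSION
zroot                /zroot      off
zroot/ROOT           none        off
zroot/ROOT/default   /           lz4
zroot/data           none        off
zroot/data/home      /home       lz4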

Configure the root filesystem

If you have just created your zpool, it will be mounted in a directory at the root of your tree named after the pool (i.e. /zroot). If the following set commands fail, you may need to unmount any ZFS filesystems first:

# zfs umount -a

Now set the mount points of the datasets:

# zfs set mountpoint=/ zroot/ROOT/default
# zfs set mountpoint=legacy zroot/data/home
Note: /etc/fstab mounts occur before zfs mounts, so don't use zfs mountpoints on directories with subfolders configured to be mounted by /etc/fstab.

and put them in /etc/fstab

/etc/fstab
# <file system>         <dir>    <type>   <options>           <dump> <pass>
zroot/ROOT/default      /        zfs      defaults,noatime    0      0
zroot/data/home         /home    zfs      defaults,noatime    0      0

All legacy datasets must be listed in /etc/fstab or they will not be mounted at boot.

Set the bootfs property on the descendant root filesystem so the boot loader knows where to find the operating system.

# zpool set bootfs=zroot/ROOT/default zroot

Export the pool,

# zpool export zroot
Warning: Do not skip this, otherwise you will be required to use -f when importing your pools. This unloads the imported pool.
Note: This might fail if you added a swap partition. You need to turn it off with the swapoff command.

Finally, re-import the pool,

# zpool import -d /dev/disk/by-id -R /mnt zroot
Note: -d is not the actual device id, but the /dev/by-id directory containing the symbolic links.

If this command fails and you are asked to import your pool via its numeric ID, run zpool import to find out the ID of your pool, then use a command such as zpool import 9876543212345678910 -R /mnt zroot.

If there is an error in this step, you can export the pool to redo the command. The ZFS filesystem is now ready to use.

Be sure to bring the zpool.cache file into your new system. This is required later for the ZFS daemon to start.

# cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache

If you do not have /etc/zfs/zpool.cache, create it:

# zpool set cachefile=/etc/zfs/zpool.cache zroot

Install and configure Arch Linux

Follow the Installation guide. Steps that require special consideration for ZFS on Linux are noted below.

  • First mount any legacy or non-ZFS boot or system partitions using the mount command.
  • Install the base system.
  • The procedure described in Installation guide#Fstab is usually overkill for ZFS. ZFS normally mounts its own datasets automatically, so they do not need to appear in the fstab file unless you created legacy datasets for system directories. To generate the fstab for the remaining filesystems, use:
# genfstab -U -p /mnt >> /mnt/etc/fstab
  • Edit the /etc/fstab:
Note:
  • If you chose to create legacy datasets for system directories, keep them in this fstab!
  • Comment out all non-legacy datasets apart from the root dataset, the swap file and the boot/EFI partition. It is a convention to replace the swap's UUID with /dev/zvol/zroot/swap.
  • You need to add the Arch ZFS repository to /etc/pacman.conf, sign its key and install zfs-linux (or zfs-linux-lts if you are running the LTS kernel) within the arch-chroot before you can update the ramdisk with ZFS support.
  • When creating the initial ramdisk, first edit /etc/mkinitcpio.conf and add zfs before filesystems. Also, move keyboard hook before zfs so you can type in console if something goes wrong. You may also remove fsck (if you are not using Ext3 or Ext4). Your HOOKS line should look something like this:
HOOKS="base udev autodetect modconf block keyboard zfs filesystems"

When using systemd in the initrd, you need to install mkinitcpio-sd-zfsAUR and add the sd-zfs hook after the systemd hook instead of the zfs hook. Keep in mind that this hook uses different kernel parameters than the default zfs hook; more information can be found on the project page.

Note:
  • If you are using a separate dataset for /usr and have followed the instructions below, you must make sure you have the usr hook enabled after zfs, or your system will not boot.

Install and configure the bootloader

For BIOS motherboards

Follow GRUB#BIOS systems to install GRUB onto your disk. grub-mkconfig does not properly detect the ZFS filesystem, so it is necessary to edit grub.cfg manually:

/boot/grub/grub.cfg
set timeout=2
set default=0

# (0) Arch Linux
menuentry "Arch Linux" {
    linux /vmlinuz-linux zfs=zroot rw
    initrd /initramfs-linux.img
}

If you did not create a separate /boot partition, the kernel and initrd paths must be in the following format:

 /dataset/@/actual/path  

Example with Arch installed on the main dataset (not recommended - this will not allow for boot environments):

   linux /@/boot/vmlinuz-linux zfs=zroot rw
   initrd /@/boot/initramfs-linux.img

Example with Arch installed on a separate dataset zroot/ROOT/default:

   linux /ROOT/default/@/boot/vmlinuz-linux zfs=zroot/ROOT/default rw 
   initrd /ROOT/default/@/boot/initramfs-linux.img

When installing GRUB, you are likely to encounter an error like:

Failed to get canonical path of /dev/ata-yourdriveid-partx

Until this gets fixed, the easiest workaround is to create a symbolic link from the regular Linux device name of the partition to the device name GRUB is looking for:

# ln -s /dev/sdax /dev/ata-yourdriveid-partx
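
Depending on your GRUB and ZFS versions, an alternative workaround that avoids the symbolic link is to set ZPOOL_VDEV_NAME_PATH=1 in the environment, which makes the ZFS utilities report full device paths to the GRUB tools. Whether this helps depends on the installed zfsonlinux version; the commands below are a sketch with /dev/sdx as a placeholder:

# export ZPOOL_VDEV_NAME_PATH=1
# grub-install --target=i386-pc /dev/sdx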

For UEFI motherboards

Use EFISTUB and rEFInd for the UEFI boot loader. The kernel parameters in refind_linux.conf for ZFS should include zfs=bootfs or zfs=zroot so the system can boot from ZFS. The root and rootfstype parameters are not needed.
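
A minimal refind_linux.conf, placed in the same directory as the kernel, could look like the following sketch. The dataset name assumes the zroot/ROOT/default layout used above:

refind_linux.conf
"Boot default"      "zfs=zroot/ROOT/default rw"
"Boot single-user"  "zfs=zroot/ROOT/default rw single"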

Unmount and restart

We are almost done!

# exit
# umount /mnt/boot (if you have a legacy boot partition)
# zfs umount -a
# zpool export zroot

Now reboot.

Warning: If you do not properly export the zpool, the pool will refuse to import in the ramdisk environment and you will be stuck at the busybox terminal.

After the first boot

If everything went fine up to this point, your system will boot. Once. For your system to be able to reboot without issues, you need to enable the zfs.target to auto mount the pools and set the hostid.

For each pool you want automatically mounted execute:

# zpool set cachefile=/etc/zfs/zpool.cache <pool>

Enable the target with systemd:

# systemctl enable zfs.target

When running ZFS on root, the machine's hostid is not available at the time the root filesystem is mounted. There are two solutions to this. The first is to place your spl hostid in the kernel parameters in your boot loader: for example, adding spl.spl_hostid=0x00bab10c (use the hostid command to get your number).

The other, and suggested, solution is to make sure that there is a hostid in /etc/hostid, and then regenerate the initramfs image which will copy the hostid into the initramfs image. To write the hostid file safely you need to use a small C program:

/* Persist the current hostid to /etc/hostid via sethostid(). */
#include <stdio.h>
#include <errno.h>
#include <unistd.h>

int main(void) {
    if (sethostid(gethostid()) != 0) {
        switch (errno) {
            case EACCES:
                fprintf(stderr, "Error! No permission to write the"
                                " file used to store the host ID.\n"
                                "Are you root?\n");
                break;
            case EPERM:
                fprintf(stderr, "Error! The calling process's effective"
                                " user or group ID is not the same as"
                                " its corresponding real ID.\n");
                break;
            default:
                fprintf(stderr, "Unknown error.\n");
        }
        return 1;
    }
    return 0;
}

Save it as writehostid.c and compile it with gcc -o writehostid writehostid.c, then execute it and regenerate the initramfs image:

# ./writehostid
# mkinitcpio -p linux

You can now delete the two files writehostid.c and writehostid. Your system should work and reboot properly now.
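
As an aside on what the program does: glibc's sethostid() simply writes the 32-bit hostid to /etc/hostid in native byte order. The sketch below produces an equivalent 4-byte file from the shell for the example id 0x00bab10c on a little-endian machine; it writes to ./hostid so it can be tried safely (use /etc/hostid as root on the real system, then regenerate the initramfs as above). Newer versions of the ZFS utilities also ship a zgenhostid tool for this purpose.

hostid_hex=00bab10c   # output of the hostid command
out=./hostid          # use /etc/hostid on the real system
# emit the four bytes in reverse (little-endian) order
for i in 6 4 2 0; do
    printf "\\x${hostid_hex:$i:2}"
done > "$out"
od -A n -t x1 "$out"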

See also