ZFS

ZFS is an advanced filesystem created by Sun Microsystems (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), Copy-on-write, snapshots, data integrity verification and automatic repair (scrubbing), RAID-Z, a maximum 16 Exabyte file size, and a maximum 256 Zettabyte volume size. ZFS is licensed under the Common Development and Distribution License (CDDL).

Described as "The last word in filesystems" ZFS is stable, fast, secure, and future-proof. Being licensed under the GPL incompatible CDDL, it is not possible for ZFS to be distributed along with the Linux Kernel. This requirement, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with zfsonlinux.org (ZOL).

ZOL is a project funded by the Lawrence Livermore National Laboratory to develop a native Linux kernel module for its massive storage requirements and supercomputers.

Installation

Install zfs-gitAUR from the Arch User Repository or the demz-repo-core repository. This package depends on zfs-utils-gitAUR and spl-gitAUR, which in turn depends on spl-utils-gitAUR. SPL (Solaris Porting Layer) is a Linux kernel module implementing Solaris APIs for ZFS compatibility.

Note: The zfs-git package replaces the original zfs package from AUR. ZFSonLinux.org is slow to make stable releases and kernel API changes broke stable builds of ZFSonLinux for Arch. Changes submitted to the master branch of the ZFSonLinux repository are regression tested and therefore considered stable.

For users that desire ZFS builds from stable releases, zfs-ltsAUR is available from the Arch User Repository or the demz-repo-core repository.
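If the demz-repo-core repository is used, it must also be enabled in /etc/pacman.conf. The following is only a sketch, modeled on the demz-repo-archiso entry shown later in this article; the SigLevel value is an assumption and should be adjusted to match the repository's actual signing policy:

/etc/pacman.conf
[demz-repo-core]
SigLevel = Optional TrustAll
Server = http://demizerone.com/$repo/$arch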

Warning: The ZFS and SPL kernel modules are tied to a specific kernel version. It is not possible to apply kernel updates until updated packages are uploaded to the AUR or the demz-repo-core repository.

Test the installation by issuing zpool status on the command line. If an "insmod" error is produced, try depmod -a.
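For example, on a fresh installation with no pools created yet, the expected result looks like this:

 # modprobe zfs
 # zpool status
 no pools available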

Archiso

For installing Arch Linux into a ZFS root filesystem, install zfs-gitAUR from the Arch User Repository or the demz-repo-archiso repository.

See Installing Arch Linux on ZFS for more information.

Automated build script

The build order of the above is important due to nested dependencies. One can automate the entire process, including downloading the packages with the following shell script. The only requirements for it to work are:

  • sudo - Note that your user needs sudo rights to /usr/bin/clean-chroot-manager for the script below to work.
  • rsync - Needed for moving over the build files.
  • cowerAUR - Needed to grab sources from the AUR.
  • clean-chroot-managerAUR - Needed to build in a clean chroot and add packages to a local repo.

Be sure to add the local repo to /etc/pacman.conf like so:

$ tail /etc/pacman.conf
[chroot_local]
SigLevel = Optional TrustAll
Server = file:///path/to/localrepo/defined/below
~/bin/build_zfs
#!/bin/bash
#
# ZFS Builder by graysky
#

# define the temp space for building here
WORK='/scratch'

# create this dir and chown it to your user
# this is the local repo which will store your zfs packages
REPO='/var/repo'

# Add the following entry to /etc/pacman.conf for the local repo
#[chroot_local]
#SigLevel = Optional TrustAll
#Server = file:///path/to/localrepo/defined/above

for i in rsync cower clean-chroot-manager; do
  command -v $i >/dev/null 2>&1 || {
  echo "I require $i but it's not installed. Aborting." >&2
  exit 1; }
done

[[ -f ~/.config/clean-chroot-manager.conf ]] &&
  . ~/.config/clean-chroot-manager.conf || exit 1

[[ ! -d "$REPO" ]] &&
  echo "Make the dir for your local repo and chown it: $REPO" && exit 1

[[ ! -d "$WORK" ]] &&
  echo "Make a work directory: $WORK" && exit 1

cd "$WORK"
for i in spl-utils-git spl-git zfs-utils-git zfs-git; do
  [[ -d $i ]] && rm -rf $i
  cower -d $i
done

for i in spl-utils-git spl-git zfs-utils-git zfs-git; do
  cd "$WORK/$i"
  sudo ccm s
done

rsync -auvxP "$CHROOTPATH/root/repo/" "$REPO"

Playing with ZFS

Users wishing to experiment with ZFS on virtual block devices (known in ZFS terms as VDEVs), which can be simple files like ~/zfs0.img, ~/zfs1.img, ~/zfs2.img and so on, with no possibility of real data loss, are encouraged to see the Playing_with_ZFS article. Common tasks like building a RAIDZ array, purposefully corrupting data and recovering it, and snapshotting datasets are covered there.
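As a minimal sketch to get started (file names and sizes are arbitrary; file-based VDEVs must be passed to zpool as absolute paths, hence $HOME):

 $ truncate -s 2G ~/zfs0.img ~/zfs1.img ~/zfs2.img
 # zpool create test raidz $HOME/zfs0.img $HOME/zfs1.img $HOME/zfs2.img
 # zpool destroy test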

Configuration

ZFS is considered a "zero administration" filesystem by its creators; therefore, configuring ZFS is very straightforward. Configuration is done primarily with two commands: zfs and zpool.

Note: The following section is ONLY needed if users wish to install their root filesystem to a ZFS volume. Users wishing to have a data partition with ZFS do NOT need to read the next section.

Automatic Start

For ZFS to live up to its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit to this is that it is not necessary to mount the zpool in /etc/fstab; the zfs daemon can import and mount zfs pools automatically. The daemon mounts the zfs pools by reading the file /etc/zfs/zpool.cache.

For each pool you want automatically mounted by the zfs daemon execute:

# zpool set cachefile=/etc/zfs/zpool.cache <pool>
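The setting can be verified afterwards with:

 # zpool get cachefile <pool>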

Systemd

Enable the service so it is automatically started at boot time:

 # systemctl enable zfs.target

To manually start the daemon:

 # systemctl start zfs.target

Create a storage pool

Use # parted --list to see a list of all available drives. It is neither necessary nor recommended to partition the drives before creating the zfs filesystem.

Note: If some or all devices have been used in a software RAID set, it is paramount to erase any old RAID configuration information. (https://wiki.archlinux.org/index.php/Mdadm#Prepare_the_Devices)

Having identified the list of drives, it is now time to get the ids of the drives to add to the zpool. The ZFS on Linux developers recommend using device ids when creating ZFS storage pools of fewer than 10 devices. To find the ids, simply:

 $ ls -lah /dev/disk/by-id/

The ids should look similar to the following:

 lrwxrwxrwx 1 root root  9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc
 lrwxrwxrwx 1 root root  9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde
 lrwxrwxrwx 1 root root  9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd
 lrwxrwxrwx 1 root root  9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb

Now, finally, create the ZFS pool:

 # zpool create -f -m <mount> <pool> raidz <ids>
  • create: subcommand to create the pool.
  • -m: The mount point of the pool. If this is not specified, then the pool will be mounted to /<pool>.
  • pool: This is the name of the pool.
  • raidz: This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of raid5. See Jeff Bonwick's Blog -- RAID-Z for more information about raidz.
  • ids: The names of the drives or partitions to include in the pool. Get them from /dev/disk/by-id.

Here is an example for the full command:

 # zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1

In case Advanced Format disks are used, which have a native sector size of 4096 bytes instead of 512 bytes, the automated sector size detection algorithm of ZFS might detect 512 bytes because of the backwards compatibility with legacy systems. This would result in degraded performance. To make sure a correct sector size is used, the ashift=12 option should be used (see the ZFS on Linux FAQ). The full command would in this case be:

 # zpool create -f -o ashift=12 -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1

If the command is successful, there will be no output. Using the $ mount command will show that the pool is mounted. Using # zpool status will show that the pool has been created.

# zpool status
  pool: bigdata
 state: ONLINE
 scan: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        bigdata                                    ONLINE       0     0     0
          -0                                       ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0KDGY-part1  ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0JKRR-part1  ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0KBP8-part1  ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0JTM1-part1  ONLINE       0     0     0

errors: No known data errors

At this point it would be good to reboot the machine to ensure that the ZFS pool is mounted at boot. It is best to deal with all errors before transferring data.

Tuning

General

Many parameters are available for zfs file systems; you can view a full list with zfs get all <pool>. Two common ones to adjust are atime and compression.

Atime is enabled by default, but for most users it represents superfluous writes to the zpool, and it can be disabled using the zfs command:

# zfs set atime=off <pool>

As an alternative to turning off atime completely, relatime is available for ZFSonLinux HEAD snapshots (which are normally installed on non-LTS kernels) and will be released as a new feature of 0.6.3. This brings the default ext4/xfs atime semantics to ZFS, where the access time is only updated if the modified time or changed time changes, or if the existing access time has not been updated within the past 24 hours. It is a compromise between atime=off and atime=on. This property only takes effect if atime is on:

# zfs set relatime=on <pool>

Compression is just that, transparent compression of data. Consult the man page for various options. A recent advancement is the lz4 algorithm which offers excellent compression and performance. Enable it (or any other) using the zfs command:

# zfs set compression=lz4 <pool>
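The effect of compression on data written after enabling it can be checked via the compressratio property:

 # zfs get compressratio <pool>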

Other options for zfs can be displayed again, using the zfs command:

# zfs get all <pool>

Database

ZFS, unlike most other file systems, has a variable record size, or what is commonly referred to as a block size. By default, the recordsize on ZFS is 128KiB, which means it will dynamically allocate blocks of any size from 512B to 128KiB depending on the size of the file being written. This can often help fragmentation and file access, at the cost that ZFS has to allocate a new 128KiB block each time only a few bytes are written.

Most RDBMSes work in 8KiB-sized blocks by default. Although the block size is tunable for MySQL/MariaDB, PostgreSQL, and Oracle, all three of them use an 8KiB block size by default. For both performance concerns and keeping snapshot differences to a minimum (for backup purposes, this is helpful), it is usually desirable to tune ZFS instead to accommodate the databases, using a command such as:

# zfs set recordsize=8K <pool>/postgres

These RDBMSes also tend to implement their own caching algorithm, often similar to ZFS's own ARC. In the interest of saving memory, it is best to simply disable ZFS's caching of the database's file data and let the database do its own job:

# zfs set primarycache=metadata <pool>/postgres

If your pool has no configured log devices, ZFS reserves space on the pool's data disks for its intent log (the ZIL). ZFS uses this for crash recovery, but databases often sync their data files to the file system on their own transaction commits anyway. The end result is that ZFS commits data twice to the data disks, and it can severely impact performance. You can tell ZFS to not use the ZIL, in which case data is only committed to the file system once. Disabling the ZIL for non-database file systems or for pools with configured log devices (e.g. with SSDs) can actually negatively impact the performance, so beware:

# zfs set logbias=throughput <pool>/postgres

These can also be done at file system creation time, for example:

# zfs create -o recordsize=8K \
             -o primarycache=metadata \
             -o mountpoint=/var/lib/postgres \
             -o logbias=throughput \
              <pool>/postgres

Please note: these kinds of tuning parameters are ideal for specialized applications like RDBMSes. You can easily hurt ZFS's performance by setting these on a general-purpose file system such as your /home directory.

/tmp

If you plan on using ZFS to store your /tmp directory (which may be useful for storing arbitrarily-large sets of files, or simply keeping your RAM free of idle data), you can generally improve the performance of certain applications writing to /tmp by disabling file system sync. This causes ZFS to ignore an application's sync requests (e.g. with fsync or O_SYNC) and return immediately. While this has severe data consistency consequences (never disable sync for a database!), files in /tmp are less likely to be important and affected:

# zfs set sync=disabled <pool>/tmp

Additionally, for security purposes, you may want to disable setuid and devices on the /tmp file system, which prevents any privilege-escalation attacks or the use of device nodes:

# zfs set setuid=off <pool>/tmp
# zfs set devices=off <pool>/tmp

Combining all of these for a create command would be as follows:

# zfs create -o setuid=off -o devices=off -o sync=disabled -o mountpoint=/tmp <pool>/tmp

Please note, also, that if you want /tmp on ZFS, you will need to mask (disable) systemd's automatic tmpfs-backed /tmp, else ZFS will be unable to mount your dataset at boot-time or import-time:

# systemctl mask tmp.mount

zvols

zvols might suffer from the same block size-related issues as RDBMSes, but it is worth noting that the default block size for zvols (the volblocksize property) is already 8KiB. If possible, it is best to align any partitions contained in a zvol to this block size (current versions of fdisk and gdisk by default automatically align at 1MiB segments, which works), and to align file system block sizes to the same size. Other than this, you might tweak the volblocksize to accommodate the data inside the zvol as necessary (though 8KiB tends to be a good value for most file systems, even when using 4KiB blocks on that level).
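As a brief sketch (the volume name and size are placeholders; volblocksize can only be set at creation time), a zvol with an explicit block size can be created and then used like any other block device:

 # zfs create -V 10G -o volblocksize=8K <pool>/vol
 # mkfs.ext4 /dev/zvol/<pool>/vol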

Usage

Users can optionally create a dataset under the zpool as opposed to manually creating directories under the zpool. Datasets allow for an increased level of control (quotas for example) in addition to snapshots. To be able to create and mount a dataset, a directory of the same name must not pre-exist in the zpool. To create a dataset, use:

 # zfs create <nameofzpool>/<nameofdataset>

It is then possible to apply ZFS-specific attributes to the dataset. For example, one could assign a quota limit to a dataset, including a nested dataset created for a specific directory:

 # zfs set quota=20G <nameofzpool>/<nameofdataset>/<directory>
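The limit and current usage can be checked afterwards with, for example:

 # zfs get quota,used <nameofzpool>/<nameofdataset>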

To see all the commands available in ZFS, use:

 $ man zfs

or:

 $ man zpool

Scrub

ZFS pools should be scrubbed at least once a week. To scrub the pool:

 # zpool scrub <pool>

To do automatic scrubbing once a week, set the following line in the root crontab:

# crontab -e
...
30 19 * * 5 zpool scrub <pool>
...

Replace <pool> with the name of the ZFS pool.

Check zfs pool status

To print a nice table with statistics about the ZFS pool, including read/write errors, use:

 # zpool status -v

Destroy a storage pool

ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool:

 # zpool destroy <pool>

And now when checking the status:

# zpool status
no pools available

To find the name of the pool, see #Check zfs pool status.

Export a storage pool

If a storage pool is to be used on another system, it will first need to be exported. It is also necessary to export a pool if it has been imported from the archiso, as the hostid is different in the archiso than in the booted system. The zpool command will refuse to import any storage pools that have not been exported. It is possible to force the import with the -f argument, but this is considered bad form.

Any attempt to import an un-exported storage pool will result in an error stating that the storage pool is in use by another system. This error can be produced at boot time, abruptly abandoning the system in the busybox console and requiring an archiso for an emergency repair, either by exporting the pool or by adding zfs_force=1 to the kernel boot parameters (which is not ideal). See #On boot the zfs pool does not mount stating: "pool may be in use from other system"

To export a pool,

 # zpool export bigdata

Rename a Zpool

Renaming a zpool that is already created is accomplished in 2 steps:

# zpool export oldname
# zpool import oldname newname

Setting a Different Mount Point

The mount point for a given zpool can be moved at will with one command:

# zfs set mountpoint=/foo/bar poolname

Swap volume

ZFS does not allow the use of swap files, but users can use a ZFS volume (ZVOL) as swap. It is important to set the ZVOL block size to match the system page size, which can be obtained by the getconf PAGESIZE command (default on x86_64 is 4KiB). Other options useful for keeping the system running well in low-memory situations are keeping it always synced and not caching the zvol data.

Create an 8GiB zfs volume:

# zfs create -V 8G -b $(getconf PAGESIZE) \
              -o primarycache=metadata \
              -o sync=always \
              -o com.sun:auto-snapshot=false <pool>/swap

Prepare it as a swap partition:

# mkswap -f /dev/zvol/<pool>/swap
# swapon /dev/zvol/<pool>/swap

To make it permanent, edit /etc/fstab. ZVOLs support discard, which can potentially help ZFS's block allocator and reduce fragmentation for all other datasets when/if swap is not full.

Add a line to /etc/fstab:

 /dev/zvol/<pool>/swap none swap discard 0 0

Keep in mind that the hibernate hook must be loaded before filesystems, so using a ZVOL as swap will not allow the use of the hibernate function. If you need hibernation, keep a partition for it.

Make sure to unmount all ZFS filesystems before rebooting the machine, otherwise any ZFS pools will refuse to be imported:

# zfs umount -a

Automatic snapshots

ZFS Automatic Snapshot Service for Linux

The zfs-auto-snapshot-gitAUR package from AUR provides a shell script to automate the management of snapshots, with each named by date and label (hourly, daily, etc), giving quick and convenient snapshotting of all ZFS datasets. The package also installs cron tasks for quarter-hourly, hourly, daily, weekly, and monthly snapshots. Optionally adjust the --keep parameter from the defaults depending on how far back the snapshots are to go (the monthly script by default keeps data for up to a year).

To prevent a dataset from being snapshotted at all, set com.sun:auto-snapshot=false on it. Likewise, more fine-grained control is possible per label: if, for example, no monthlies are to be kept for a dataset, set com.sun:auto-snapshot:monthly=false on it (see the examples below).
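For example (dataset names are placeholders):

 # zfs set com.sun:auto-snapshot=false <pool>/<dataset>
 # zfs set com.sun:auto-snapshot:monthly=false <pool>/<dataset>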

ZFS Snapshot Manager

The zfs-snap-managerAUR package from AUR provides a Python service that takes daily snapshots from a configurable set of ZFS datasets and cleans them out in a "grandfather-father-son" scheme. It can be configured to e.g. keep 7 daily, 5 weekly, 3 monthly and 2 yearly snapshots.

The package also supports configurable replication to other machines running ZFS by means of zfs send and zfs receive. If the destination machine runs this package as well, it could be configured to keep these replicated snapshots for a longer time. This allows a setup where a source machine has only a few daily snapshots locally stored, while on a remote storage server a much longer retention is available.

Troubleshooting

ZFS is using too much RAM

By default, ZFS caches file operations (ARC) using up to two-thirds of available system memory on the host. Remember, ZFS is designed for enterprise-class storage systems! To adjust the ARC size, add the following to the kernel parameters list:

  zfs.zfs_arc_max=536870912 # (for 512MB)
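Since ZFS is loaded as a kernel module, the same limit can alternatively be set through a modprobe configuration file. This is only a sketch: the file name is arbitrary, and if the module is loaded from the initramfs, the image needs to be regenerated afterwards for the setting to be picked up there:

/etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=536870912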

For a more detailed description, as well as other configuration options, see gentoo-wiki:zfs#arc.

Does not contain an EFI label

The following error will occur when attempting to create a zpool:

 /dev/disk/by-id/<id> does not contain an EFI label but it may contain partition

The way to overcome this is to use -f with the zpool create command.

No hostid found

An error that occurs at boot with the following lines appearing before initscript output:

 ZFS: No hostid found on kernel command line or /etc/hostid.

This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. Either place the spl hostid in the kernel parameters in the boot loader, for example by adding spl.spl_hostid=0x00bab10c.

The other solution is to make sure that there is a hostid in /etc/hostid, and then regenerate the initramfs image, which will copy the hostid into it:

 # mkinitcpio -p linux

On boot the zfs pool does not mount stating: "pool may be in use from other system"

Unexported pool

If the new installation does not boot because the zpool cannot be imported, chroot into the installation and properly export the zpool. See ZFS#Emergency chroot repair with archzfs.

Once inside the chroot environment, load the ZFS module and force-import the zpool:

# zpool import -a -f

now export the pool:

# zpool export <pool>

To see the available pools, use:

# zpool status

It is necessary to export a pool because of the way ZFS uses the hostid to track the system the zpool was created on. The hostid is generated partly based on the network setup. During the installation in the archiso the network configuration could be different, generating a different hostid than the one contained in the new installation. Once the zfs filesystem is exported and then re-imported in the new installation, the hostid is reset. See Re: Howto zpool import/export automatically? - msg#00227.

If ZFS complains about "pool may be in use" after every reboot, properly export the pool as described above, and then rebuild the ramdisk in the normally booted system:

# mkinitcpio -p linux

Incorrect hostid

Double check that the pool is properly exported. Exporting the zpool clears the hostid marking the ownership, so during the first boot the zpool should mount correctly. If it does not, there is some other problem.

Reboot again. If the zfs pool refuses to mount, it means the hostid is not yet correctly set in the early boot phase and it confuses zfs. Manually tell zfs the correct number; once the hostid is coherent across reboots, the zpool will mount correctly.

Boot using zfs_force and write down the hostid. The following is just an example.

% hostid
0a0af0f8

This number has to be added to the kernel parameters as spl.spl_hostid=0x0a0af0f8 (a sketch of doing this with GRUB is shown below). Another solution is writing the hostid inside the initramfs image; see the installation guide explanation about this.
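Assuming GRUB is the boot loader in use, the parameter can be appended to the command line in /etc/default/grub (the "..." stands for any existing parameters) and the configuration regenerated:

/etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="... spl.spl_hostid=0x0a0af0f8"

 # grub-mkconfig -o /boot/grub/grub.cfg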

Users can always ignore the check by adding zfs_force=1 to the kernel parameters, but it is not advisable as a permanent solution.

Tips and tricks

Embed the archzfs packages into an archiso

It is a good idea to make installation media with the needed software included. Otherwise, the latest archiso installation media burned to a CD or a USB key is required.

To embed zfs in the archiso, from an existing install, download the archiso package.

# pacman -S archiso

Start the process:

# cp -r /usr/share/archiso/configs/releng /root/media

Edit the packages.x86_64 file, adding these lines:

spl-utils-git
spl-git
zfs-utils-git
zfs-git

Edit the pacman.conf file, adding these lines (TODO, correctly embed keys in the installation media?):

[demz-repo-archiso]
SigLevel = Never
Server = http://demizerone.com/$repo/$arch

Add other packages in packages.both, packages.i686, or packages.x86_64 if needed and create the image.

# ./build.sh -v

The image will be in the /root/media/out directory.

More information about the process can be read in this guide or in the Archiso article.

If installing onto a UEFI system, see Unified Extensible Firmware Interface#Create UEFI bootable USB from ISO for creating UEFI compatible installation media.

Encryption in ZFS on linux

ZFS on Linux does not support encryption directly, but zpools can be created on dm-crypt block devices. Since the zpool is created on the plain-text abstraction, it is possible to have the data encrypted while having all the advantages of ZFS like deduplication, compression, and data robustness.

dm-crypt, possibly via LUKS, creates devices in /dev/mapper and their names are fixed, so you just need to change the zpool create commands to point to those names. The idea is to configure the system to create the /dev/mapper block devices and import the zpools from there. Since zpools can be created on multiple devices (raid, mirroring, striping, ...), it is important that all the devices are encrypted, otherwise the protection might be partially lost.


For example, an encrypted zpool can be created using plain dm-crypt (without LUKS) with:

# cryptsetup --hash=sha512 --cipher=twofish-xts-plain64 --offset=0 --key-file=/dev/sdZ --key-size=512 open --type=plain /dev/sdX enc
# zpool create zroot /dev/mapper/enc

In the case of a root filesystem pool, the mkinitcpio.conf HOOKS line will enable the keyboard for the password, create the devices, and load the pools. It will contain something like:

HOOKS="... keyboard encrypt zfs ..."

Since the /dev/mapper/enc name is fixed no import errors will occur.

Creating encrypted zpools works fine. But if you need encrypted directories, for example to protect your users' homes, ZFS loses some functionality.

ZFS will see the encrypted data, not the plain-text abstraction, so compression and deduplication will not work. The reason is that encrypted data always has high entropy, making compression ineffective, and even the same input produces different output (thanks to salting), making deduplication impossible. To reduce the unnecessary overhead it is possible to create a sub-filesystem for each encrypted directory and use eCryptfs on it.

For example, to have an encrypted home (the two passwords, encryption and login, must be the same):

# zfs create -o compression=off \
             -o dedup=off \
             -o mountpoint=/home/<username> \
             <zpool>/<username>
# useradd -m <username>
# passwd <username>
# ecryptfs-migrate-home -u <username>
<log in user and complete the procedure with ecryptfs-unwrap-passphrase>

Emergency chroot repair with archzfs

Here is how to use the archiso to get into the ZFS filesystem for maintenance.

Boot the latest archiso and bring up the network:

   # wifi-menu
   # ip link set eth0 up

Test the network connection:

   # ping google.com

Sync the pacman package database:

   # pacman -Syy

(optional) Install a text editor:

   # pacman -S vim

Add archzfs archiso repository to pacman.conf:

/etc/pacman.conf
[demz-repo-archiso]
Server = http://demizerone.com/$repo/$arch

Sync the pacman package database:

   # pacman -Syy

Add the archzfs maintainer's PGP key to the local (installer image) trust:

   # pacman-key --lsign-key 0EE7A126

Install the ZFS package group:

   # pacman -S archzfs-git

Load the ZFS kernel modules:

   # modprobe zfs

Import the pool:

   # zpool import -a -R /mnt

Mount the boot partitions (if any):

   # mount /dev/sda2 /mnt/boot
   # mount /dev/sda1 /mnt/boot/efi

Chroot into the ZFS filesystem:

   # arch-chroot /mnt /bin/bash

Check the kernel version:

   # pacman -Qi linux
   # uname -r

uname will show the kernel version of the archiso. If they are different, run depmod (in the chroot) with the correct kernel version of the chroot installation:

   # depmod -a 3.6.9-1-ARCH (version gathered from pacman -Qi linux)

This will load the correct kernel modules for the kernel version installed in the chroot installation.

Regenerate the ramdisk:

   # mkinitcpio -p linux

There should be no errors.

See also

Aaron Toponce has authored a 17-part blog on ZFS which is an excellent read.
  1. VDEVs
  2. RAIDZ Levels
  3. The ZFS Intent Log
  4. The ARC
  5. Import/export zpools
  6. Scrub and Resilver
  7. Zpool Properties
  8. Zpool Best Practices
  9. Copy on Write
  10. Creating Filesystems
  11. Compression and Deduplication
  12. Snapshots and Clones
  13. Send/receive Filesystems
  14. ZVOLs
  15. iSCSI, NFS, and Samba
  16. Get/Set Properties
  17. ZFS Best Practices