ZFS

This page provides basic guidelines for installing the native ZFS Linux kernel module.
Related
Installing Arch Linux on ZFS
ZFS on FUSE

ZFS is an advanced filesystem created by Sun Microsystems (now owned by Oracle) and released for OpenSolaris in November 2005. Features of ZFS include: pooled storage (integrated volume management -- zpool), copy-on-write, snapshots, data integrity verification and automatic repair (scrubbing), RAID-Z, and a maximum volume size of 16 exabytes. ZFS is licensed under the Common Development and Distribution License (CDDL).

Described as "The last word in filesystems", ZFS is stable, fast, secure, and future-proof. Because it is licensed under the GPL-incompatible CDDL, ZFS cannot be distributed along with the Linux kernel. This restriction, however, does not prevent a native Linux kernel module from being developed and distributed by a third party, as is the case with zfsonlinux.org (ZOL).

Installation

The ZFS kernel module is available in the AUR via the zfs package.
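
As a sketch, building the package by hand from the AUR might look like the following (if the package lists other AUR packages as dependencies, such as spl, build and install those first):

 $ tar -xzf zfs.tar.gz
 $ cd zfs
 $ makepkg -si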

Configuration

ZFS is considered a "zero administration" filesystem by its creators, so configuring it is very straightforward. Configuration is done primarily with two utilities: zfs and zpool.
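
As an illustration, day-to-day administration of an existing pool might look like the following sketch (the pool name bigdata and the dataset name home are illustrative):

 # zfs create bigdata/home
 # zfs set compression=on bigdata/home
 # zfs list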

mkinitcpio hook

If you are using ZFS on your root filesystem, you will need to add the zfs hook to mkinitcpio.conf; if you are not using ZFS for your root filesystem, the hook is not needed.

You will also need to change your kernel parameters to include the dataset you want to boot. Use zfs=bootfs to boot from the pool's bootfs property (set via zpool set bootfs=rpool/ROOT/arch rpool), or set zfs=<pool>/<dataset> to boot directly from a specific ZFS dataset.
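
For example, with Syslinux the kernel line might look like this sketch (the pool and dataset names are illustrative):

 APPEND zfs=rpool/ROOT/arch ro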

To see all available options for the ZFS hook:

 $ mkinitcpio -H zfs

To use the mkinitcpio hook, you will need to add zfs to your HOOKS in /etc/mkinitcpio.conf:

/etc/mkinitcpio.conf
...
HOOKS="base udev autodetect pata scsi sata encrypt zfs filesystems"
...

It is important to place this after any hooks which are needed to prepare the drive before it is mounted. For example, if your ZFS volume is encrypted, then you will need to place encrypt before the zfs hook to unlock it first.

Recreate the ramdisk

 # mkinitcpio -p linux

Add zfs to DAEMONS list

For ZFS to live up to its "zero administration" namesake, the zfs daemon must be loaded at startup. A benefit of this is that it is not necessary to mount your zpool in /etc/fstab; the daemon imports and mounts your ZFS pools automatically.

/etc/rc.conf
...
DAEMONS=(... @syslog-ng zfs dbus ...)
...

Then start the daemon if it is not already running:

 # rc.d start zfs

Create a storage pool

Use # parted --list to see a list of all available drives. It is not necessary to partition your drives before creating the ZFS filesystem; this is done automatically. However, if you want to completely wipe a drive before creating the filesystem, this can easily be done with the dd command.

 # dd if=/dev/zero of=/dev/<device>

It should go without saying, but be careful with this command!
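
If you only need to clear existing partition data rather than zero the whole drive, wiping the first few megabytes is usually sufficient; note, however, that GPT keeps a backup partition table at the end of the disk. A sketch, with a placeholder device name:

 # dd if=/dev/zero of=/dev/<device> bs=1M count=8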

Once you have the list of drives, it is time to get the IDs of the drives you will be using. The ZFS on Linux developers recommend using device IDs when creating ZFS storage pools of fewer than 10 devices. To find the IDs of your devices, simply run:

 $ ls -lah /dev/disk/by-id/

The ids should look similar to the following:

 lrwxrwxrwx 1 root root  9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JKRR -> ../../sdc
 lrwxrwxrwx 1 root root  9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0JTM1 -> ../../sde
 lrwxrwxrwx 1 root root  9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KBP8 -> ../../sdd
 lrwxrwxrwx 1 root root  9 Aug 12 16:26 ata-ST3000DM001-9YN166_S1F0KDGY -> ../../sdb

Now finally, create the ZFS pool:

 # zpool create -m <mount> <pool> raidz <ids>

For example:

 # zpool create -f -m /mnt/data bigdata raidz ata-ST3000DM001-9YN166_S1F0KDGY ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0KBP8 ata-ST3000DM001-9YN166_S1F0JTM1
  • create: subcommand to create the pool.
  • pool: This is the name of the pool. Change it to whatever you like.
  • -m: The mount point of the pool. If this is not specified, then the pool will be mounted at /<pool>.
  • raidz: This is the type of virtual device that will be created from the pool of devices. Raidz is a special implementation of RAID-5. See Jeff Bonwick's Blog -- RAID-Z for more information about raidz; a mirror-based alternative is sketched after this list.
  • ids: The names of the drives or partitions that you want to include in your pool. Get them from /dev/disk/by-id.
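
For comparison, a two-way mirror rather than a raidz vdev would be created like this (the device IDs are illustrative):

 # zpool create -m /mnt/data bigdata mirror ata-ST3000DM001-9YN166_S1F0JKRR ata-ST3000DM001-9YN166_S1F0JTM1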

If the command is successful, there will be no output. Running the $ mount command will show that your pool is mounted. Running # zpool status will show that your pool has been created.

# zpool status
  pool: bigdata
 state: ONLINE
 scan: none requested
config:

        NAME                                       STATE     READ WRITE CKSUM
        bigdata                                    ONLINE       0     0     0
          raidz1-0                                 ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0KDGY-part1  ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0JKRR-part1  ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0KBP8-part1  ONLINE       0     0     0
            ata-ST3000DM001-9YN166_S1F0JTM1-part1  ONLINE       0     0     0

errors: No known data errors

At this point it is a good idea to reboot your computer to make sure your ZFS pool is mounted at boot. It is best to deal with any errors before transferring your data.

Usage

To see all the commands available in ZFS, use:

 $ man zfs

or

 $ man zpool

Scrub

ZFS pools should be scrubbed at least once a week. To scrub your pool:

 # zpool scrub <pool>

To scrub automatically once a week, add the following line to your root crontab:

# crontab -e
...
30 19 * * 5 zpool scrub <pool>
...

Replace <pool> with the name of your ZFS storage pool.

Check zfs pool status

To print a nice table with statistics about your ZFS pool, including any read/write errors, use:

 # zpool status -v

Destroy a storage pool

ZFS makes it easy to destroy a mounted storage pool, removing all metadata about the ZFS device. This command destroys any data contained in the pool.

 # zpool destroy <pool>

Checking the status now shows:

# zpool status
no pools available

To find the name of your pool, see #Check zfs pool status.

Troubleshooting

does not contain an EFI label

The following error can occur when attempting to create a ZFS pool:

 /dev/disk/by-id/<id> does not contain an EFI label but it may contain partition

The way to overcome this is to use -f with the zpool create command.
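
For example, reusing the placeholders from the pool creation example above:

 # zpool create -f -m <mount> <pool> raidz <ids>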

No hostid found

This error occurs at boot, with the following line appearing before the initscript output:

 ZFS: No hostid found on kernel command line or /etc/hostid.

This warning occurs because the ZFS module does not have access to the spl hostid. There are two solutions for this. You can place your spl hostid in the kernel parameters in your boot loader, for example by adding spl_hostid=0x00bab10c.
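
The value for your machine can be printed with the hostid utility (the output shown here matches the example value above and is illustrative):

 $ hostid
 00bab10c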

The other solution is to make sure that there is a hostid in /etc/hostid and then regenerate the initramfs image, which will copy the hostid into it.
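
One way to write the current hostid to /etc/hostid is the following sketch; it assumes a little-endian machine, since gethostid() reads the file as four raw bytes with the least significant byte first. Verify the result on your system before relying on it:

 # printf "$(hostid | sed 's/\(..\)\(..\)\(..\)\(..\)/\\x\4\\x\3\\x\2\\x\1/')" > /etc/hostid

Then regenerate the initramfs: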

 # mkinitcpio -p linux

Tips and tricks

See also