Talk:ZFS


Bindmount

Where does this file go and what other steps are required?

I would expect: /etc/systemd/system/

Then: systemctl enable srv-nfs4-media.mount

Msalerno (talk) 02:36, 22 October 2015 (UTC)
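For later readers, a minimal sketch of what such a unit might contain, assuming the dataset is mounted at /mnt/media and should be bind-mounted to /srv/nfs4/media (both paths are illustrative; the unit file name must match the Where= path, escaped):

/etc/systemd/system/srv-nfs4-media.mount
[Unit]
Description=Bind mount media dataset for NFSv4

[Mount]
What=/mnt/media
Where=/srv/nfs4/media
Type=none
Options=bind

[Install]
WantedBy=multi-user.target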

resume hook

I think there is a typo on the page: it should say the resume hook instead of hibernate, but the limitation still applies. Can anyone confirm that the resume hook must appear before filesystems? Ezzetabi (talk) 09:49, 18 August 2015 (UTC)

Automatic build script

I'm fine with deleting the scripts. I only posted it because graysky's script never worked for me. Long stuff like this would be useful if the ArchWiki featured roll-up text. Severach (talk) 10:07, 9 August 2015 (UTC)

I'd suggest maintaining it in a GitHub repo. You get better versioning, syntax highlighting, cloning, etc. -- Alad (talk) 12:46, 9 August 2015 (UTC)
...or an anonymous gist if you don't have, nor want to create, a GitHub account. — Kynikos (talk) 08:40, 10 August 2015 (UTC)
Isn't that exactly what DKMS is doing? There are DKMS packages in the AUR. Das j (talk) 20:01, 10 January 2016 (UTC)

Automatic snapshots

zfs-auto-snapshot-gitAUR seems to have disappeared from the AUR. I haven't been able to find any information on why it was deleted; does anyone know? In any case, it should probably be removed from this page. warai otoko (talk) 03:21, 2 September 2015 (UTC)

On further inspection, looks like it may have gotten lost in the transition to AUR4. It should be resubmitted if we want to continue recommending it here; I've found it useful, at any rate. Warai otoko (talk) 04:43, 2 September 2015 (UTC)
I've recreated it. I use this script as well. --Chungy (talk) 02:49, 3 September 2015 (UTC)

Configuration

The configuration section has WAY too little information about which systemd unit(s) to enable. Thanks to @kerberizer I finally managed to get the mounts working with the command

# systemctl preset $(tail -n +2 /usr/lib/systemd/system-preset/50-zfs.preset | cut -d ' ' -f 2)

Z3ntu (talk) 15:21, 27 October 2016 (UTC)


@Z3ntu I have ZFS running on a few systems and never had to enable any services; it should work by default. If not, file a bug against the package.

Justin8 (talk) 22:04, 27 October 2016 (UTC)Reply[reply]

@Justin8 I tried it both in a virtual machine and on a physical computer: if you don't enable any services (I use "zfs-linux" from the archzfs repo), create a pool and reboot, the pool no longer exists (zpool status) and nothing gets mounted without the zfs-mount service (or whatever it is called). I found a related issue on GitHub: https://github.com/archzfs/archzfs/issues/61

Z3ntu (talk) 08:34, 28 October 2016 (UTC)Reply[reply]


There seems to be a new systemd target, zfs-import.target, which must be enabled for auto-mounting to work? Otherwise zfs-mount.service is executed before zfs-import-cache.service on my machine and nothing gets mounted. --Swordfeng (talk) 12:55, 8 November 2017 (UTC)

I think the section about systemd units should be rewritten to remove the stale information and bring the required command line to the fore, as mentioned in the GitHub issue linked from the page and repeated above by @Z3ntu. I've just been experimenting with ZFS and wasted some time on this that could have been avoided if the page had been updated back in 2016. I haven't changed the page except to add the required command line there, in case the other text is still relevant in a way I don't realise. I have just started using ZFS myself. starfry (talk) 16:07, 31 May 2018 (UTC)

I’ve set up ZFS recently and the systemctl enable commands from the Wiki page have worked fine for me so far. What do you mean by “old stale information,” and why is systemctl preset […] a “required command line?” —Auerhuhn (talk) 16:33, 31 May 2018 (UTC)

That's why I never deleted anything from the page. I found that the systemctl enable commands worked only up to the point that I rebooted: the zpools were not imported on boot. Searching for information led me to the command line in the GitHub issue, and that did work for me. I thought I should raise its profile a little because I wasted a few hours on it. I also realised I didn't enable the three services listed separately, just the ones at the top of the section (there are six services referenced by the GitHub issue), which is probably why I had the problem. Like I said, I have only just started with ZFS (I am testing in a VM with files rather than real devices) and it is possible that doing it in the small hours of the morning wasn't a good idea. The info on the page as it was left me asking questions which were answered by the GitHub issue and, in particular, that command-line sequence. You don't need that exact command line, but you do need the systemd services it enables (you could enable them by hand if you preferred). Maybe you don't need all six of them. But, as it was, it wasn't clear (to me). starfry (talk) 16:07, 31 May 2018 (UTC)
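For later readers: the preset command above simply enables whatever units are listed in /usr/lib/systemd/system-preset/50-zfs.preset. On a typical archzfs install that amounts to roughly the following set, but the exact list depends on the package version, so check the preset file rather than copying this blindly:

# systemctl enable zfs-import-cache.service zfs-import.target zfs-mount.service zfs-share.service zfs-zed.service zfs.target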

Scrub

The advice to scrub at least once a week is completely unsubstantiated and probably incorrect in almost all situations. Advice should be accompanied by some argumentation and preferably links to support the claim.

There is a good blog from Oracle about when and why (or not), to scrub: https://blogs.oracle.com/wonders-of-zfs-storage/disk-scrub-why-and-when-v2

I wanted to edit the page to include the most important bits about scrubbing, but figured I'd throw it up for discussion first. What do people think about this? — Mouseman (talk) 13:15, 21 October 2018 (UTC)

I have no strong opinion but the most pragmatic/helpful part of Oracle’s article appears to be the list of three tips near the end. I feel paraphrasing those three points in the wiki would be a good thing, together with an external link to Oracle’s article (which is pretty good) to cover the details. — Auerhuhn (talk) 13:51, 21 October 2018 (UTC)
Thanks for the reply. I agree, although I was thinking of including the 'Should I do this?' part too. I'll let this sit here for a few days, see what else turns up, and edit the page next week or over the weekend. — Mouseman (talk) 17:38, 21 October 2018 (UTC)
I was curious when I saw the factual accuracy banner. I've been reading Aaron Toponce's guide to ZFS administration which is an extremely thorough walkthrough. In his chapter on scrubbing and resilvering he lists two heuristics. He suggests, "[t]he recommended frequency at which you should scrub the data depends on the quality of the underlying disks. If you have SAS or FC disks, then once per month should be sufficient. If you have consumer grade SATA or SCSI, you should do once per week." That might be the source of the suggestion? I'd love to hear more from people who have more experience with ZFS. --Metatinara (talk) 04:43, 23 November 2018 (UTC)
Your reply reminded me that I wanted to edit the page as discussed above. I agree that guide is very good; it helped me greatly when I got started with ZFS. But again, I have to challenge the advice. On what basis should consumer-grade hard disks be scrubbed once a week? As far as I am concerned, there is no evidence, no data to support such a claim. How likely is bitrot to occur due to degradation or solar flares? EMP? How many bits can flip before data becomes irreparable? If we had those numbers from different vendors in different situations, we could actually make an educated guess at how often scrubs should take place. I don't know of any such data or research. I know I am only one guy with limited experience, but here it is: I have been using ZFS for about 6 years in three different configurations, all consumer or prosumer hardware. Before that, I used parchive and later par2 for, I don't know, 20-odd years to create 10% parity sets on important live data and offline backups, so that I could repair corruption. I would stash away old hard disks as backups like this. In all that time, I had to use par2 only once, because a hard drive went bad and ran out of reallocated sectors. And it wasn't even an old disk; it was still in warranty. Not once did a scrub actually have to repair something. Not once did I ever find evidence of bitrot. That doesn't mean it doesn't exist, because I know it does, but based on my own experience I think it is extremely unlikely to occur, and when it does, ZFS can fix it unless it's too much; but how long does that take? So based on my own experience, I am running it once every few months and I'll likely decrease the frequency to once every 6 months or so. Mouseman (talk)
Those are some great thoughts and anecdotes; thank you for sharing them! I think the way you went about your edit is good. Giving some resources to help you make the decision seems like a better approach than "do this without any justification." I appreciate the direct approach that most of the Arch Wiki articles take, but in this case, it seems like more information and less prescription is the better approach. — Metatinara (talk) 17:36, 24 November 2018 (UTC)
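For anyone who wants to act on whatever interval they settle on, here is a minimal sketch of a templated scrub timer; the unit names and the monthly interval are purely illustrative, and the main page may describe a different setup:

/etc/systemd/system/zfs-scrub@.timer
[Unit]
Description=Periodic zpool scrub of %i

[Timer]
OnCalendar=monthly
Persistent=true

[Install]
WantedBy=timers.target

/etc/systemd/system/zfs-scrub@.service
[Unit]
Description=zpool scrub of %i

[Service]
Type=oneshot
ExecStart=/usr/bin/zpool scrub %i

Enabling zfs-scrub@tank.timer (pool name illustrative) then starts zfs-scrub@tank.service on that schedule.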

xattr=sa under tuning?

I've seen a lot of people setting xattr=sa, which disables the creation of hidden subdirectories for storing extended attributes and stores them directly in inodes instead. This has performance advantages and makes the output of snapshot diffs cleaner. Should we add it to the Tuning section? Hinzundcode (talk) 22:12, 20 March 2019 (UTC)
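For reference, applying it is just a property set, and it only affects newly written extended attributes (dataset name illustrative):

# zfs set xattr=sa tank/dataset
# zfs get xattr tank/dataset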

Why use partition labels?

The page mentions creating a pool using partition labels. Why? Creating partitions shouldn't be recommended or needed, as ZFS will create/overwrite the partition table itself. Should this section be removed? Francoism (talk) 10:35, 31 March 2019 (UTC)

Unlock encrypted home at boot time

There is an example /etc/systemd/system/zfskey-tank@.service, but systemd can't deal with unit names that contain slashes:

systemctl enable zfskey-zroot@zroot/data/home
Failed to enable unit: File zfskey-zroot@zroot/data/home: Invalid argument

There is a second example /etc/systemd/system/zfs-load-key.service that has

ExecStart=/usr/bin/bash -c '/usr/bin/zfs load-key -a'

That also does not work when a password is needed: it doesn't prompt, it just fails at startup. So I seem to be stuck with having one password for "all" encrypted datasets (OK, since there's only one) and explicitly prompting for it. At least I got that working on one system recently, but I forgot the details, and now it's down with a hardware failure and I'm trying to solve it again on another system.

I would like to have a way to log in as a particular user and then unlock that user's home directory (which should be a separate dataset) with the same password. From the command line as part of the normal login process. Any ideas?

—This unsigned comment is by Ecloud (talk) 07:26, 13 December 2019 (UTC). Please sign your posts with ~~~~!

Please sign your comments. You should use dashes in the instance name instead, e.g. zfskey-tank@my-dataset.service. If you want to load all keys, just use zfs-load-key.service. Make sure the datasets are mounted at boot.
Francoism (talk) 12:54, 14 December 2019 (UTC)Reply[reply]
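To expand on the dashes: a valid instance name for a dataset path can be generated with systemd-escape, and a template unit can get the original path back via the unescaped %I specifier. A sketch, using the dataset and unit names from the example above:

$ systemd-escape "zroot/data/home"
zroot-data-home
# systemctl enable zfskey-zroot@$(systemd-escape "zroot/data/home").service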

Actual implication?

Note: This situation sometimes locks down the normal rolling update process by unsatisfied dependencies because the new kernel version, proposed by update, is unsupported by ZFSonLinux.

What does this actually mean in practice? Does it mean that updates block for everyone using Arch? Or that updates for zfs users are blocked? Or that updates for zfs users go ahead but things break, and if so, what? Or something else? Beepboo (talk) 10:39, 22 March 2020 (UTC)

So it appears that, with the DKMS method at least, upgrades of the linux package always succeed (assuming ZFS is not on the root filesystem), but the zfs modules won't load until the DKMS package supports that kernel version. This means that you'll boot, but zfs list etc. will fail. Beepboo (talk)

IgnorePkg?

  Tip: Add an IgnorePkg entry to pacman.conf to prevent these packages from upgrading when doing a regular update.

Why would this be useful? I.e. why wouldn't we want these packages upgrading?

Clicking on IgnorePkg says that adding entries isn't a good idea. If this is set, should it be set forever or just for a while? Beepboo (talk) 12:40, 22 March 2020 (UTC)
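For context, the kind of entry the Tip refers to would look like this in /etc/pacman.conf; the package names are just an assumption for the stock kernel plus archzfs case:

[options]
IgnorePkg = linux linux-headers zfs-linux zfs-utils

pacman then skips these packages during a regular -Syu until the entry is removed again.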

Too much memory section

I think the entry on too much memory could do with being revised. E.g. should the entries (min/max) really be added as kernel parameters (e.g. grub) or via options entries in /etc/modprobe.d?

At least for the DKMS version, mine are set via a file I created in /etc/modprobe.d. Beepboo (talk) 11:56, 27 March 2020 (UTC)
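For reference, the modprobe.d form looks like this; the 4 GiB cap is purely illustrative (zfs_arc_max takes bytes), and zfs_arc_min can be set the same way. If the module is loaded from the initramfs, the image presumably has to be regenerated (mkinitcpio -P) for the setting to apply at early boot:

/etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=4294967296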

aclinherit=passthrough?

Is it really beneficial to set aclinherit=passthrough on top of acltype=posixacl and xattr=sa?

Here's the explanation of the attribute from the latest ZFS (0.8.3) manual:

aclinherit=discard|noallow|restricted|passthrough|passthrough-x
  Controls how ACEs are inherited when files and directories are created.
  discard        does not inherit any ACEs.
  noallow        only inherits inheritable ACEs that specify "deny" permissions.
  restricted     default, removes the write_acl and write_owner permissions when the ACE is inherited.
  passthrough    inherits all inheritable ACEs without any modifications.
  passthrough-x  same meaning as passthrough, except that the owner@, group@, and everyone@ ACEs inherit the execute permission only if the file creation mode also requests the execute bit.

  When the property value is set to passthrough, files are created with a mode determined by the inheritable ACEs.  If no inheritable ACEs exist that affect the mode, then the mode is set in accordance to the requested mode from the application.

  The aclinherit property does not apply to POSIX ACLs.

So the difference between restricted and passthrough is that the latter preserves write_acl and write_owner, which seem to be Solaris standards. And it explicitly says 'The aclinherit property does not apply to POSIX ACLs'.

FrederickZh (talk) 06:44, 2 May 2020 (UTC)
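For anyone comparing notes, the values in effect can be checked per dataset (dataset name illustrative):

$ zfs get acltype,xattr,aclinherit tank/data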

zfs_vdev_scheduler deprecation?

It seems that the effect of this kernel module option was deprecated and then removed by this PR: https://github.com/openzfs/zfs/pull/9609/files

From the documentation/comment (which was also removed by that commit) I assume that the kernel module now respects the system's own block-device-level scheduler settings.

Thaewrapt (talk) 11:22, 19 September 2020 (UTC)

Setting options for sharenfs

I'm not sure where the appropriate place to add this information would be, but I struggled for a while trying to find out how to pass additional options to the NFS export, since we don't use /etc/exports. After looking at the man page [1] I figured out that any options set in the sharenfs property are used; I think this alone would be useful to put in, even as a blurb.

Also, it looks like there's a bug in the sharenfs handling code. Per this issue [2], you have to specify the "rw" or "ro" option after the other options (sec=krb5 in my case) for them to take effect at all. That is, my sharenfs property ends up being 'sharenfs="sec=krb5,rw=@10.69.69.0/24"'.

Jadelclemens (talk) 07:26, 6 May 2021 (UTC)
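For reference, setting and checking such a property from the shell looks like this; the dataset name is illustrative and the options are the ones from the comment above:

# zfs set sharenfs="sec=krb5,rw=@10.69.69.0/24" tank/data
# zfs get sharenfs tank/data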

unlock at boot time service with passphrase doesn't work with automount

The systemd service provided does not work with automounting when using a passphrase, since there is no ordering dependency between it and zfs-mount.service. The only dependency specified is via WantedBy, which makes it start in parallel with zfs-mount.service. I propose the following systemd unit instead, based on this issue from the ZFS GitHub: https://github.com/openzfs/zfs/issues/8750

/etc/systemd/system/zfs-load-key@.service
[Unit]
Description=Load %I encryption keys
After=zfs-import.target
Before=systemd-user-sessions.service zfs-mount.service
Requires=zfs-import.target
DefaultDependencies=no

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/bash -c 'until (systemd-ask-password "Encrypted ZFS password for %I" --no-tty | zfs load-key %I); do echo "Try again!"; done'

[Install]
WantedBy=zfs-mount.service

It uses DefaultDependencies=no to prevent a cycle which would be introduced by simply using Before.

Nikitau (talk) 16:53, 20 May 2021 (UTC)Reply[reply]

Is the parameter ashift really uneditable after pool creation?

When I run zpool:

the following properties are supported:
	PROPERTY             EDIT   VALUES
	ashift                YES   <ashift, 9-16, or 0=default>

Also, in zpoolprops(8) § DESCRIPTION:

The following properties can be set at creation time and import time, and later changed with the zpool set command:
     ashift=ashift

Chuang (talk) 04:33, 20 June 2021 (UTC)Reply[reply]
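For anyone checking, the property can be read and changed on a live pool (pool name illustrative). As far as I understand, a changed value only acts as the default for vdevs added afterwards; existing vdevs keep the ashift they were created with:

# zpool get ashift tank
# zpool set ashift=12 tank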

Any good references how to enable zstd compression?

I would like to add this information to the wiki; however, I can't seem to find any good references on how to use/enable zstd on ZFS 2.0.4. If someone has good guides, please let me know. :) Francoism (talk) 06:04, 23 June 2021 (UTC)
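From what I can tell, on OpenZFS 2.0+ it is just another value of the compression property, optionally with a level suffix (dataset name illustrative; only newly written data is compressed with the new setting):

# zfs set compression=zstd tank/dataset
# zfs set compression=zstd-10 tank/dataset
# zfs get compression,compressratio tank/dataset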

Problematic implications of zfs-mount.service with ZFS on root

I've had a system running with ZFS on root and today I finally switched to zfs-mount-generator and suggest you do the same if you have anything important for/during the boot process on ZFS. In my case, it was /var/log/journal, as I have /var on a non-root dataset. The problem is that systemd-journal-flush.service uses RequiresMountsFor=/var/log/journal, except var.mount doesn't exist until zfs-mount.service runs because systemd doesn't know there's supposed to be a mountpoint there. With the default of Storage=auto in journald.conf(5), this presumably resulted in journald never switching from /run to /var, culminating in a couple months of lost logs, as well as boot history under journalctl --list-boots being completely broken. I only noticed because I compared the output to last reboot on a whim because it didn't feel right, only to find severe discrepancies between the two. I can only assume other services with similar mount dependencies are equally at risk of breakage.

FallenWarrior2k (talk) 14:44, 23 September 2021 (UTC)Reply[reply]

I'd agree with using zfs-mount-generator over zfs-mount.service for boot logs. I managed to get it to save logs with an override file, but it doesn't start logging until after zfs-mount.service has completed.
#/etc/systemd/system/systemd-journald.service.d/override.conf

[Unit]
Requires=zfs-mount.service
After=zfs-mount.service
Echo-84 (talk) 16:33, 3 January 2022 (UTC)Reply[reply]
If you want to solve this with overrides, you need to put the dependency on systemd-journal-flush.service, not systemd-journald.service itself. Journald normally starts at early boot and initially logs to /run/log/journal. Then, once the necessary file systems are mounted (usually ordered with RequiresMountsFor), systemd-journal-flush.service issues a journalctl --flush, which makes journald switch to /var/log/journal. The way you have done it, however, prevents journald from starting at all until ZFS mounting is complete.
In any case, even if you do fix that, it is a workaround, yes, but not a solution. After all, the point of RequiresMountsFor is that you don't have to manually create dependencies for every unit that depends on a mount point, i.e. units declare what directories they need and systemd ensures those are mounted before the unit is started. Journald was just an example; this applies to any unit that will misbehave if its mount dependencies are not honored. This means you'd have to manually inspect every unit file that uses RequiresMountsFor to see if it's affected and add an appropriate override. With zfs-mount-generator all of this works as intended and setup was effortless by just following the steps that already exist in the wiki.
FallenWarrior2k (talk) 17:02, 3 January 2022 (UTC)Reply[reply]
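For anyone switching, a rough sketch of the zfs-mount-generator setup as described in zfs-mount-generator(8); the pool and dataset names are illustrative and the main page has the authoritative steps. The history_event-zfs-list-cacher.sh zedlet must be enabled so ZED keeps the cache file up to date:

# systemctl enable --now zfs-zed.service
# mkdir -p /etc/zfs/zfs-list.cache
# touch /etc/zfs/zfs-list.cache/tank
# zfs set canmount=on tank/data

The last command just generates a pool history event so that ZED refreshes /etc/zfs/zfs-list.cache/tank; any property change will do.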

Add instructions on mounting using fstab

The OpenZFS guide uses /etc/fstab instead of the zfs-mount service, reasoning that it provides more precise control than the service. Can another section be added to ZFS#Configuration with instructions for /etc/fstab?
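As a sketch of what such a section might contain (dataset and mount point illustrative): switch the dataset to legacy mounting and add an ordinary fstab entry.

# zfs set mountpoint=legacy tank/data

/etc/fstab
tank/data   /data   zfs   defaults   0 0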

zfs hook in mkinitcpio.conf

Hi, I'm starting to use ZFS and noticed that the zfs hook is required in the HOOKS= line of /etc/mkinitcpio.conf. I received the error "Mount ZFS filesystems was skipped because of a failed condition check (ConditionPathIsDirectory=/sys/module/zfs)." and, after searching around, found out the hook is required. However, there is no mention of this requirement in the paragraphs about installation in this wiki, just for a couple of special cases. I will not add it because I'm just learning about ZFS, so maybe I did something wrong or missed something else; somebody with more experience please review this, thanks! --Roqz (talk) 18:16, 31 May 2022 (UTC)

So, to add to the previous comment: I'm creating a new pool with new drives, and noticed that I had to run 'zpool set cachefile=/etc/zfs/zpool.cache POOLNAME' and then add the zfs hook and regenerate the initcpio image. Otherwise the pool is not detected at boot and I had to import it manually every time. --Roqz (talk) 22:47, 21 December 2022 (UTC)
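Putting the two comments together, the sequence being described is roughly the following; the pool name and the exact HOOKS line are illustrative, the relevant point being that zfs comes before filesystems:

# zpool set cachefile=/etc/zfs/zpool.cache tank

/etc/mkinitcpio.conf
HOOKS=(base udev autodetect modconf block keyboard zfs filesystems)

# mkinitcpio -P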

ZFS on Linux vs OpenZFS

I see lots of references to ZFS on Linux on this page, and was considering updating them to OpenZFS. The merge/rename happened in 2020, references to ZoL around the internet are pretty scarce, and the official ZoL website only links to OpenZFS releases and documentation. Any thoughts? Akarine (talk) 15:57, 2 September 2022 (UTC)

add zfs-zed.service to "Using zfs-mount.service"

The section "Using zfs-mount.service" does not mention to enable "zfs-zed.service". Now I made the following experience: I had an available but inactive hot-spare. I pulled one of my mirrored devices out. The hot spare did not turn into "inuse". Only after I started "zfs-zed.service" it started to work automaticaly. IMO activating zfs-zed.service is needed. --PMay (talk) 20:58, 19 January 2023 (UTC)Reply[reply]

Kernel module loading

udev automatically loads the zfs module if it detects a ZFS file system on a block device, whether a physical disk or a loop device.

So before you create your first ZFS file system, you have to load the module once manually, as the ZFS CLI tools explain:

The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.

I think this should be mentioned on the page. Sausix (talk) 20:22, 17 October 2023 (UTC)
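A sketch of the one-time manual load, plus an optional modules-load.d entry to have it loaded at every boot (the file name is arbitrary):

# modprobe zfs

/etc/modules-load.d/zfs.conf
zfs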

How to downgrade (easily)

There is a Tip about how you can downgrade from the ZFS repo if your kernel is newer.

The ZFS repo constantly lags behind the official Arch repos. Now, how exactly (with what commands) is one supposed to do the downgrade?

There are no kernel packages in the ZFS repo (I believe that is just a typo in the sentence); they probably mean something else. However, that would be handy, as it would make downgrading with pacman easier.

But with the usual downgrade methods, one is faced with dependency hell: one cannot downgrade the kernel, as the currently installed zfs modules depend on the current kernel!

There is no easy way (that I can find) to downgrade the kernel and install zfs modules newer than the currently installed ones. A lot of manual downloading and juggling is needed. Wild Penguin (talk) 17:23, 12 December 2023 (UTC)

Actually, the easiest approach is probably to remove zfs-[your kernel], downgrade the kernel, and re-install zfs (a sketch of the commands is below). Not the cleanest way, but it should work.

But using DKMS is the most hassle-free option. Wild Penguin (talk) 12:44, 13 December 2023 (UTC)
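For the record, the remove/downgrade/re-install dance would look roughly like this for the stock kernel with zfs-linux; the package names and the cached package file are illustrative:

# pacman -R zfs-linux
# pacman -U /var/cache/pacman/pkg/linux-<older version>-x86_64.pkg.tar.zst
# pacman -S zfs-linux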

Unlock at login time: PAM may be incorrect

#Unlock at login time: PAM may be incorrect. I followed the section's instructions and ended up with a system that did not mount the encrypted dataset, but just created an unencrypted home/user directory.

I followed this post on Reddit, putting the suggested file alterations at the top of /etc/pam.d/system-auth and /etc/pam.d/su-l.

Should the Reddit instructions be added to this section? Should this section be replaced by them?

Kinifwyne (talk) 23:20, 5 March 2024 (UTC)Reply[reply]
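For comparison, one approach worth noting is the pam_zfs_key module shipped with OpenZFS; a minimal sketch, assuming the encrypted home datasets live under zroot/data/home. This is not necessarily what either the wiki section or the Reddit post describes, so treat it only as a starting point:

auth       optional   pam_zfs_key.so   homes=zroot/data/home
password   optional   pam_zfs_key.so   homes=zroot/data/home
session    optional   pam_zfs_key.so   homes=zroot/data/home

These lines would go into the PAM files mentioned above (system-auth and su-l).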