Where does this file go and what other steps are required?
I would expect: /etc/systemd/system/
Then: systemctl enable srv-nfs4-media.mount
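For reference, a minimal mount unit might look like the following. Note the file name must match the mount point with slashes mapped to dashes (srv-nfs4-media.mount ↔ /srv/nfs4/media); the What= value and bind-mount setup here are assumptions based on the unit name, so adjust to your actual device or source directory:

```
# /etc/systemd/system/srv-nfs4-media.mount (sketch)
[Unit]
Description=Mount media under the NFSv4 export root

[Mount]
# What= is an assumption; point it at your real device or use Options=bind
What=/media
Where=/srv/nfs4/media
Type=none
Options=bind

[Install]
WantedBy=multi-user.target
```

After placing the file, run systemctl daemon-reload before enabling it.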
I think there is a typo on the page: it should state the resume hook instead of hibernate, but the limitation still applies. Can anyone confirm that the resume hook must appear before filesystems? Ezzetabi (talk) 09:49, 18 August 2015 (UTC)

Automatic build script
I'm fine with deleting the scripts. I only posted it because graysky's script never worked for me. Long stuff like this would be useful if the ArchWiki featured roll-up text. Severach (talk) 10:07, 9 August 2015 (UTC)
- I'd suggest maintaining it in a github repo. You get better versioning, syntax highlighting, cloning, etc. -- Alad (talk) 12:46, 9 August 2015 (UTC)
- Isn't that exactly what DKMS does? There are DKMS packages in the AUR. Das j (talk) 20:01, 10 January 2016 (UTC)
- On further inspection, looks like it may have gotten lost in the transition to AUR4. It should be resubmitted if we want to continue recommending it here; I've found it useful, at any rate. Warai otoko (talk) 04:43, 2 September 2015 (UTC)
The configuration section has far too little information about which systemd unit(s) to enable. Thanks to @kerberizer I finally managed to get the mounts working with the command
# systemctl preset $(tail -n +2 /usr/lib/systemd/system-preset/50-zfs.preset | cut -d ' ' -f 2)
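For anyone wondering what that pipeline actually selects: 50-zfs.preset is a list of "enable <unit>" lines (first line is a comment, hence the tail -n +2), and the cut just extracts the unit names, which systemctl preset then enables. A self-contained illustration with a reconstructed preset file — check the file shipped by your installed zfs package, as the exact unit list may differ:

```shell
# Recreate the preset file format (contents here are an approximation
# of what the zfs package ships; verify against your installed copy)
cat > /tmp/50-zfs.preset <<'EOF'
# ZFS is not enabled by default
enable zfs-import-cache.service
enable zfs-import.target
enable zfs-mount.service
enable zfs-share.service
enable zfs-zed.service
enable zfs.target
EOF

# Extract the unit names exactly as the command on this page does
tail -n +2 /tmp/50-zfs.preset | cut -d ' ' -f 2
```

The same units could be enabled by hand with systemctl enable, which is all the preset command does here.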
@Z3ntu I have ZFS running on a few systems and never had to enable any services; it should work by default. If not, then file a bug on the package.
@Justin8 I tried it both in a virtual machine and on a physical computer: if you don't enable any services (I use "zfs-linux" from the archzfs repo), create a pool and reboot, the pool no longer exists (zpool status), and pools don't get mounted without the zfs-mount service (or whatever it is called). I found a related issue on github: https://github.com/archzfs/archzfs/issues/61
There seems to be a new systemd target zfs-import.target which must be enabled in order to auto-mount? Otherwise zfs-mount.service will be executed before zfs-import-cache.service on my machine and nothing will be mounted. --Swordfeng (talk) 12:55, 8 November 2017 (UTC)
I think the section about systemd units should be rewritten to remove the old stale information and bring the required command line to the fore, as mentioned in the github issue linked from the page and repeated above by @Z3ntu. I've just been experimenting with ZFS and wasted a little time on this that could have been avoided if the page had been updated back in 2016. I haven't changed the page except to add the required command line, in case the other text is still relevant in ways I don't realise. I have just started using ZFS myself. starfry (talk) 16:07, 31 May 2018 (UTC)
- I’ve set up ZFS recently and the systemctl enable commands from the Wiki page have worked fine for me so far. What do you mean by “old stale information,” and why is systemctl preset […] a “required command line?” —Auerhuhn (talk) 16:33, 31 May 2018 (UTC)
That's why I never deleted anything from the page. I found that the systemctl enable commands worked up to the point that I rebooted: the zpools were not imported on boot. Searching for information led me to the command line in the github post, and that did work for me. I thought I should raise its profile a little because I wasted a few hours on it. I also realised I didn't enable the three services listed separately, just the ones at the top of the section (there are six services referenced by the github issue), which is probably why I had the problem! Like I said, I have only just started with ZFS (I am testing in a VM with files rather than real devices), and it is possible that doing it in the small hours of the morning wasn't a good idea. The info on the page as it was left me asking questions which were answered by the github issue and, in particular, that command line sequence. You don't need that command line, but you do need the systemd services that it enables (you could enable them by hand if you preferred). Maybe you don't need all six of them. But, as it was, it wasn't clear (to me). starfry (talk) 16:07, 31 May 2018 (UTC)
The advice to scrub at least once a week is completely unsubstantiated and probably incorrect in almost all situations. Advice should be accompanied by some argumentation, and preferably links, to support the claim.
There is a good blog from Oracle about when and why (or not), to scrub: https://blogs.oracle.com/wonders-of-zfs-storage/disk-scrub-why-and-when-v2
I wanted to edit the page to include the most important bits about scrubbing, but figured I'd throw it up for discussion first, what do people think about this? Mouseman — (talk) 13:15, 21 October 2018 (UTC)
- I have no strong opinion but the most pragmatic/helpful part of Oracle’s article appears to be the list of three tips near the end. I feel paraphrasing those three points in the wiki would be a good thing, together with an external link to Oracle’s article (which is pretty good) to cover the details. — Auerhuhn (talk) 13:51, 21 October 2018 (UTC)
- I was curious when I saw the factual accuracy banner. I've been reading Aaron Toponce's guide to ZFS administration which is an extremely thorough walkthrough. In his chapter on scrubbing and resilvering he lists two heuristics. He suggests, "[t]he recommended frequency at which you should scrub the data depends on the quality of the underlying disks. If you have SAS or FC disks, then once per month should be sufficient. If you have consumer grade SATA or SCSI, you should do once per week." That might be the source of the suggestion? I'd love to hear more from people who have more experience with ZFS. --Metatinara (talk) 04:43, 23 November 2018 (UTC)
- Your reply reminded me that I wanted to edit the page as discussed above. I agree that guide is very good; it helped me greatly when I got started with ZFS. But again, I have to challenge the advice. On what basis should consumer grade hard disks be scrubbed once a week? As far as I am concerned, there is no evidence, no data to support such a claim. How likely is bitrot to occur due to degradation or solar flares? EMP? How many bits can flip before data becomes irreparable? If we had those numbers from different vendors in different situations, we could actually make an educated guess at how often scrubs should take place. I don't know of any such data or research. I know I am only one guy with limited experience, but here it is: I have been using ZFS for about 6 years in three different configurations, all consumer or prosumer hardware. Before that, I used parchive and later par2 for some 20-odd years to create 10% parity sets on important live data and offline backups, so that I could repair corruption. I would stash away old hard disks as backups like this. In all that time, I had to use par2 only once, because a hard drive went bad and ran out of reallocated sectors. And it wasn't even an old disk; it was still in warranty. Not once did a scrub actually have to repair something. Not once did I ever find evidence of bitrot. That doesn't mean it doesn't exist, because I know it does, but based on my own experience I think it is extremely unlikely to occur, and when it does, ZFS can fix it unless it's too much; but how long does that take? So based on my own experience, I am running it once every few months, and I'll likely decrease the frequency to once every 6 months or so. Mouseman (talk)
- Those are some great thoughts and anecdotes; thank you for sharing them! I think the way you went about your edit is good. Giving some resources to help you make the decision seems like a better approach than "do this without any justification." I appreciate the direct approach that most of the Arch Wiki articles take, but in this case, it seems like more information and less prescription is the better approach. — Metatinara (talk) 17:36, 24 November 2018 (UTC)
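Whatever frequency one settles on, a periodic scrub is easy to automate with a systemd timer. A sketch under assumptions (unit names and monthly cadence are examples, not anything the package ships):

```
# /etc/systemd/system/zfs-scrub@.timer (sketch)
[Unit]
Description=Monthly zpool scrub of %i

[Timer]
OnCalendar=monthly
Persistent=true

[Install]
WantedBy=timers.target
```

```
# /etc/systemd/system/zfs-scrub@.service (sketch)
[Unit]
Description=zpool scrub of %i

[Service]
Type=oneshot
ExecStart=/usr/bin/zpool scrub %i
```

Usage would then be, e.g., systemctl enable --now zfs-scrub@tank.timer for a pool named tank.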
xattr=sa under tuning?
I've seen a lot of people setting xattr=sa which disables the creation of hidden subdirectories for storing extended attributes and stores them directly in inodes instead. This has performance advantages and makes the output of snapshot diffs cleaner. Should we add it to the Tuning section? Hinzundcode (talk) 22:12, 20 March 2019 (UTC)
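For reference, it is a per-dataset property set with a single command; the dataset name below is a placeholder, and note that SA-based xattrs are a Linux-specific feature:

```
# Store extended attributes in the dnode instead of hidden directories
# ("tank/data" is a placeholder dataset name)
zfs set xattr=sa tank/data
zfs get xattr tank/data    # verify the setting took effect
```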
Why use partition labels?
The page mentions creating a pool using partition labels, why? It shouldn't be recommended/needed to create partitions as ZFS will create/overwrite the partition table. Should this section be removed? Francoism (talk) 10:35, 31 March 2019 (UTC)
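For context, the main thing that matters is that the pool is created with persistent device names; passing a whole disk via /dev/disk/by-id is the form commonly recommended, letting ZFS partition the disk itself. A sketch (the by-id path is illustrative, not a real device):

```
# Create a pool from a whole disk using a persistent identifier
# (list your own identifiers with: ls -l /dev/disk/by-id/)
zpool create tank /dev/disk/by-id/ata-EXAMPLE_MODEL_SERIAL
```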
Unlock encrypted home at boot time
There is an example /etc/systemd/system/zfskey-tank@.service but systemd can't deal with services with slashes in them:
 systemctl enable zfskey-zroot@zroot/data/home
 Failed to enable unit: File zfskey-zroot@zroot/data/home: Invalid argument
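The slash itself is the problem: systemd instance names cannot contain "/", so the dataset name has to be escaped first. systemd-escape does this, and for a plain dataset path it simply maps "/" to "-". A sketch of the same mapping, shown with sed so it runs without systemd:

```shell
# systemd-escape would turn zroot/data/home into zroot-data-home;
# the equivalent simple substitution shown with sed:
dataset='zroot/data/home'
escaped=$(printf '%s' "$dataset" | sed 's|/|-|g')
echo "$escaped"

# The enable command would then be (shown, not executed here):
echo "systemctl enable zfskey-zroot@${escaped}.service"
```

The template unit would receive the escaped name as %i; whether it unescapes it back (%I or systemd-escape --unpath) before calling zfs load-key depends on how the ExecStart line is written.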
There is a second example /etc/systemd/system/zfs-load-key.service that has
ExecStart=/usr/bin/bash -c '/usr/bin/zfs load-key -a'
That also does not work in the case that a password is needed: it doesn't prompt, it just fails at startup. So I seem to be stuck with having one password for "all" encrypted datasets (ok since there's only one) and explicitly prompting for it. At least I got that working on one system recently, but forgot the details, and now it's down with a hardware failure and I'm trying to solve it again on another system.
I would like to have a way to log in as a particular user and then unlock that user's home directory (which should be a separate dataset) with the same password. From the command line as part of the normal login process. Any ideas?
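One approach I have seen discussed (untested here; the hook path and dataset layout are assumptions) is a pam_exec hook that feeds the login password to zfs load-key, so a home dataset whose passphrase matches the login password unlocks during login:

```
# /etc/pam.d/system-auth would gain a line like this (assumption):
#   auth optional pam_exec.so expose_authtok /usr/local/bin/unlock-zfs-home.sh

# /usr/local/bin/unlock-zfs-home.sh (sketch; dataset path is an assumption)
#!/bin/sh
# With expose_authtok, pam_exec passes the password on stdin,
# and PAM_USER holds the name of the user logging in.
zfs load-key "zroot/data/home/${PAM_USER}"
```

Mounting the dataset after the key loads (and unloading it on logout) would still need separate handling, e.g. in the session phase.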
- Please sign your comments, and use dashes for replies instead. If you want to load all keys, just use zfs-load-key.service. Make sure the datasets are mounted at boot.
- Francoism (talk) 12:54, 14 December 2019 (UTC)
What does this actually mean in practice? Does it mean that updates block for everyone using Arch? Or that updates for zfs users are blocked? Or that updates for zfs users go ahead but things break, and if so, what? Or something else? Beepboo (talk) 10:39, 22 March 2020 (UTC)
So it appears that with the dkms method at least, upgrades of the linux package always succeed (assuming non-root filesystems), but the zfs modules won't load until the dkms package supports that kernel version. This means that you'll boot, but zfs list etc. will fail. Beepboo (talk)
Tip: Add an IgnorePkg entry to pacman.conf to prevent these packages from upgrading when doing a regular update.
Why would this be useful? I.e. why wouldn't we want these packages upgrading?
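Presumably the rationale is the dkms lag described above: if a new kernel arrives before a matching zfs module, holding the relevant packages back keeps zfs loadable. A pacman.conf sketch; which packages to list depends entirely on the kernel and zfs packages you actually use:

```
# /etc/pacman.conf (sketch; package names are examples)
[options]
IgnorePkg = linux linux-headers
```

Pinned packages can still be upgraded deliberately once the module catches up, e.g. with pacman -S linux linux-headers.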
Too much memory section
I think the entry on too much memory could do with being revised. E.g. should the entries (min/max) really be added as kernel parameters (e.g. grub) or via options entries in /etc/modprobe.d?
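For comparison, the modprobe.d form would be a one-line options entry rather than a kernel command line parameter; the byte values below are examples only:

```
# /etc/modprobe.d/zfs.conf (sketch)
# Bound the ARC between 512 MiB and 4 GiB (values are illustrative)
options zfs zfs_arc_min=536870912 zfs_arc_max=4294967296
```

Changes take effect when the module is next loaded (or at boot; regenerating the initramfs matters if the module is loaded early).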
Is it really beneficial to set aclinherit=passthrough on top of acltype=posixacl?
Here's the explanation of the attribute from the latest ZFS (0.8.3) manual:
aclinherit=discard|noallow|restricted|passthrough|passthrough-x
Controls how ACEs are inherited when files and directories are created.
- discard does not inherit any ACEs.
- noallow only inherits inheritable ACEs that specify "deny" permissions.
- restricted (the default) removes the write_acl and write_owner permissions when the ACE is inherited.
- passthrough inherits all inheritable ACEs without any modifications.
- passthrough-x has the same meaning as passthrough, except that the owner@, group@, and everyone@ ACEs inherit the execute permission only if the file creation mode also requests the execute bit.
When the property value is set to passthrough, files are created with a mode determined by the inheritable ACEs. If no inheritable ACEs exist that affect the mode, then the mode is set in accordance with the requested mode from the application. The aclinherit property does not apply to POSIX ACLs.
So the difference between restricted and passthrough is that the latter preserves write_acl and write_owner, which seem to be Solaris standards. And it explicitly says 'The aclinherit property does not apply to POSIX ACLs'.
It seems that the effect of this kernel module option was deprecated and then removed by this PR: https://github.com/openzfs/zfs/pull/9609/files
From the documentation comment (which was also removed by that commit), I assume that the kernel module now respects the system's own block device level scheduler settings.
I'm not sure where an appropriate place to add this information would be, but I struggled for a little while trying to find out how I could pass additional options to the export, since we don't use /etc/exports. After looking at the manpage, I figured out that any options set in the sharenfs property would be used. I think this alone would be useful to put in, even as a blurb.
Also, it looks like there's a bug in the sharenfs handling code. Per this issue, you have to specify the "rw" or "ro" option at the end of the others (sec=krb5 in my case) for them to take effect at all. That is, my sharenfs property ends up being 'sharenfs="sec=krb5,email@example.com/24"'.
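To make the ordering point concrete, a hedged sketch (dataset name and network are placeholders, not values from the report above):

```
# Reported to work: access-mode option placed last
zfs set sharenfs="sec=krb5,rw=@192.168.1.0/24" tank/media

# Reported NOT to apply the sec= option: rw placed first
# zfs set sharenfs="rw=@192.168.1.0/24,sec=krb5" tank/media
```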