Difference between revisions of "Talk:ZFS"

From ArchWiki

Latest revision as of 10:35, 31 March 2019

Bindmount

Where does this file go and what other steps are required?

I would expect: /etc/systemd/system/

Then: systemctl enable srv-nfs4-media.mount

Msalerno (talk) 02:36, 22 October 2015 (UTC)
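For reference, a minimal sketch of what such a bind-mount unit might look like (the file path, the ZFS dataset mountpoint, and the NFSv4 export path are assumptions based on the unit name in this thread; systemd requires the unit name to encode the mount point, so srv-nfs4-media.mount mounts /srv/nfs4/media):

```ini
# /etc/systemd/system/srv-nfs4-media.mount  (illustrative)
[Unit]
Description=Bind mount a ZFS dataset into the NFSv4 export tree

[Mount]
# What= is the source (here, an assumed ZFS dataset mountpoint);
# Where= must match the unit name: srv-nfs4-media -> /srv/nfs4/media.
What=/mnt/media
Where=/srv/nfs4/media
Type=none
Options=bind

[Install]
WantedBy=multi-user.target
```

After placing the file in /etc/systemd/system/, it would indeed be enabled with systemctl enable srv-nfs4-media.mount, as suggested above.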

resume hook

I think there is a typo on the page: it should state the resume hook instead of hibernate, but the limitation still applies. Can anyone confirm that the resume hook must appear before filesystems? Ezzetabi (talk) 09:49, 18 August 2015 (UTC)

Automatic build script

I'm fine with deleting the scripts. I only posted them because graysky's script never worked for me. Long stuff like this would be useful if the ArchWiki featured roll-up text. Severach (talk) 10:07, 9 August 2015 (UTC)

I'd suggest to maintain it in a github repo. You get better versioning, syntax highlighting, cloning, etc. -- Alad (talk) 12:46, 9 August 2015 (UTC)
...or an anonymous gist if you don't have nor want to create a GitHub account. — Kynikos (talk) 08:40, 10 August 2015 (UTC)
Isn't that exactly what DKMS is doing? There are DKMS packages in the AUR. Das j (talk) 20:01, 10 January 2016 (UTC)

Automatic snapshots

zfs-auto-snapshot-git (AUR) seems to have disappeared from the AUR. I haven't been able to find any information on why it was deleted; does anyone know? In any case, it should probably be removed from this page. warai otoko (talk) 03:21, 2 September 2015 (UTC)

On further inspection, looks like it may have gotten lost in the transition to AUR4. It should be resubmitted if we want to continue recommending it here; I've found it useful, at any rate. Warai otoko (talk) 04:43, 2 September 2015 (UTC)
I've recreated it. I use this script as well. --Chungy (talk) 02:49, 3 September 2015 (UTC)

Configuration

The configuration section has far too little information about which systemd unit(s) to enable. Thanks to @kerberizer I finally managed to get the mounts working with the command

# systemctl preset $(tail -n +2 /usr/lib/systemd/system-preset/50-zfs.preset | cut -d ' ' -f 2)

Z3ntu (talk) 15:21, 27 October 2016 (UTC)
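As a sketch of what that preset command does under the hood: it skips the preset file's first line and feeds the unit names to systemctl preset. The file contents below are an assumption for illustration; check the actual /usr/lib/systemd/system-preset/50-zfs.preset shipped by your package.

```shell
# Mock copy of 50-zfs.preset (contents assumed for illustration).
cat > /tmp/50-zfs.preset <<'EOF'
# ZFS is out-of-tree, so we enable its units by default.
enable zfs-import-cache.service
enable zfs-import-scan.service
enable zfs-mount.service
enable zfs-share.service
enable zfs-zed.service
enable zfs.target
EOF

# tail -n +2 drops the leading comment line; cut takes the unit name
# (the second space-separated field of each "enable <unit>" line).
tail -n +2 /tmp/50-zfs.preset | cut -d ' ' -f 2
# prints the six unit names, one per line
```

On a real system the extracted names are then passed to systemctl preset, which applies the enable/disable policy from the file.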


@Z3ntu I have ZFS running on a few systems and never had to enable any services; it should work by default. If not, then file a bug on the package.

Justin8 (talk) 22:04, 27 October 2016 (UTC)

@Justin8 I tried it both in a virtual machine and on a physical computer: if you don't enable any services (I use "zfs-linux" from the archzfs repo), create a pool, and reboot, the pool no longer exists (per zpool status), and the pools don't get mounted without the zfs-mount service (or whatever it is called). I found a related issue on github: https://github.com/archzfs/archzfs/issues/61

Z3ntu (talk) 08:34, 28 October 2016 (UTC)


There seems to be a new systemd target zfs-import.target which must be enabled in order to auto-mount? Otherwise zfs-mount.service will be executed before zfs-import-cache.service on my machine and nothing will be mounted. --Swordfeng (talk) 12:55, 8 November 2017 (UTC)

I think the section about systemd units should be rewritten to remove the old stale information and bring the required command line to the fore, as mentioned in the github issue linked from the page and repeated above by @Z3ntu. I've just been experimenting with ZFS and wasted a little time on this that could have been avoided if the page had been updated back in 2016. I haven't changed the page except to add the required command line there, in case the other text is still relevant in ways I don't realise. I have just started using ZFS myself. starfry (talk) 16:07, 31 May 2018 (UTC)

I’ve set up ZFS recently and the systemctl enable commands from the Wiki page have worked fine for me so far. What do you mean by “old stale information,” and why is systemctl preset […] a “required command line?” —Auerhuhn (talk) 16:33, 31 May 2018 (UTC)

That's why I never deleted anything from the page. I found that the systemctl enable commands worked up to the point that I rebooted: I discovered that the zpools were not imported on boot. Searching for information led me to the command line in the github post, and that did work for me. I thought I should raise its profile a little because I wasted a few hours on it. Actually, I realised I also hadn't enabled the three services listed separately, just the ones at the top of the section (there are six services referenced by the github issue). That is probably why I had the problem! Like I said, I have only just started with ZFS (I am testing in a VM with files rather than real devices), and it is possible that doing it in the small hours of the morning wasn't a good idea. The info on the page as it was left me asking more questions, which were answered by the github issue and, in particular, that command-line sequence. You don't need that command line, but you do need the systemd services that it enables (you could enable them by hand if you preferred). Maybe you don't need all six of them. But, as it was, it wasn't clear (to me). starfry (talk) 16:07, 31 May 2018 (UTC)
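If someone prefers enabling the units by hand rather than via systemctl preset, the equivalent explicit commands might look like this. The unit names are assumptions taken from the preset file discussed in this thread, and not all six are necessarily needed on every setup (e.g. cache-based vs scan-based import):

```shell
# Enable pool import, mounting, sharing and the ZFS Event Daemon
# explicitly instead of relying on the package's systemd preset.
systemctl enable zfs-import-cache.service zfs-import-scan.service \
    zfs-mount.service zfs-share.service zfs-zed.service zfs.target
```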

Scrub

The advice to scrub at least once a week is completely unsubstantiated and probably incorrect in almost all situations. Advice should be accompanied by some argumentation and preferably links to support the claim.

There is a good blog post from Oracle about when and why (or not) to scrub: https://blogs.oracle.com/wonders-of-zfs-storage/disk-scrub-why-and-when-v2

I wanted to edit the page to include the most important bits about scrubbing, but figured I'd throw it up for discussion first. What do people think about this? Mouseman (talk) 13:15, 21 October 2018 (UTC)

I have no strong opinion but the most pragmatic/helpful part of Oracle’s article appears to be the list of three tips near the end. I feel paraphrasing those three points in the wiki would be a good thing, together with an external link to Oracle’s article (which is pretty good) to cover the details. — Auerhuhn (talk) 13:51, 21 October 2018 (UTC)
Thanks for the reply. I agree, although I was thinking to include the 'Should I do this' too. I'll let this sit here for a few days and see what else turns up and edit the page next week or weekend. — Mouseman (talk) 17:38, 21 October 2018 (UTC)
I was curious when I saw the factual accuracy banner. I've been reading Aaron Toponce's guide to ZFS administration which is an extremely thorough walkthrough. In his chapter on scrubbing and resilvering he lists two heuristics. He suggests, "[t]he recommended frequency at which you should scrub the data depends on the quality of the underlying disks. If you have SAS or FC disks, then once per month should be sufficient. If you have consumer grade SATA or SCSI, you should do once per week." That might be the source of the suggestion? I'd love to hear more from people who have more experience with ZFS. --Metatinara (talk) 04:43, 23 November 2018 (UTC)
Your reply reminded me that I wanted to edit the page as discussed above. I agree that guide is very good; it helped me greatly when I got started with ZFS. But again, I have to challenge the advice. On what basis should consumer-grade hard disks be scrubbed once a week? As far as I am concerned, there is no evidence, no data, to support such a claim. How likely is bitrot to occur due to degradation or solar flares? EMP? How many bits can flip before data becomes irreparable? If we had those numbers from different vendors in different situations, we could actually make an educated guess at how often scrubs should take place. I don't know of any such data or research. I know I am only one guy with limited experience, but here it is: I have been using ZFS for about 6 years in three different configurations, all consumer or prosumer hardware. Before that, I used parchive and later par2 for, I don't know, 20-odd years, to create 10% parity sets on important live data and offline backups, so that I could repair corruption. I would stash away old hard disks as backups like this. In all that time, I had to use par2 only once, because a hard drive went bad and ran out of reallocated sectors. And it wasn't even an old disk; it was still under warranty. Not once did a scrub actually have to repair something. Not once did I ever find evidence of bitrot. That doesn't mean it doesn't exist, because I know it does, but based on my own experience, I think it is extremely unlikely to occur, and when it does, ZFS can fix it unless it's too much; but how long does that take? So based on my own experience, I am running it once every few months, and I'll likely decrease the frequency to once every 6 months or so. Mouseman (talk)
Those are some great thoughts and anecdotes; thank you for sharing them! I think the way you went about your edit is good. Giving some resources to help you make the decision seems like a better approach than "do this without any justification." I appreciate the direct approach that most of the Arch Wiki articles take, but in this case, it seems like more information and less prescription is the better approach. — Metatinara (talk) 17:36, 24 November 2018 (UTC)
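For anyone who settles on a frequency from the discussion above: a scrub is started manually with zpool scrub <pool> and its progress checked with zpool status. A periodic scrub can be sketched as a systemd timer/service pair; the unit names and the monthly interval below are illustrative placeholders, not a recommendation:

```ini
# /etc/systemd/system/zfs-scrub@.service  (illustrative)
[Unit]
Description=Scrub ZFS pool %i

[Service]
Type=oneshot
ExecStart=/usr/bin/zpool scrub %i

# /etc/systemd/system/zfs-scrub@.timer  (illustrative)
[Unit]
Description=Periodic scrub of ZFS pool %i

[Timer]
OnCalendar=monthly
Persistent=true

[Install]
WantedBy=timers.target
```

The template instance would then be enabled per pool, e.g. systemctl enable zfs-scrub@tank.timer for a pool named tank.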

xattr=sa under tuning?

I've seen a lot of people setting xattr=sa, which disables the creation of hidden subdirectories for storing extended attributes and stores them directly in inodes instead. This has performance advantages and makes the output of snapshot diffs cleaner. Should we add it to the Tuning section? Hinzundcode (talk) 22:12, 20 March 2019 (UTC)
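For context, the property in question is set per dataset; a minimal sketch, where the pool/dataset name is a placeholder:

```shell
# Store extended attributes as system attributes in the inode instead of
# in hidden directories; affects newly written xattrs, not existing ones.
zfs set xattr=sa tank/data

# Verify the current value of the property:
zfs get xattr tank/data
```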

Why use partition labels?

The page mentions creating a pool using partition labels, but why? It shouldn't be recommended or necessary to create partitions manually, as ZFS will create or overwrite the partition table itself. Should this section be removed? Francoism (talk) 10:35, 31 March 2019 (UTC)
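For comparison with the partition-label approach questioned above, the common alternative is to hand ZFS whole disks referenced by persistent /dev/disk/by-id names and let it write its own partition table; the device names here are placeholders:

```shell
# Create a mirrored pool from whole disks using stable by-id names;
# ZFS writes its own GPT partition table on each disk it is given.
zpool create tank mirror \
    /dev/disk/by-id/ata-DISK_SERIAL_1 \
    /dev/disk/by-id/ata-DISK_SERIAL_2
```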