Latest comment: 11 September 2023 by Harvie in topic Power loss claims

Adding warning about F2FS potentially being unsafe?

I know F2FS is probably a less popular choice than Btrfs (and other file systems) for an Arch install, but Btrfs justifiably has warnings listed on its page about its stability, especially in certain circumstances (and I have had a Btrfs filesystem become corrupted beyond repair before), so I think F2FS's issues should be mentioned as well. I recently started experimenting with F2FS on my Arch install, and it not only corrupted the file system but also became impossible to mount, failing with a "can't find valid checkpoint" error and dropping me to the emergency shell in the initramfs. This happened after I uncleanly shut down the system while the rootfs was mounted, and it seems that fsck.f2fs corrupted it even further on the subsequent boot (neither the -a nor the -f option helped). Luckily, I didn't lose any valuable data, but it seems that others were not so lucky with this filesystem, hitting the same error and losing their data [1], especially when using encryption [2][3]. With that in mind, wouldn't it be good to note this in the wiki, possibly with a visible warning at the beginning? I'm not sure whether the issue would still occur with the 4.15 kernel, but I certainly wouldn't consider the file system 100% safe for regular use, especially in comparison to Btrfs, which is still hinted to be so. Faalagorn / 13:47, 30 January 2018 (UTC)
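For reference, the failure and the repair attempts described above look roughly like this (a sketch only; the device name is a placeholder, and fsck.f2fs is part of f2fs-tools):

```shell
# Hypothetical device name; substitute your actual F2FS root partition.
dev=/dev/sdXn

# After the unclean shutdown, mounting fails with "Can't find valid checkpoint":
mount "$dev" /mnt

# Repair attempts with fsck.f2fs; per the report above, neither helped here.
fsck.f2fs -a "$dev"   # -a: basic automatic check
fsck.f2fs -f "$dev"   # -f: force a full check and repair
```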

I would have appreciated it if someone had told me that Btrfs is not suitable for partitions smaller than 100 GB because of how metadata and snapshots are handled. Tallero (talk) 19:18, 5 November 2018 (UTC)
We should probably move the Btrfs discussion elsewhere, but the upstream Btrfs wiki indicates that Btrfs is fine below 16 GiB with specific options and above 16 GiB with the default options. Chunks are allocated 1 GiB at a time, so I'm not sure why < 100 GB would be a problem. Bobpaul (talk) 17:46, 6 November 2018 (UTC)
If you have no space left, balancing won't start, and you basically won't know in advance how much disk space you have to free to make it work again. On Ubuntu, apt takes Btrfs snapshots on major upgrades without checking sizes (because yay, snapshots), so it is really easy to fill the disk; the same goes for many other Btrfs snapshot utilities. The last time I ran into similar problems, the ArchWiki page about Btrfs was not complete enough to let me solve the task in less than two or three hours. I cannot provide an actual reproduction of the bug because I learned from my mistakes and never used Btrfs on small partitions again.
Honestly, I cannot recommend to anyone a file system that, after ten years, still does not display the correct amount of free space in df. Anyway, I posted about this same problem yesterday on the Btrfs discussion page. It is so common that "solutions" for it are listed in its FAQ, too. Tallero (talk) 23:41, 6 November 2018 (UTC)
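For context, the df discrepancy and the usual workaround look roughly like this (a sketch; requires btrfs-progs and a mounted Btrfs filesystem, here assumed to be at /):

```shell
# df only sees raw block counts, which Btrfs's chunk allocation makes misleading:
df -h /

# Btrfs's own tools report allocation per chunk type (data/metadata/system):
btrfs filesystem df /
btrfs filesystem usage /

# The common workaround when the disk is "full" but df disagrees: compact
# data chunks that are at most 10% used so their space returns to the
# unallocated pool.
btrfs balance start -dusage=10 /
```

Note that when the filesystem is completely full, even the balance can fail to start, which is exactly the trap described above.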
(I moved your reply to keep it in the thread.) That's a fair complaint, but it is a problem with lack of free space, not partition size. ZFS, Btrfs, and several other file systems struggle once the partition passes a certain level of fullness, even on multi-TB arrays. These file systems have their place, but that place is usually "kept below 50% utilization." I do disagree with SUSE and Ubuntu choosing this file system as the default for desktop users, but it's a great fit for my home server. Bobpaul (talk) 01:10, 7 November 2018 (UTC)
If there are citable issues (known bugs, known problematic configurations, etc.), then I think it's reasonable to add a warning. If it's simply that you personally had a problem, then that's not very useful; it's hard for someone to look at an individual's anecdote and know whether it was a filesystem problem or an external issue like a hardware failure. That said, it looks like you have citations for some known issue(s), so go ahead and write up a warning if you feel it's justified. Don't be afraid to edit wikis; that's why they're here. Bobpaul (talk) 18:02, 6 November 2018 (UTC)
I think a big problem with F2FS is the general lack of anything citable. My impression is that F2FS is primarily used in some Android smartphones, and most of the knowledge and experience of actually using F2FS is kept inside Samsung and Google. I've had a ton of weird problems with F2FS, and I've been entirely unable to find relevant information. Rosvall (talk) 08:37, 16 September 2022 (UTC)
I run into issues with F2FS very frequently, especially after an unclean shutdown and especially when using compression. The bug is very easy to reproduce: create an Arch install with an F2FS root, install a bunch of things, force-shutdown your computer (or VM), and restart. If it doesn't fail then, it'll fail afterwards. Aviallon (talk) 07:22, 24 September 2020 (UTC)
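The reproduction steps above can be sketched in a throwaway QEMU VM, so no real hardware is put at risk (a sketch only; the disk image path is a placeholder, and it assumes an existing Arch image with an F2FS root):

```shell
# Hypothetical disk image with an F2FS root; substitute your own.
img=arch-f2fs.qcow2

# Boot the VM, then generate write activity inside the guest
# (e.g. install a bunch of packages with pacman).
qemu-system-x86_64 -enable-kvm -m 2G -drive file="$img",format=qcow2 &
qemu_pid=$!

# Simulate sudden power loss by killing QEMU instead of shutting down cleanly.
kill -9 "$qemu_pid"

# Boot again; per the report above, mounting the F2FS root may now fail
# (often with "Can't find valid checkpoint"), dropping to the initramfs shell.
qemu-system-x86_64 -enable-kvm -m 2G -drive file="$img",format=qcow2
```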

Power loss claims

Page says following: "F2FS has a weak fsck that can lead to data loss in case of a sudden power loss"

I had some of the described issues with f2fs-tools 0.15, but the situation seems to be better with 0.16. I haven't checked the f2fs-tools changelog, but things might be a bit better now. --Harvie (talk) 16:01, 11 September 2023 (UTC)

USB-flash keys & memory-cards...

One could get the impression that F2FS is not suitable for USB flash drives and other flash memory cards, since this page states that F2FS is only suitable for FTL-based flash drives and that this only includes SCSI/SATA/PCIe/NVMe drives. Is this really true? That it's not suitable for USB flash drives? Some claim that it is exactly on these "dumb" flash drives that F2FS is strongest. This should be made clearer on the page.
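For anyone who wants to try it anyway, creating an F2FS filesystem on a USB stick is straightforward (a sketch; requires f2fs-tools, and the device name is a placeholder — verify it with lsblk first, since formatting erases the drive):

```shell
# Hypothetical device name; double-check with lsblk before running.
dev=/dev/sdX

# Create the filesystem with a label; -f forces overwriting any existing
# filesystem signature on the device.
mkfs.f2fs -f -l usbdata "$dev"

# Mount it to confirm it works.
mount "$dev" /mnt
```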

—This unsigned comment is by MrCalvin (talk) 2023-03-05T14:51:37. Please sign your posts with ~~~~!