Talk:JFS

Initial draft

I have left out two points from the JFS wiki article:

  1. Changing of virtual memory parameters to optimize file system access.
  2. Claims of JFS 'losing' files.

I have not included ways to optimize VM parameters because the JFS port itself has been optimized for the Linux kernel using default values for parameters like /proc/sys/vm/swappiness. Also, changing these parameters can adversely affect certain programs and machines with certain workloads. If people really want to know how to do this type of thing, I'll consider writing up a separate wiki article on that type of optimization for file systems.
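
For anyone who does want to experiment with such a parameter anyway, here is a minimal sketch of inspecting and changing swappiness; the value 10 is purely illustrative, not a recommendation:

  cat /proc/sys/vm/swappiness     # show the current value (the kernel default is 60)
  sysctl -w vm.swappiness=10      # change it at runtime, as root; 10 is only an example value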

As for claims made about JFS losing files, I have never experienced this in the time I've been using JFS, and I am unable to find sources detailing this problem beyond hearsay in forum posts. My personal feeling is that these claims of JFS losing files are probably the result of a badly written script, backup process or careless command execution. Personally, I have had files 'go missing' on me a few times only to realize that I had done something to erase them. This seems a far more likely explanation than JFS just up and removing files at random without the file system itself being damaged in some way. If someone does indeed have this problem, send me an email outlining all the details surrounding the loss and I will see about trying to replicate the problem.

I have also seen a claim of JFS 'ruining' an 'expensive' hard drive with very valuable data. This person went on to say that he got a quote of 30k to recover the data, but that it was too expensive, etc. I don't believe this story. How come this data wasn't backed up if it was so extremely valuable? I have accidentally deleted a JFS partition and was still able to recover much of the data from that drive using jfsrec. Anyway, I am hoping these claims don't work their way into this JFS article without proper citations to actual file system tests whose results can be replicated.

--PDExperiment626

Regarding point two: see JFS users needed for testing. Feel free to ask me for anything else that I didn't think of when posting that.

--byte

Byte, thanks for bringing that to my attention; I have noted the issue in the JFS article. However, as I noted in JFS users needed for testing, this may very well be a problem with the device mapper subsystem or a faulty package install. I was able to replicate the problem only on a system using the device mapper, and the issue was fixed by reinstalling all the packages that used the man3 directory. Anyway, thanks again for posting the details so that the problem could be replicated :).
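
A sketch of one way to do that reinstall, assuming the man3 directory in question is /usr/share/man/man3:

  pacman -Qqo /usr/share/man/man3/* | sort -u                  # list the packages owning files there
  pacman -S $(pacman -Qqo /usr/share/man/man3/* | sort -u)     # reinstall them, as root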

--PDExperiment626

Shouldn't nodiratime be included with the noatime tip? Admittedly, it doesn't give as much of a performance increase as noatime, but, depending on what you're doing, it CAN still be a substantial enhancement.
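
For reference, a minimal /etc/fstab sketch with both options; the device and mount point are placeholders:

  # /dev/sdXY and /home are examples only; adjust to your own layout
  /dev/sdXY   /home   jfs   defaults,noatime,nodiratime   0  2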

Also, what exactly do you mean by "more testers"? People to purposefully crash systems? I have also been using JFS on everything but /boot (and one oddball partition) for a few months now (3 large standalone partitions, and 4 in an LVM), have had a number of crashes, and, to my knowledge, at least, have never lost anything whatsoever.

--Grndrush 16:06, 25 March 2008 (EDT)

Link 7

Are we sure link 7 is relevant? According to http://en.wikipedia.org/wiki/JFS_(file_system), HP-UX has another, different filesystem named JFS that is actually an OEM version of Veritas Software's VxFS. I think link 7 may be referring to the wrong JFS.

—This unsigned comment is by Wyxj69vm (talk) Revision as of 18:52, 21 May 2009. Please sign your posts with ~~~~!

nodiratime

Grndrush: enabling noatime also enables nodiratime.

—This unsigned comment is by Wyxj69vm (talk) 18:56, 21 May 2009. Please sign your posts with ~~~~!

JFS losing files

I've found some evidence of JFS losing files: https://www.usenix.org/events/usenix05/tech/general/full_papers/prabhakaran/prabhakaran.pdf

Read the section on JFS from page 14 to 15.

—This unsigned comment is by Wyxj69vm (talk) 11:53, 22 May 2009. Please sign your posts with ~~~~!

I've reproduced the incident of JFS losing files as described in that pdf, and have edited the wiki accordingly.
—This unsigned comment is by 1pwl1z8h (talk) 14:54, 1 June 2009. Please sign your posts with ~~~~!

CPU Usage

I have experienced much less CPU usage and faster reads with ext4 and the default I/O scheduler than with JFS and the deadline I/O scheduler. I feel the claim that JFS uses less CPU may be slightly exaggerated; however, I am hesitant to edit the Wiki based on my results alone. I have also had some problems with losing files.

I have edited the Wiki to include its noted slowdown when working with many files, as this is verified by at least two sources, one of which is in the references section. I added one more reference as well.

—This unsigned comment is by Lupusarcanus (talk) 01:28, 14 April 2011. Please sign your posts with ~~~~!

I have used JFS for years and strongly believe that JFS uses much less CPU, especially after years of usage. For SSD disks, fragmented files are no longer an issue. I have not experienced file loss. Triplc (talk) 04:33, 18 October 2017 (UTC)
"JFS uses much less CPU" means less CPU; it *does not* mean faster. I believe it uses less RAM too. It is good for low-end computers, or for laptop batteries. Triplc (talk) 10:30, 28 January 2021 (UTC)

Definitely slower than ext4

The article is a bit misleading. I've been using JFS for a week with the deadline scheduler and it is definitely slower than ext4, and even than btrfs. A lot slower than ext4.

Optimizing the pacman database takes about 3 times longer, resolving dependencies about 5 times longer; installing the packages may be the same, I have not measured it, but it feels slower as well. Maybe we should do some proper research on this? Run some real benchmarks on clean systems and discuss them in the forum or elsewhere?
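
A reproducible comparison could be as simple as timing the same small-file workload on each filesystem; a rough sketch, where the archive name and mount point are placeholders and caches are dropped between runs:

  sync; echo 3 > /proc/sys/vm/drop_caches              # as root: drop caches so runs are comparable
  time tar xf some-source-tree.tar.xz -C /mnt/test     # unpack a tree of many small files
  time rm -rf /mnt/test/*                              # and time removing them again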

Also I should add that CPU usage remains exactly the same on most operations.

—This unsigned comment is by CarterCox (talk) Revision as of 07:19, 29 April 2018. Please sign your posts with ~~~~!

JFS Performance

As others have noted, using the deadline scheduler in 2021 drastically reduces JFS performance. BFQ, however, lets it live up to its "fast" claim. I don't know where the recommendation to use deadline came from, but it is very out of date and needs to be changed.

—This unsigned comment is by Mogsie (talk) 03:22, 3 April 2021 (UTC). Please sign your posts with ~~~~!

Do you have any sources for that? References always add to the why-aspect of the ArchWiki and encourage the user to read up on the topic.
-- NetSysFire (talk) 03:42, 3 April 2021 (UTC)
Only my own testing, moving 2TB+ with rsync between JFS drives. Using deadline worked fine until load was induced on the source drive, then everything ground down to about 6MB/s. Changing the scheduler to BFQ on both the source and destination drives instantly resumed the rsync copy at full speed, above 110MB/s. I realise that in 2021 there are probably no documented, quotable proofs concerning JFS performance, but deadline itself has changed in the kernel since the original recommendation was made and is worthy of scepticism as out-of-date information.
—This unsigned comment is by Mogsie (talk) 03:56, 3 April 2021 (UTC). Please sign your posts with ~~~~!
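
For anyone wanting to reproduce the comparison, the active scheduler can be checked and switched per block device through sysfs; a minimal sketch, where sda is a placeholder:

  cat /sys/block/sda/queue/scheduler           # the active scheduler is shown in [brackets]
  echo bfq > /sys/block/sda/queue/scheduler    # as root: switch this device to BFQ (not persistent across reboots)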

Update after a bit of time testing and finding the problems with JFS: the performance problem with JFS comes when the drives are under heavy load (100% I/O), and is most apparent when all available system memory is used (for cache/buffers etc.). The "new" multi-queue schedulers seem to expect the filesystem to handle multiple requests gracefully. JFS appears to only handle one queue. As a result, a single core gets used by an apparently inefficient k-flush which is either being refilled faster than it can finish, or for some reason is completing very slowly (or both?). The I/O queue details are a little in-depth, but the end result is a blocked kernel flush that results in very slow reads/writes from the drive.

My uninformed guess is that JFS expects a static queue, but the mq schedulers keep changing the requests when they can't complete in time (I could be 100% wrong on that). BFQ/Kyber only slightly help, apparently by scattering requests in that flush (only a guess) more than mq-deadline, but the results still aren't great under load. I haven't tried "none" as a scheduler. If anyone wants to experiment, that's probably the only remaining option to get it working well under load, but I suspect it will suffer the same problems.

—This unsigned comment is by Mogsie (talk) 16:56, 21 April 2021 (UTC). Please sign your posts with ~~~~!
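
If someone does pick up the suggestion to try "none", a persistent way to set a scheduler while testing is a udev rule; a minimal sketch, where the file name, device match, and chosen scheduler are all just examples:

  # /etc/udev/rules.d/60-ioscheduler-test.rules (example name)
  ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="none"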