Talk:Solid state drive

Don't use noop

The noop scheduler performs slowly, but as a result it greatly frees up CPU cycles. In the real world this will not increase the speed of your reads/writes compared to CFQ, but it will consume fewer CPU resources. You can benchmark the deadline scheduler, which MAY increase performance in some circumstances. By real-world benchmarks, I mean anything but hdparm. —This unsigned comment is by Tama00 (talk) 22:38, 21 December 2011. Please sign your posts with ~~~~!
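For a real-world benchmark of the kind mentioned above, fio is a common choice; a minimal random-read sketch (the file name, size and runtime here are illustrative, not from the original discussion):

$ fio --name=randread --filename=/tmp/fio-test --size=1G --bs=4k \
      --rw=randread --ioengine=libaio --direct=1 --runtime=60 --time_based

Running the same job once per scheduler gives comparable numbers, unlike hdparm's sequential buffered read.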

Interesting assertion... do you have any data or a source to back it up?
Graysky 17:20, 21 December 2011 (EST)
It seems that the cfq scheduler already knows what to do when an SSD is detected, so there is no need to change it.
raymondcal 2012, May 29
CFQ has some optimizations for SSDs: if it detects non-rotational media which can support a higher queue depth (multiple requests in flight at a time), it cuts down on idling of individual queues; all the queues move to the sync-noidle tree and only tree idle remains. This tree idling provides isolation with buffered write queues on the async tree.
https://www.kernel.org/doc/Documentation/block/cfq-iosched.txt
ushi 2013, November 03
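Whether the kernel has classified a device as non-rotational (which is what CFQ keys off) can be checked directly; sda is a placeholder for your device:

$ cat /sys/block/sda/queue/rotational
0

0 means non-rotational (SSD), 1 means rotational.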
So does anyone have any good data? My research says deadline is best, and "CFQ has some optimizations" doesn't mean it is better than the others.
MindfulMonk (talk) 22:38, 5 July 2014 (UTC)
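For anyone who wants to test this themselves, the active scheduler can be inspected and changed at runtime through sysfs (sda is a placeholder; the brackets mark the active scheduler, and the change does not persist across reboots):

$ cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
$ echo deadline | sudo tee /sys/block/sda/queue/scheduler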

TRIM and RAID

The wiki article linked in the warning in the section Enable continuous TRIM by mount flag says: "A possible patch has been posted on July 19, 2015."

The quoted article seems to say that there is a serious kernel bug which manifests when SSDs are used with Linux software RAID.

Is that a confirmed kernel bug? If it is, shouldn't the wiki point this out? —This unsigned comment is by Sja1440 (talk) 27 September 2015 08:26. Please sign your posts with ~~~~!

The bug manifested only with particular brands' SSD firmware, which were blacklisted. Since TRIM is a standard and other brands work fine, this issue was not regarded as a kernel bug, to my knowledge. Nonetheless, they may (have) merge(d) the patch to work around the firmware bugs. If someone has a related bug report where it is tracked, or a kernel commit, it would be useful to add, I agree. -Indigo (talk) 09:23, 27 September 2015 (UTC)
I found this Slashdot discussion (linked with [1]) where there's a link to a kernel commit, although they're talking about a bug in the firmware too, and in fact the devices seem to still be blacklisted. — Kynikos (talk) 02:26, 28 September 2015 (UTC)
Samsung's patches for the data corruption are now merged, but the full blacklist is still there in Linux 4.5 (https://github.com/torvalds/linux/blob/v4.5/drivers/ata/libata-core.c#L4223) for a lot of SSD brands, in particular all the popular Samsung 8-series drives (840|850 EVO|PRO). I still cannot find any information about the root issue or when the drives will be whitelisted. The article should warn users that all those SSD models should be avoided until there is a solution. Note that also --Nomorsad (talk) 11:03, 26 March 2016 (UTC)
The article does warn; I have updated it with your 4.5 link, thanks for the follow-up. They have added one more drive model to the blacklist since then.[2] --Indigo (talk) 11:19, 26 March 2016 (UTC)
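As an aside, whether a drive advertises TRIM at all, and whether it claims queued TRIM (the feature the blacklist disables), can be checked with hdparm; sda is a placeholder:

$ sudo hdparm -I /dev/sda | grep -i trim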

fstrim, btrfs and other fs

IMO, the table with TRIM support should note that running fstrim on btrfs produces different output than on other filesystems. If I run fstrim multiple times in a row on, for example, ext4, the first run reports "freed X bytes", while subsequent runs report "freed 0 bytes" and finish immediately. On btrfs, however, I always get the same number and every run takes as long as the first. I haven't tried filesystems other than these two, but for users who are not aware of this it can be confusing and suggest some misconfiguration. --Zopper (talk) 12:17, 1 Sep 2016 (UTC)
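To illustrate the ext4 behavior described above (the mount point and byte counts are placeholders, not captured output):

$ sudo fstrim -v /mnt/ext4
/mnt/ext4: X bytes trimmed
$ sudo fstrim -v /mnt/ext4
/mnt/ext4: 0 bytes trimmed

On btrfs, both runs would report roughly the same number.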

fstrim on non-ssd, e.g. QEMU

I'm not sure if this fits into this article or better into another one, but I think it should be mentioned that TRIM can be useful on non-SSD disks as well, e.g. QEMU disks backed by hard drives. This is of course not the same TRIM as on an SSD, but the intention is the same: inform the storage system about unused blocks, e.g. to avoid fragmentation or to shrink the image. Example from a KVM system with QEMU disks, labeled not as SSD but as SAS storage on the product page:

$ lsblk --discard
NAME    DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda            0        4K       1G         0
├─sda2         0        4K       1G         0
$ sudo fstrim -v -a
/: 390,8 MiB (409780224 bytes) trimmed on /dev/sda2
scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0-part2 -> ../../sda2

Ua4000 (talk) 18:00, 25 January 2021 (UTC)
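For completeness, discards from the guest only reach the host if the virtual disk is configured to pass them through; with plain QEMU that is usually done with discard=unmap on the drive. A minimal sketch (the image name and IDs are illustrative; libvirt exposes the same setting as the discard attribute of the disk driver):

$ qemu-system-x86_64 \
    -device virtio-scsi-pci \
    -drive file=disk.qcow2,if=none,id=hd0,discard=unmap \
    -device scsi-hd,drive=hd0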

Fstrim doesn't run on drives mounted via systemd mount units?

The article states: "The util-linux package provides fstrim.service and fstrim.timer systemd unit files. Enabling the timer will activate the service weekly. The service executes fstrim(8) on all mounted filesystems on devices that support the discard operation."

I just checked my journal with journalctl -u fstrim to see if periodic fstrim works as intended. I then noticed that only devices mounted via fstab seem to be trimmed, since the log says: "Finished Discard unused blocks on filesystems from /etc/fstab." Is the wiki wrong about this?

Noclueguy (talk) 17:15, 4 March 2024 (UTC)

The default behavior was changed upstream in v2.33 and modified again in v2.36.
It currently reads /etc/fstab first, then tries /proc/self/mountinfo if fstab is unavailable or empty. Piroro-hs (talk) 07:20, 7 March 2024 (UTC)
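The set of filesystems that would be trimmed can be verified without actually trimming anything, assuming a util-linux recent enough to have --dry-run:

$ sudo fstrim --all --verbose --dry-run

Newer versions also provide a --listed-in option to make the lookup order explicit.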