Talk:Solid state drive


Don't use noop

The noop scheduler performs slowly, but in exchange it greatly frees up CPU cycles. In the real world this will not increase the speed of your reads/writes compared to CFQ, but it will consume fewer CPU resources. You can benchmark the deadline scheduler, which MAY increase performance in some circumstances. By real-world benchmarks, I mean anything but hdparm. —This unsigned comment is by Tama00 (talk) 22:38, 21 December 2011‎. Please sign your posts with ~~~~!

Interesting assertion... do you have any data or a source to back it up?
Graysky 17:20, 21 December 2011 (EST)
It seems that the cfq scheduler already knows what to do when an SSD is detected, so there is no need to change it.
raymondcal, 29 May 2012
CFQ has some optimizations for SSDs: if it detects non-rotational media which can support a higher queue depth (multiple requests in flight at a time), it cuts down on idling of individual queues; all the queues move to the sync-noidle tree and only tree idling remains. This tree idling provides isolation from buffered write queues on the async tree.
https://www.kernel.org/doc/Documentation/block/cfq-iosched.txt
ushi, 3 November 2013
So does anyone have any good data? My research says deadline is best, and "CFQ has some optimizations" doesn't mean it's better than the others.
MindfulMonk (talk) 22:38, 5 July 2014 (UTC)
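For anyone wanting to benchmark the claims above: the active I/O scheduler can be inspected and switched at runtime through sysfs. A minimal sketch (sda is a placeholder device name, the output is illustrative, and the available schedulers depend on your kernel):

$ cat /sys/block/sda/queue/scheduler
noop deadline [cfq]
# switch to deadline; this is not persistent across reboots
$ echo deadline | sudo tee /sys/block/sda/queue/scheduler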

TRIM and RAID

The article linked within the warning in the section Enable continuous TRIM by mount flag says: "A possible patch has been posted on July 19, 2015."

The linked article seems to be saying that there is a serious kernel bug which affects SSDs being used with Linux software RAID.

Is that a confirmed kernel bug? If it is, then shouldn't the wiki point this out? —This unsigned comment is by Sja1440 (talk) 27 September 2015 08:26. Please sign your posts with ~~~~!

The bug manifested only with particular brands' SSD firmware, which were blacklisted. Since TRIM is a standard and other brands work fine, this issue was not regarded as a kernel bug to my knowledge. Nonetheless, they may (have) merge(d) the patch to work around the firmware bugs. If someone has a related bug report where it is tracked, or a kernel commit, it would be useful to add, I agree. -Indigo (talk) 09:23, 27 September 2015 (UTC)
I found this Slashdot discussion (linked with [1]) where there is a link to a kernel commit, although they are talking about a bug in the firmware too, and in fact the devices seem to be still blacklisted. — Kynikos (talk) 02:26, 28 September 2015 (UTC)
Samsung's patch for the data corruption is now merged, but the full blacklist is still there in Linux 4.5 https://github.com/torvalds/linux/blob/v4.5/drivers/ata/libata-core.c#L4223 for a lot of SSD brands, in particular all the popular Samsung 8-series drives (840/850 EVO/PRO). I still cannot find any information about the root cause or when they will be whitelisted. The article should warn users that all those SSD models should be avoided until there is a solution. --Nomorsad (talk) 11:03, 26 March 2016 (UTC)
The article does warn; I have updated it with your 4.5 link, thanks for the follow-up. They have added one more drive model to the blacklist since then.[2] --Indigo (talk) 11:19, 26 March 2016 (UTC)
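For anyone wanting to check whether their own drive matches one of the blacklisted model strings, the model reported by the kernel can be compared against that list, and the advertised discard capabilities inspected; a sketch (sda is a placeholder, the output is illustrative):

$ cat /sys/block/sda/device/model
Samsung SSD 850
$ lsblk --discard /dev/sda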

Remove the section on continuous trim

The section on continuous TRIM should be removed and a warning added to the TRIM timer section. There are about a million reasons why people shouldn't use continuous TRIM, and unless there is a very compelling reason to use it that I don't know of, that information doesn't seem useful enough to keep around and could cause a lot of harm.

Meskarune (talk) 13:30, 29 July 2016 (UTC)

Lack of information can cause a lot of harm as well, so I think we'd better keep the section. -- Lahwaacz (talk) 13:50, 29 July 2016 (UTC)
I've trimmed the warning a little, so now I think the only remaining argument against it is performance related, although the given link for Theodore Ts'o's opinion doesn't work anymore. -- Lahwaacz (talk) 16:42, 29 July 2016 (UTC)
The link does not work because gmane is currently unavailable (it is likely someone else will resume its web service, so the links will become available again).
I agree regarding continuous TRIM, it is important info. (I don't think we need the single quote from Ted Ts'o's removed link; it dates.)
Also, one may argue that since the identification and blacklisting of unreliable devices (some are explicitly whitelisted for certain TRIM methods too), the reasons to mount with discard have grown. Another reason to have continuous TRIM enabled: imagine your device runs at about ~66% capacity occupation. Since the drive does not keep a table of what was trimmed in the last run, each timed fstrim runs over a third of the drive, again and again. With a discard mount flag, any freed-up space is only trimmed once. How much difference this makes for wear levelling depends on the usage pattern: the more static the data on the drive (e.g. a mailing list archive like gmane), the larger the proportion of wear caused by fstrim. Even if wear levelling is not an issue with the device, letting it perform an action once instead of redundantly is a matter of efficiency. Whether you find this a compelling reason, I don't know. --Indigo (talk) 07:13, 30 July 2016 (UTC)
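For reference, continuous TRIM as discussed above is enabled with the discard mount flag; a sketch of an /etc/fstab entry (the UUID is a placeholder, ext4 is just an example filesystem):

# /etc/fstab: trim freed blocks immediately via the discard mount option
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,discard  0 1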

fstrim, btrfs and other fs

IMO, the table with TRIM support should note that running fstrim on Btrfs produces different output than on other filesystems. If I run fstrim multiple times in a row, for example on ext4, the first time I get "freed X bytes", on the subsequent runs I get "freed 0 bytes", and those runs finish immediately. On Btrfs, however, I always get the same number and it takes as long as the first run. I have not tried filesystems other than these two, but it can be confusing and suggest some misconfiguration to users who are not aware of this. --Zopper (talk) 12:17, 1 Sep 2016 (UTC)
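To illustrate the ext4 behaviour described above (the numbers are made up; only the pattern matters):

$ sudo fstrim -v /
/: 1 GiB (1073741824 bytes) trimmed
$ sudo fstrim -v /
/: 0 B (0 bytes) trimmed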

Periodic trim "Enabling the timer will activate the service weekly."

So if I want a trim once a week, I don't enable the service but just the timer? And do I understand correctly that if I enable the service, I get a trim every reboot, and if I enable both the service and the timer, I get a trim every reboot and every week? Raygun (talk) 09:37, 25 May 2017 (UTC)

fstrim.service does not contain the [Install] section, so it cannot be enabled. -- Lahwaacz (talk) 15:48, 25 May 2017 (UTC)
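So for a weekly trim, only the timer unit gets enabled; a quick sketch:

$ sudo systemctl enable --now fstrim.timer
$ systemctl list-timers fstrim.timer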

fstrim on non-ssd, e.g. QEMU

I'm not sure if this fits into this article or better into another, but I think it should be mentioned that trim can be useful on non-SSD disks too, e.g. on QEMU disks backed by hard drives. This is of course not the same trim as on an SSD, but the intention is the same: inform the storage system about unused blocks, e.g. to avoid fragmentation or to shrink the image. Example from a KVM system with QEMU disks, labeled not as SSD but as SAS storage on the product page:

$ lsblk --discard
NAME    DISC-ALN DISC-GRAN DISC-MAX DISC-ZERO
sda            0        4K       1G         0
├─sda2         0        4K       1G         0
$ sudo fstrim -v -a
/: 390,8 MiB (409780224 bytes) trimmed on /dev/sda2
# corresponding /dev/disk/by-id symlink:
scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0-part2 -> ../../sda2

Ua4000 (talk) 18:00, 25 January 2021 (UTC)
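For trim requests from the guest to actually reach the host image, the virtual disk has to advertise discard support on the QEMU side; a sketch under that assumption (disk.qcow2 and drive0 are placeholders):

$ qemu-system-x86_64 \
    -device virtio-scsi-pci \
    -drive file=disk.qcow2,if=none,id=drive0,discard=unmap \
    -device scsi-hd,drive=drive0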

The links provided for SanDisk are broken

The SanDisk subsection under the Firmware section lists six links that are all broken: three for release notes and three for manually updating.

—This unsigned comment is by Pound Hash (talk) 00:33, 21 December 2021. Please sign your posts with ~~~~!