Difference between revisions of "Talk:Solid State Drives"

  
Discussions removed in the latest revision:

== Alignment ==

The information about alignment is missing.
[[User:Juen|Juen]] ([[User talk:Juen|talk]]) 06:28, 20 January 2013 (UTC)

:Both {f,g}disk handle alignment automatically. Why introduce erroneous info to the already bloated article?
:[[User:Graysky|Graysky]] ([[User talk:Graysky|talk]]) 13:48, 20 January 2013 (UTC)

== What about F2FS? ==

In the [https://wiki.archlinux.org/index.php/Solid_State_Drives#Choice_of_Filesystem Choice of Filesystem] section, isn't it time to include some information about [http://en.wikipedia.org/wiki/F2FS F2FS], since [http://hothardware.com/News/Linux-Kernel-38-Released-Includes-F2Fs-Files-System-for-Solid-State-Storage/ Linux kernel 3.8 includes the F2FS file system for solid-state storage]?

:Perhaps, but after examining this [http://www.phoronix.com/scan.php?page=article&item=linux_f2fs_benchmarks&num=1 performance comparison by Phoronix], you have to ask if the (slight) performance advantage of F2FS outweighs the stability and support of ext4.
:Might the article become more bloated and confusing for little or no real advantage? [[User:Kal|Kal]] ([[User talk:Kal|talk]]) 17:12, 19 August 2013 (UTC)

::See [[ArchWiki:About#Comprehensive]]. Information relevant to Arch Linux should be provided so that the end user can make the decision. As I understand it, the ArchWiki is meant to be ''descriptive'', not ''prescriptive''. -- [[User:AdamT|AdamT]] ([[User_talk:AdamT|Talk]]) 20:26, 17 November 2013 (UTC)

Latest revision as of 15:48, 25 May 2017

Don't use noop

The noop scheduler will perform slowly, but as a result it greatly frees up CPU cycles. In the real world this will not increase the speed of your reads/writes compared to CFQ, but it will consume fewer CPU resources. You can benchmark the deadline scheduler, which MAY increase performance in some circumstances. By real-world benchmarks, I mean anything but hdparm. —This unsigned comment is by Tama00 (talk) 22:38, 21 December 2011‎. Please sign your posts with ~~~~!

Interesting assertion... do you have any data or a source to back it up?
Graysky 17:20, 21 December 2011 (EST)
It seems that the cfq scheduler already knows what to do when an SSD is detected, so there is no need to change it.
raymondcal 2012, may 29
CFQ has some optimizations for SSDs and if it detects a non-rotational media which can support higher queue depth (multiple requests at in flight at a time), then it cuts down on idling of individual queues and all the queues move to sync-noidle tree and only tree idle remains. This tree idling provides isolation with buffered write queues on async tree.
https://www.kernel.org/doc/Documentation/block/cfq-iosched.txt
ushi 2013, November 03
So does anyone have any good data? My research says deadline is best and that "cfq has some optimizations" doesn't mean it's better than others.
MindfulMonk (talk) 22:38, 5 July 2014 (UTC)

TRIM and RAID

The warning in the section Enable continuous TRIM by mount flag says: "A possible patch (http://www.spinics.net/lists/raid/msg49440.html) has been posted on July 19, 2015."

The quoted spinics post seems to be saying that there is a serious kernel bug which manifests when SSDs are used with Linux software RAID.

Is that a confirmed kernel bug? If it is, then shouldn't the wiki point this out? —This unsigned comment is by Sja1440 (talk) 27 September 2015 08:26. Please sign your posts with ~~~~!

The bug manifested only with particular brands' SSD firmware, which were blacklisted. Since TRIM is a standard and other brands work fine, this issue was not regarded as a kernel bug, to my knowledge. Nonetheless, they may (have) merge(d) the patch to work around the firmware bugs. If someone has a related bug report where it is tracked, or a kernel commit, it would be useful to add, I agree. -Indigo (talk) 09:23, 27 September 2015 (UTC)
I found this Slashdot discussion (http://linux.slashdot.org/story/15/07/30/1814200/samsung-finds-fixes-bug-in-linux-trim-code, linked with [1]) where there's a link to a kernel commit, although they're talking of a bug in the firmware too, and in fact the devices seem to be still blacklisted (https://github.com/torvalds/linux/blob/master/drivers/ata/libata-core.c#L4220). — Kynikos (talk) 02:26, 28 September 2015 (UTC)
Samsung's patches for the data corruption are now merged, but the full blacklist is still there in Linux 4.5 (https://github.com/torvalds/linux/blob/v4.5/drivers/ata/libata-core.c#L4223) for a lot of SSD brands, in particular all the popular Samsung 8 series (840/850 EVO/PRO). I still cannot find any information about the source of the issue and when they will be whitelisted. The article should warn users that all those SSD models should be avoided until there is a solution. Note that also --Nomorsad (talk) 11:03, 26 March 2016 (UTC)
The article does warn; I have updated it with your 4.5 link, thanks for the follow-up. They have added one more drive model to the blacklist since then.[2] --Indigo (talk) 11:19, 26 March 2016 (UTC)

Using discard option to mount root directory on xfs file system is no use

After I modified /etc/fstab and added the discard option to the / entry, I rebooted my laptop. But when I use the mount command to check the options with which the file systems were mounted, there is no discard option in the / entry, while the other directories such as /home and /boot are mounted with the discard option correctly. I have tried to remount the root directory with the discard option, but it has no effect.

—This unsigned comment is by Cfunc (talk) 03:03, 27 May 2016‎. Please sign your posts with ~~~~!

Did this perhaps solve itself after the next initramfs generation? --Indigo (talk) 10:22, 4 August 2016 (UTC)
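For reference, a minimal /etc/fstab sketch of the setup being described; the UUIDs, partition layout, and pass numbers here are placeholder assumptions, not taken from the poster's system:

```
# /etc/fstab (illustrative; UUIDs are placeholders)
UUID=aaaa-root /      xfs  defaults,discard 0 1
UUID=bbbb-home /home  xfs  defaults,discard 0 2
UUID=cccc-boot /boot  ext4 defaults,discard 0 2
```

Whether / actually picks the option up can also depend on how the initramfs mounts the root file system, which matches the suggestion above to check again after the next initramfs regeneration.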

Remove the section on continuous trim

The section on continuous trim should be removed and a warning added to the trim timer section. There are about a million reasons why people shouldn't use continuous trim and unless there is a very compelling reason for someone to use it that I don't know of, that information doesn't seem useful enough to stick around and could cause a lot of harm.

Meskarune (talk) 13:30, 29 July 2016 (UTC)

Lack of information can cause a lot of harm as well, so I think we'd better keep the section. -- Lahwaacz (talk) 13:50, 29 July 2016 (UTC)
I've trimmed the warning a little, so now I think the only reason against is performance related, although the given link for Theodor Ts'o's opinion doesn't work anymore. -- Lahwaacz (talk) 16:42, 29 July 2016 (UTC)
The link does not work because gmane is unavailable at the moment (https://lars.ingebrigtsen.no/2016/07/28/the-end-of-gmane/); it is likely someone else will resume its web service, so the links may become available again.
I agree regarding continuous trim, it is important info. (I don't think we need the singular quote from Ted Ts'o's removed link; it is dated.)
Also, one may argue that since the identification and blacklisting of unreliable devices (some are explicitly whitelisted for certain trim methods too), the reasons to mount with discard have grown. Another reason to have continuous trim enabled: imagine your device runs at about ~66% capacity. Since drives don't keep a table of what was trimmed in the last run, each timed fstrim runs over a third of the drive, again and again. With a discard mount flag, any freed-up space is only trimmed once. How much wear-levelling difference this makes depends on the usage pattern: the more static the data on the drive (e.g. a mailing list archive like gmane), the larger the proportion of wear from fstrim. Even if wear-levelling is not an issue with the device, letting it perform an action once instead of redundantly is a matter of efficiency. Whether you find this a compelling reason, I don't know. --Indigo (talk) 07:13, 30 July 2016 (UTC)
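The redundant-trimming point above can be made concrete with a back-of-the-envelope calculation; every figure below is an illustrative assumption, not data from any real device:

```python
# Toy model of the wear argument above. All numbers are illustrative
# assumptions, not measurements of a real drive.

drive_gib = 500            # drive capacity in GiB
occupied_gib = 330         # ~66% of the drive holds mostly static data
freed_per_week_gib = 10    # assumed newly freed space per week

free_gib = drive_gib - occupied_gib   # space a periodic fstrim re-trims
weeks = 52

# A weekly fstrim re-trims *all* currently free space on every run,
# because the drive keeps no record of what was already trimmed.
periodic_trim_gib = free_gib * weeks

# With the discard mount option, each freed block is trimmed exactly
# once, at the moment it is freed.
continuous_trim_gib = freed_per_week_gib * weeks

print(periodic_trim_gib)    # 8840
print(continuous_trim_gib)  # 520
```

Under these toy assumptions, weekly fstrim issues an order of magnitude more TRIM work per year than continuous discard, though how much that matters for wear depends entirely on the drive and the usage pattern, as noted above.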

fstrim, btrfs and other fs

IMO, the table with trim support should note that running fstrim on btrfs produces different output than on other filesystems. If I run fstrim multiple times in a row, for example on ext4, the first time I get "freed X bytes", on subsequent runs I get "freed 0 bytes", and those runs complete immediately. On btrfs, however, I always get the same number and each run takes as long as the first. I haven't tried filesystems other than these two, but this can be confusing and suggest a misconfiguration to users who are not aware of it. --Zopper (talk) 12:17, 1 Sep 2016 (UTC)

atime: lazytime instead of noatime

Since Linux kernel 4.0 there is the lazytime option (see https://wiki.archlinux.org/index.php/Fstab#atime_options). Shouldn't it be the default suggestion for SSDs here? (On this page, we put noatime in the fstab example.) --Apaan (talk) 22:29, 9 May 2017 (UTC)

The example did not suggest anything regarding the *atime options, so I've removed it. Note that lazytime is not a replacement for noatime, it works in combination with the other *atime options. Closing. -- Lahwaacz (talk) 06:15, 10 May 2017 (UTC)
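As a concrete illustration of the reply above (UUIDs and mount points are placeholders): lazytime does not replace noatime, it combines with whichever *atime behaviour is in effect, so the two can appear together:

```
# Illustrative /etc/fstab lines; UUIDs are placeholders.
# relatime (the kernel default) together with lazytime:
UUID=dddd-data /data ext4 defaults,lazytime 0 2
# noatime together with lazytime (atime updates off; remaining
# timestamp updates are batched in memory and written out lazily):
UUID=eeee-home /home ext4 defaults,noatime,lazytime 0 2
```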

Periodic trim "Enabling the timer will activate the service weekly."

So if I want a trim once a week, I don't enable the service but just the timer? And do I understand correctly that if I enable the service, I get a trim every reboot and if I enable both the service and the timer, I get a trim every reboot and every week? Raygun (talk) 09:37, 25 May 2017 (UTC)

fstrim.service does not contain the [Install] section, so it cannot be enabled. -- Lahwaacz (talk) 15:48, 25 May 2017 (UTC)
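For context, a sketch of what the shipped fstrim.timer looks like (based on util-linux around that time; the exact unit content may differ between versions):

```
# /usr/lib/systemd/system/fstrim.timer (sketch, not verbatim)
[Unit]
Description=Discard unused blocks once a week

[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true

[Install]
WantedBy=timers.target
```

So enabling only the timer gives one trim per week; the service has no [Install] section and is only started on demand by the timer (or manually), so there is no per-boot trim to combine with it.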