SSD

Solid State Drives - Best Practices

Note: Readers are encouraged to contribute to enhance the quality of this article.

As most Archers know, Solid State Drives (SSDs) are not plug-and-play devices. Special considerations such as partition alignment, choice of file system, TRIM support, etc. are needed to set up SSDs for optimal performance. This article attempts to capture key, referenced recommendations to enable users to get the most out of SSDs under Arch (and Linux in general).

Pre-Purchase Considerations

There are several key features to look for prior to purchasing a contemporary SSD.

Key Features

  • Native TRIM support is a vital feature that both prolongs SSD lifetime and reduces the loss of performance over time.
  • Buying the right size SSD is also key. As with any file system, target <75 % occupancy on all SSD partitions to ensure efficient use by Linux.

On-line Reviews

This section is not meant to be all-inclusive, but does capture some key reviews.

Partitions

System Partition Scheme

An overarching theme for SSD usage is simplicity: locate high-read/write partitions on a physical HDD rather than on the SSD. Doing so will add life to the SSD. For example, consider relocating the /var partition to a physical disk in the system rather than to the SSD itself to avoid read/write wear. Many users elect to keep only /boot, /, and /home on the SSD, locating /var and /tmp on a physical HDD - or better yet, in Random Access Memory (RAM), provided the system has enough to spare. See the next section for more on this procedure.

Locate /tmp in RAM

For systems with >=4 GiB of memory, locating /tmp in RAM is desirable and easily achieved by first clearing the physical /tmp partition and then mounting it to tmpfs (RAM) in /etc/fstab. The following line gives an example:

none	/tmp	tmpfs	nodev,nosuid,nodiratime,noatime,size=1000M,mode=1777	0	0
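
After adding this line, the tmpfs can be activated without a reboot (assuming nothing is currently mounted on /tmp) and then verified; note that the size= value above is only an example and should be adjusted to the memory available:

# mount -a
$ df -h /tmp

The df output should show tmpfs as the file system backing /tmp.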

Locate Browser Profiles in RAM

For the same reason outlined above, one can easily mount one's Firefox profile (and those of other browsers, such as Chromium) into RAM via tmpfs. For more on this procedure, see the Speed-up Firefox Using tmpfs article; a minimal example is shown below. In addition to the obvious speed enhancements, users will also save read/write cycles on their SSD by doing so.
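
As a minimal sketch of the idea (the username and profile path below are hypothetical, and the linked article describes how to sync the profile back to disk so it is not lost at shutdown), an /etc/fstab entry analogous to the /tmp example above can be used:

none	/home/archie/.mozilla/firefox	tmpfs	nodev,nosuid,noatime,size=128M	0	0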

Physical Partitioning and Alignment

Proper partition alignment is key for optimal performance and longevity. The community seems to be in agreement on the use of fdisk as the utility of choice for partitioning SSDs (although one can also find guides whose authors advocate using parted). Consensus on the best settings for the number of heads and sectors per track is tough to find; there seem to be two different camps on this issue, as shown below. The key is that partitions need to be aligned to the SSD's EBS (erase block size). The Intel X25-M, for example, uses an EBS of 512 KiB.

Ted Tso recommends using a setting of 224/56 for SSDs with an EBS of 512 KiB:

# fdisk -H 224 -S 56 /dev/sdX

While others advocate a setting of 32/32 for SSDs with an EBS of 512 KiB:

# fdisk -H 32 -S 32 /dev/sdX
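
Whichever geometry is chosen, a rough sanity check is possible by listing the partition table in sector units and confirming that each partition's starting sector, multiplied by 512 bytes, is a multiple of the EBS:

# fdisk -lu /dev/sdX

For example, a partition starting at sector 1024 begins at 1024 × 512 B = 512 KiB, which is aligned to a 512 KiB erase block.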

Additional Reading

File Systems

Many options exist for file systems, including ext2, ext3, ext4, XFS, and btrfs. Initially, ext2 was thought to be a good choice since its lack of journaling avoids extraneous read/write cycles. Ext4 can also be used without a journal and is thought to be superior to ext2 in a number of areas. The obvious drawback of a non-journaling file system is data loss following an ungraceful unmount (e.g. after a power failure). For modern SSDs, Ted Tso advocates that journaling can be enabled with minimal extraneous read/write cycles under most circumstances:

Amount of data written (in megabytes) on an ext4 file system mounted with noatime.

operation    journal    w/o journal    percent change
git clone    367.0      353.0           3.81 %
make         207.6      199.4           3.95 %
make clean     6.45       3.73          42.17 %

"What the results show is that metadata-heavy workloads, such as make clean, do result in almost twice the amount data written to disk. This is to be expected, since all changes to metadata blocks are first written to the journal and the journal transaction committed before the metadata is written to their final location on disk. However, for more common workloads where we are writing data as well as modifying filesystem metadata blocks, the difference is much smaller: 4% for the git clone, and 12% for the actual kernel compile."

Btrfs support has been included with the mainline 2.6.29 release of the Linux kernel. Some feel that it is not mature enough for production use while there are also early adopters of this potential successor to ext4. It should be noted that at the time this article was written (27-June-2010), a stable version of btrfs does not exist. See this blog entry for more on btrfs.

Mount Flags in /etc/fstab

There are several key mount flags to use in one's /etc/fstab entries for SSD partitions.

  • noatime - Read accesses to the file system will no longer result in an update to the atime information associated with the file. The importance of the noatime setting is that it eliminates the need for the system to make writes to the file system for files which are simply being read. Since writes can be somewhat expensive, this can result in measurable performance gains. Note that the write time (mtime) information for a file will still be updated whenever the file is written to with this option enabled.
  • discard - The discard flag will enable the benefits of the TRIM command so long as one is using kernel version >=2.6.33.

An example entry in /etc/fstab using both flags:

/dev/sda1 / ext4 defaults,noatime,discard 0 1

Warning: Users need to be certain that kernel version 2.6.33 or above is being used AND that their SSD supports TRIM before attempting to mount a partition with the discard flag. Data loss can occur!
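
One way to confirm that a drive advertises TRIM (assuming the hdparm package is installed) is to query its identification data:

# hdparm -I /dev/sda | grep TRIM

A line such as "Data Set Management TRIM supported" indicates the drive reports TRIM capability.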

I/O Scheduler

Consider switching from the default cfq scheduler to the noop or deadline scheduler. The noop scheduler, for example, services requests in the order they are received, without giving any consideration to where the data physically resides on the disk. This is thought to be advantageous for SSDs since seek times are identical for all sectors. For more on schedulers, see this Linux-mag article.

The cfq scheduler is enabled by default on Arch. Verify this by viewing the contents of /sys/block/sda/queue/scheduler:

$ cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

The scheduler currently in use is indicated by the brackets around its name. To switch to the noop scheduler, one can add the following line to a startup file such as /etc/rc.local:

# echo noop > /sys/block/sda/queue/scheduler
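
Alternatively, the default scheduler for all block devices can be set at boot time with the elevator kernel parameter. As a sketch assuming GRUB legacy and an example kernel image and root device, append it to the kernel line in /boot/grub/menu.lst:

kernel /boot/vmlinuz26 root=/dev/sda1 ro elevator=noop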

Swap Space on SSDs

One can place a swap partition on an SSD. Note that most modern desktops with more than 2 GiB of memory rarely use swap at all. The notable exception is systems which make use of the hibernate feature. The following is a recommended tweak for SSDs using a swap partition; it reduces the "swappiness" of the system, thus avoiding writes to swap.

# echo 1 > /proc/sys/vm/swappiness

Or one can simply modify /etc/sysctl.conf as recommended in the Maximizing Performance wiki article:

vm.swappiness=20
vm.vfs_cache_pressure=50
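
Settings placed in /etc/sysctl.conf take effect at the next boot; they can also be applied immediately without rebooting:

# sysctl -p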