Benchmarking/Data storage devices
 
=== Model Specific Data ===
 
Please contribute to this section by using the template below to post the results you obtain.


See [http://www.anandtech.com/bench/SSD/65 here] for a large database of SSD benchmarks.
 
 
=== Template ===
 
*SSD:
 
*Model Number:
 
*Firmware Version:
 
*Capacity: x GB
 
*Controller:
 
*User:
 
*Kernel:
 
*Filesystem: (optional) notes about your filesystem
 
*Notes: (optional) any additional notes
 
 
# hdparm -Tt /dev/sdx
 
 
Results as reported by gnome-disks:

Minimum Read Rate: x MB/s
 
Maximum Read Rate: x MB/s
 
Average Read Rate: x MB/s
 
Average Access Time: x ms
 
 
$ dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
 
# echo 3 > /proc/sys/vm/drop_caches
 
$ dd if=tempfile of=/dev/null bs=1M count=1024
 
$ dd if=tempfile of=/dev/null bs=1M count=1024
 
 
= Encrypted Partitions =
 
 
This section collects benchmark data for encrypted partitions.
 
 
== dm-crypt with AES ==
 
 
Please list your CPU and whether you are using AES-NI. Without AES-NI support in the CPU, the processor will be the bottleneck long before you reach the >500 MB/s speeds of modern SSDs.
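
A quick way to check whether the CPU exposes AES-NI and to get a rough, storage-independent idea of cipher throughput (cryptsetup benchmark measures in-memory encryption only; shown here as a hint, not part of the original template):

 $ grep -m1 -o aes /proc/cpuinfo   # prints "aes" if the CPU supports AES-NI
 $ cryptsetup benchmark            # in-memory throughput of common ciphers, including aes-xts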
 
 
'''i7-620M'''
 
*~570 MB/s with AES-NI
 
*~100 MB/s without AES-NI
 
 
'''i5-3210M'''
 
*~500 MB/s with AES-NI
 
*~200 MB/s without AES-NI
 
 
=== Crucial ===
 
 
The Crucial drive does not rely on compression to reach its rated speeds, so it is expected to stay fast even with incompressible (encrypted) data.
 
 
==== Crucial M4 256 GB ====
 
 
* User: crobe
 
* Filesystem: ext4 on dm-crypt
 
* Running a SATA 6 Gbit/s drive on an older 3 Gbit/s controller
 
* Comment: The drive is faster at writing (on fresh space) than at reading, which others have reported as well. This may be the limit of this machine.
 
 
# cryptsetup status
 
type:    LUKS1
 
cipher:  aes-xts-plain
 
keysize: 256 bits
 
 
# hdparm -Tt /dev/sda
 
/dev/sda:
 
Timing cached reads:  3012 MB in  2.00 seconds = 1507.62 MB/sec
 
Timing buffered disk reads: 558 MB in  3.00 seconds = 185.93 MB/sec
 
# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 7.86539 s, 137 MB/s
 
# echo 3 > /proc/sys/vm/drop_caches
 
# dd if=tempfile of=/dev/null bs=1M count=1024
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 9.78325 s, 110 MB/s
 
 
=== OCZ ===
 
 
OCZ drives use compression on data, so with incompressible encrypted data, transfer speeds are expected to be much lower. Seek times should remain just as low, however, and the drive should not slow down as it fills up, so performance should still be adequate.
 
 
==== OCZ-VERTEX2 180GB ====
 
 
* SSD: OCZ
 
* Model Number: Vertex2
 
* Capacity: 180 GB
 
* User: crobe
 
* Filesystem: ext4 on dm-crypt with AES, essiv, sha256
 
* The bottleneck for the read/write speeds is definitely the drive
 
 
# hdparm -Tt /dev/sda
 
/dev/sda:
 
Timing cached reads:  2842 MB in  2.00 seconds = 1422.61 MB/sec
 
Timing buffered disk reads: 550 MB in  3.00 seconds = 183.26 MB/sec
 
# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 16.9194 s, 63.5 MB/s
 
# echo 3 > /proc/sys/vm/drop_caches
 
# dd if=tempfile of=/dev/null bs=1M count=1024
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 14.5509 s, 73.8 MB/s
 
 
Same values for bonnie++.
 
 
=== Samsung ===
 
 
==== SAMSUNG 470 128GB  ====
 
 
*SSD: SAMSUNG 470 128GB
 
*Firmware: AXM09B1Q
 
*Capacity: 128 GB
 
*User: FredericChopin
 
 
# cryptsetup status
 
type:    LUKS1
 
cipher:  aes-xts-plain
 
keysize: 512 bits
 
offset:  8192 sectors
 
 
# hdparm -Tt /dev/sda
 
/dev/sda:
 
Timing cached reads:  3226 MB in  2.00 seconds = 1614.42 MB/sec
 
Timing buffered disk reads: 570 MB in  3.00 seconds = 189.84 MB/sec
 
# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 9.62518 s, 112 MB/s
 
# echo 3 > /proc/sys/vm/drop_caches
 
# dd if=tempfile of=/dev/null bs=1M count=1024
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 9.34282 s, 115 MB/s
 
 
==== SAMSUNG 830 256GB ====
 
 
*SSD: Samsung 830 256GB
 
*Model Number: MZ-7PC256B/WW
 
*Firmware Version: CXM03BQ1
 
*Capacity: 256 GB
 
*User: stefseel
 
*Kernel: 3.4.6-1-ARCH (with aesni_intel module)
 
*Filesystem: ext4 (relatime,discard) over LVM2 over dm-crypt/LUKS (allow-discards)
 
*System: Lenovo ThinkPad T430 (i5-3210M)
 
 
# hdparm -Tt /dev/sda
 
/dev/sda:
 
Timing cached reads:  15000 MB in  2.00 seconds = 7500 MB/sec
 
Timing buffered disk reads: 1470 MB in  3.00 seconds = 490 MB/sec
 
 
With the default Arch settings and pm-utils installed: JOURNAL_COMMIT_TIME_AC=0, DRIVE_READAHEAD_AC=256
 
 
# dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 3.62668 s, 300 MB/s
 
 
# echo 3 > /proc/sys/vm/drop_caches
 
# dd if=tempfile of=/dev/null bs=1M count=1024
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 4.07337 s, 170 MB/s
 
 
# dd if=tempfile of=/dev/null bs=1M count=1024
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 0.154298 s, 7.0 GB/s
 
 
The poor read performance was annoying. In battery mode (AC unplugged) the read rate was 500 MB/s. Some research showed that pm-utils is to blame: in AC mode it sets the journal commit time to zero and readahead to 256, whereas in battery mode it sets the journal commit time to 600 and readahead to 3072 (see the scripts /usr/lib/pm-utils/power.d/journal-commit and /usr/lib/pm-utils/power.d/readahead). Adding a custom config that always sets the journal commit time to 600 and the readahead to 4096 solved the problem:
 
 
# cat /etc/pm/config.d/config
 
DRIVE_READAHEAD_AC=4096
 
DRIVE_READAHEAD_BAT=4096
 
JOURNAL_COMMIT_TIME_AC=600
 
JOURNAL_COMMIT_TIME_BAT=600
 
 
# echo 3 > /proc/sys/vm/drop_caches
 
# dd if=tempfile of=/dev/null bs=1M count=1024
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 2.15534 s, 500 MB/s
 
 
However, there is still an issue: after resuming from suspend, the read rate drops to 270 MB/s.
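
One way to check whether the slowdown after resume comes from the readahead value being reset is to query and re-apply it with blockdev (the device name below is an example):

 # blockdev --getra /dev/sda        # current readahead, in 512-byte sectors
 # blockdev --setra 4096 /dev/sda   # restore the desired value if it was reset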
 
 
 
==== SAMSUNG 830 256GB  ====
 
*[[User: hunterthompson]]
 
*SSD: SAMSUNG 830 256GB
 
*Firmware: CXM03B1Q
 
*Capacity: 256 GB
 
*System: Thinkpad X230, 16GB PC-1600 CL9 Kingston HyperX
 
*CPU: i7-3520M, AES-NI, Hyper-Threaded, 2.9GHz-3.6GHz, Steady 3.4GHz with all 4 threads 100%
 
*Kernel: x86_64 linux-grsec 3.5.4-1-grsec (Desktop, Virt, Host, KVM, Security)
 
*Encryption: Full Disk, LVM2 on LUKS dm-crypt, Allow-Discards
 
*Cryptsetup: -h sha512 -c aes-xts-plain64 -y -s 512 luksFormat --align-payload=8192
 
*Filesystem: mkfs.ext4 -b 4096 -E stride=128,stripe-width=128 /dev/mapper/VolGroup00-lvolhome
 
*fstab: ext4,rw,noatime,nodiratime,discard,stripe=128,data=ordered,errors=remount-ro
 
*Notes: SATAIII, partitions aligned and no swap
 
 
% dd bs=1M count=1024 if=7600_Retail_Ultimate_DVD.iso  of=/dev/null conv=fdatasync
 
dd: fsync failed for ‘/dev/null’: Invalid argument
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 3.42075 s, 314 MB/s
 
 
% dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 3.48574 s, 308 MB/s
 
 
% dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 3.45361 s, 311 MB/s
 
 
% dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 3.44276 s, 312 MB/s
 
 
=== Plextor ===
 
==== Plextor M5M 128GB  ====
 
*[[User: ror191505]]
 
*SSD: Plextor M5M 128GB
 
*Firmware: 1.04
 
*Capacity: 128 GB
 
*System: ASUS K56CB, 8 GB RAM
 
*CPU: i7-3517U, AES-NI, Hyper-Threaded, 1.9GHz-3.0GHz
 
*Kernel: x86_64 3.13.5-1-ARCH
 
*Encryption: LVM2 on LUKS dm-crypt
 
*Cryptsetup: -h sha512 -c aes-xts-plain64 -y -s 512 luksFormat --align-payload=8192
 
*Filesystem: EXT4
 
 
# cryptsetup status /dev/mapper/crypted 
 
/dev/mapper/crypted is active and is in use.
 
  type:    LUKS1
 
  cipher:  aes-xts-plain64
 
  keysize: 256 bits
 
  device:  /dev/sdb2
 
  offset:  4096 sectors
 
  size:    249041517 sectors
 
  mode:    read/write
 
 
 
% dd bs=1M count=1024 if=film.mkv  of=/dev/null conv=fdatasync
 
dd: fsync failed for '/dev/null': Invalid argument
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 2.65242 s, 405 MB/s
 
 
% dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 3.69877 s, 290 MB/s
 
 
% dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 3.67513 s, 292 MB/s
 
 
% dd bs=1M count=1024 if=/dev/zero of=test conv=fdatasync
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 3.68793 s, 291 MB/s
 
 
% hdparm -Tt /dev/sdb
 
/dev/sdb:
 
Timing cached reads:  12774 MB in  2.00 seconds = 6394.84 MB/sec
 
Timing buffered disk reads: 1264 MB in  3.00 seconds = 420.68 MB/sec
 
 
== Truecrypt ==
 
 
= Comparison - high end SCSI RAID 0 hard drive benchmark =
 
== LSI 320-2X Megaraid SCSI ==
 
 
* SSD: N/A
 
* Model Number: LSI MegaRAID 320-2x, RAID 0 across two Seagate Cheetah ST373455LC 15,000 RPM 146 GB drives
 
* Capacity: 292 GB
 
* User: rabinnh
 
* Filesystem: ext4 on x86_64
 
* Comment: No, this is not an SSD, but readers should have a reasonable basis for comparison to a high-end hard drive system, and you will not get much higher end for an individual workstation. The cost of this disk subsystem is conservatively $760, and it delivers at best half the performance of most SSDs.
 
 
<pre># hdparm -Tt /dev/sda2
 
/dev/sda2:
 
Timing cached reads:  6344 MB in  2.00 seconds = 3174.02 MB/sec
 
Timing buffered disk reads: 442 MB in  3.01 seconds = 146.97 MB/sec</pre>
 
 
<pre>$ dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 7.13482 s, 150 MB/s
 
</pre>
 
<pre>
 
# echo 3 > /proc/sys/vm/drop_caches
 
$ dd if=tempfile of=/dev/null bs=1M count=1024
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 7.24267 s, 148 MB/s</pre>
 
 
<pre>$ dd if=tempfile of=/dev/null bs=1M count=1024
 
1024+0 records in
 
1024+0 records out
 
1073741824 bytes (1.1 GB) copied, 0.459814 s, 2.3 GB/s</pre>
 


This article covers several Linux-native apps that benchmark I/O devices such as HDDs, SSDs, USB thumb drives, etc. There is also a "database" section specific to SSDs meant to capture user-entered benchmark results.

== Introduction ==

Several I/O benchmark options exist under Linux.

*Using hdparm with the -Tt switch, one can time sequential reads. This method is independent of partition alignment!
*There is a graphical benchmark called gnome-disks contained in the gnome-disk-utility package that will give min/max/average read rates along with average access time and a nice graphical display. This method is independent of partition alignment!
*The dd utility can be used to measure both reads and writes. This method is dependent on partition alignment! In other words, if you failed to properly align your partitions, this fact will be seen here, since you are writing and reading to a mounted filesystem.
*Bonnie++ (caution: by default, bonnie++ writes at least twice the size of your RAM to disk; to preserve your SSD, use non-default options, as in the example below)
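
For example, a bonnie++ run that limits the amount of data written (the directory and sizes below are placeholders; see bonnie++(8) for details):

$ bonnie++ -d /path/to/mountpoint -s 2048 -r 1024 -n 0

Here -s caps the test file size at 2048 MB, -r tells bonnie++ to assume 1024 MB of RAM, and -n 0 skips the small-file creation tests.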

=== Using hdparm ===

# hdparm -Tt /dev/sdX
/dev/sdX:
Timing cached reads:   x MB in  y seconds = z MB/sec
Timing buffered disk reads:  x MB in  y seconds = z MB/sec
Note: One should run the above command 4-5 times and manually average the results for an accurate evaluation of read speed per the hdparm man page.
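
A small shell loop (not from the original article) can automate the repeated runs; averaging the reported figures is still done by hand:

# for i in 1 2 3 4 5; do hdparm -Tt /dev/sdX; done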

=== Using gnome-disks ===

$ gnome-disks

Users will need to navigate through the GUI to the benchmark button ("More actions..." => "Benchmark Volume...").

=== Using systemd-analyze ===

$ systemd-analyze plot > boot.svg

This will plot a detailed graphic of the boot sequence: kernel time, userspace time, and the time taken by each service.
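
Other systemd-analyze sub-commands can help attribute a slow boot to individual units (for example, slow mounts or device timeouts):

$ systemd-analyze blame            # time spent starting each unit
$ systemd-analyze critical-chain   # chain of units that delayed reaching the default target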

=== Using dd ===

Note: This method requires the command to be executed from a mounted partition on the device of interest!

First, enter a directory on the SSD with at least 1.1 GB of free space (and one that gives your user rwx permissions) and write a test file to measure write speeds and to give the device something to read:

$ cd /path/to/SSD
$ dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc
1024+0 records in
1024+0 records out
w bytes (x GB) copied, y s, z MB/s

Next, clear the buffer-cache to accurately measure read speeds directly from the device:

# echo 3 > /proc/sys/vm/drop_caches
$ dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
w bytes (x GB) copied, y s, z MB/s
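
If other writes have happened in the meantime, running sync first writes back dirty pages so that drop_caches can discard as much as possible (per the kernel documentation for /proc/sys/vm/drop_caches):

# sync
# echo 3 > /proc/sys/vm/drop_caches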

Now that the last file is in the buffer, repeat the command to see the speed of the buffer-cache:

$ dd if=tempfile of=/dev/null bs=1M count=1024
1024+0 records in
1024+0 records out
w bytes (x GB) copied, y s, z GB/s
Note: One should run the above command 4-5 times and manually average the results for an accurate evaluation of the buffer read speed.

Finally, delete the temp file:

$ rm tempfile
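
The whole procedure can also be collected into a short script. This is only a convenience sketch of the steps above; the mount point is a placeholder and must live on the device being tested, and the script must be run as root:

#!/bin/sh
# dd-based write/read benchmark; run as root for the device under test.
cd /path/to/mountpoint || exit 1
dd if=/dev/zero of=tempfile bs=1M count=1024 conv=fdatasync,notrunc   # write speed
sync
echo 3 > /proc/sys/vm/drop_caches
dd if=tempfile of=/dev/null bs=1M count=1024   # read speed from the device
dd if=tempfile of=/dev/null bs=1M count=1024   # read speed from the buffer cache
rm tempfile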

==== Caveats ====

Some SSD controllers have compression hardware, which may skew benchmark results. See http://www.pugetsystems.com/labs/articles/SSDs-Advertised-vs-Actual-Performance-179/
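
To take controller compression out of the picture, the write test can be repeated with incompressible data; note that /dev/urandom itself may become the bottleneck on fast drives, so treat the result as a lower bound:

$ dd if=/dev/urandom of=tempfile bs=1M count=1024 conv=fdatasync,notrunc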