Software RAID and LVM

Warning: This is NOT an article. This is a work-in-progress revision of Installing with Software RAID or LVM. You're welcome to contribute edits to this page.


The combination of RAID and LVM provides numerous features with few caveats compared to just using RAID.

Preface

Although RAID and LVM may seem like analogous technologies they each present unique features.

RAID

Redundant Array of Independent Disks (RAID) is designed to prevent data loss in the event of a hard disk failure. There are different levels of RAID. RAID 0 (striping) is not really RAID at all, because it provides no redundancy. It does, however, provide a speed benefit. This example will utilize RAID 0 for swap, on the assumption that a desktop system is being used, where the speed increase is worth the possibility of system crash if one of your drives fails. On a server, a RAID 1 or RAID 5 array is more appropriate. The size of a RAID 0 array block device is the size of the smallest component partition times the number of component partitions.

RAID 1 is the most straightforward RAID level: straight mirroring. As with other RAID levels, it only makes sense if the partitions are on different physical disk drives. If one of those drives fails, the block device provided by the RAID array will continue to function as normal. The example will be using RAID 1 for everything except swap. Note that RAID 1 is the only option for the boot partition, because bootloaders (which read the boot partition) do not understand RAID, but a RAID 1 component partition can be read as a normal partition. The size of a RAID 1 array block device is the size of the smallest component partition.

RAID 5 requires 3 or more physical drives, and provides the redundancy of RAID 1 combined with the speed and size benefits of RAID 0. RAID 5 uses striping, like RAID 0, but also stores parity blocks distributed across each member disk. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 5 can withstand the loss of one member disk.
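As a worked example of the capacity rules above: with three 1 TB member partitions, a RAID 0 array provides roughly 3 TB of space, a RAID 1 array provides 1 TB, and a RAID 5 array provides 2 TB (one partition's worth of capacity is consumed by parity).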

Redundancy

Warning: Installing a system with RAID is a complex process that may destroy data. Be sure to backup all data before proceeding.

RAID does not provide a guarantee that your data is safe. If there is a fire, if your computer is stolen or if you have multiple hard drive failures, RAID will not protect your data. Therefore it is important to make backups. Whether you use tape drives, DVDs, CDROMs or another computer, keep a current copy of your data out of your computer (and preferably offsite). Get into the habit of making regular backups. You can also divide the data on your computer into current and archived directories. Then back up the current data frequently, and the archived data occasionally.

LVM

LVM (Logical Volume Management) makes use of the device-mapper feature of the Linux kernel to provide a system of partitions that is independent of the underlying disk's layout. What this means for you is that you can extend and shrink partitions (subject to the filesystem you use allowing this) and add/remove partitions without worrying about whether you have enough contiguous space on a particular disk, without getting caught up in the problems of fdisking a disk that is in use (and wondering whether the kernel is using the old or new partition table) and without having to move other partitions out of the way.

This is strictly an ease-of-management issue: it does not provide any additional security. However, it sits nicely alongside the RAID setup we are using.

Note that LVM is not used for the boot partition, because of the bootloader problem.

Introduction

Note: If you use partitions larger than 2TB you must use a GUID Partition Table (GPT) instead of MBR (see: Gentoo Wiki).

This article uses an example with three similar 1TB SATA hard drives. The article assumes that the drives are accessible as /dev/sda, /dev/sdb, and /dev/sdc. If you are using IDE drives, for maximum performance make sure that each drive is a master on its own separate channel.

Tip: It is also good practice to ensure that only the drives involved in the installation are attached while performing the installation.
The layers involved, from top to bottom:

   LVM Logical Volumes    root
   LVM Volume Groups      array
   RAID Arrays            /dev/md0, /dev/md1
   Hard Drives            /dev/sda, /dev/sdb, /dev/sdc

Swap space

Many tutorials treat the swap space differently, either by creating a separate RAID1 array or an LVM logical volume. When using LVM on top of RAID5, a separate swap array is not necessary: the extra redundancy of a dedicated RAID1 swap array only helps in a scenario (e.g. two drives failing) where you would most likely have already lost the rest of the data in your file system anyway.
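If you follow that approach, swap can simply live as a logical volume inside the volume group created later in this guide (named array below). A minimal sketch, assuming a hypothetical 4 GB size and that the volume group already exists:

# lvcreate -L 4G array -n swap
# mkswap /dev/array/swap
# swapon /dev/array/swap

Note that the example configuration later in this article instead places swap on a separate /dev/md2 array; pick one approach and keep /etc/fstab consistent with it.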

Procedure

Obtain the latest installation media and boot the Arch Linux installer as outlined in the Beginners' Guide, or alternatively, in the Official Arch Linux Install Guide. Follow the directions outlined there until you have reached the Prepare Hard Drive section.

Load kernel modules

Once you have entered the installer open another virtual console by typing ALT + F[2-6]. Load the appropriate RAID (e.g. raid1, raid5) and LVM (i.e. dm-mod) modules. The following example makes use of RAID1 and RAID5.

# modprobe raid1
# modprobe raid5
# modprobe dm-mod

Partition the hard drives

Note: If your hard drives are already prepared and all you want to do is activate RAID and LVM, jump to Activate existing RAID devices and LVM volumes. The partitioning itself can also be done with alternative partitioning software.

Each hard drive will have a 100MB /boot partition and a / partition that takes up the remainder of the disk. The boot partition must be RAID1, because GRUB does not have RAID drivers; any other level will prevent your system from booting. Additionally, if there is a problem with one boot partition, the boot-loader can boot normally from the other two partitions in the /boot array. The remainder of the hard drive will contain a RAID5 array for the rest of the file-system.

We will use cfdisk to create two partitions on each of the three hard drives (i.e. /dev/sda1, /dev/sda2, /dev/sdb1, /dev/sdb2, /dev/sdc1, and /dev/sdc2).

   Name        Flags      Part Type  FS Type          [Label]        Size (MB)
-------------------------------------------------------------------------------
   sda1        Boot        Primary   linux_raid_m     [boot]            100.00
   sda2                    Primary   linux_raid_m     [root]          79900.00
Note: In cfdisk you can use the first letter of each menu command to select it, with the exception of the Write command, which requires an uppercase W (hold Shift) to select it.

Open cfdisk with the first hard drive:

# cfdisk /dev/sda

and create the two partitions in order:

  1. Select New.
  2. Hit Enter to make it a Primary partition.
  3. For sda1 type the appropriate size in MB (see above). For sda2 just hit Enter to select the remainder of the drive.
  4. Hit Enter to place the partition at the Beginning.
  5. Select Type and hit Enter to see the second page of the list, and then type FD for the Linux RAID Autodetect type.
  6. For sda1 select Bootable.
  7. Hit the down arrow (selecting the remaining free space) to go on to the next partition to be created.

When you are done, select Write, and confirm by typing yes to write the partition table to the disk. When finished select Quit and repeat this process for /dev/sdb and /dev/sdc, or use the alternate sfdisk method below.

Note: Make sure to create exactly the same partitions on each disk. If partitions of different sizes are assembled into a RAID array, it will still work, but the array's capacity is determined by the smallest partition, leaving the extra space on the larger partitions wasted.

Clone partitions with sfdisk

You can also use Template:Codeline to clone the partition table from Template:Filename to the other two hard drives.

You can either use the following command:

# sfdisk -d /dev/sda | sfdisk /dev/sdb
# sfdisk -d /dev/sda | sfdisk /dev/sdc

or, dump the partition table from Template:Filename into a file:

# sfdisk -d /dev/sda > table

and then write the partition table to the other two hard drives.

# sfdisk /dev/sdb < table
# sfdisk /dev/sdc < table

Create the RAID Redundant Partitions

After creating the physical partitions, you are ready to set up the /boot and / arrays with mdadm. It is an advanced tool for RAID management that will also be used later to generate /etc/mdadm.conf within the installation environment.

Create the / array at /dev/md0:

# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[abc]2

Some boot loaders (e.g. GRUB, LILO, and SYSLINUX) will not support the default style of metadata created by mdadm (i.e. 1.2), and instead require the older version, 0.90. If you would like to use one of those boot loaders make sure to add the option --metadata=0.90 to the following command. GRUB2 supports the newer 1.x versions of the metadata when combined with an initramfs, which in Arch Linux is generated with mkinitcpio.

Create the /boot array at /dev/md1:

# mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sd[abc]1
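If your boot loader requires the older metadata format mentioned above, the same array would instead be created with the extra option (a sketch, only needed for GRUB legacy, LILO or SYSLINUX):

# mdadm --create /dev/md1 --level=1 --raid-devices=3 --metadata=0.90 /dev/sd[abc]1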

Synchronization

Tip: If you want to avoid the initial resync with new hard drives, add the --assume-clean flag.
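As an illustration, creating the /boot array without the initial resync would look like this (a sketch; only do this with drives you know contain identical or irrelevant data):

# mdadm --create /dev/md1 --level=1 --raid-devices=3 --assume-clean /dev/sd[abc]1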

After you create a RAID volume, it will synchronize the contents of the physical partitions within the array. You can monitor the progress by checking /proc/mdstat:

# cat /proc/mdstat

or refresh the output of /proc/mdstat ten times per second with:

# watch -n .1 cat /proc/mdstat

Further information about the arrays is accessible with:

# mdadm --misc --detail /dev/md[01]
Note: Since the RAID synchronization is transparent to the file-system you can proceed with the installation, but you should not reboot the machine until the drives have settled.

LVM installation

This is where you create the LVM volumes. LVM works with abstract layers, check out LVM and/or its documentation to discover more. What you will be doing in short:

  • Turn block devices (e.g. /dev/sda1 or /dev/md0) into Physical Volume(s) that can be used by LVM
  • Create a Volume Group consisting of Physical Volume(s)
  • Create Logical Volume(s) within the Volume Group

Note: If you are using an Arch Linux install CD <= 0.7.1, you have to create and mount a sysfs partition on /sys, to keep lvm from getting cranky. Otherwise you can skip this mounting of sysfs, unless you run into trouble. If you forget to do this, instead of giving you an intelligent error message, lvm will simply segfault at various inconvenient times.

To mount the sysfs partition, do:

# mkdir /sys
# mount -t sysfs none /sys

Let us get started:

Make sure that the device-mapper module is loaded:

# modprobe dm-mod

Now you need to tell LVM that you have a Physical Volume for it to use. It is really a virtual RAID volume (/dev/md0), but LVM does not know this, or really care. Do:

# pvcreate /dev/md0

This might fail if you are creating the PV on a device with old RAID metadata or on an existing Volume Group. If so, you might want to add the -ff option.

LVM should report back that it has added the Physical Volume. You can confirm this with:

# pvdisplay

Now it is time to create a Volume Group (which I will call array) which has control over the LVM Physical Volume we created. Do:

# vgcreate array /dev/md0

LVM should report that it has created the Volume Group array. You can confirm this with:

# vgdisplay

Next, we create a Logical Volume called root in Volume Group array that fills all the free space left on the volume group:

# lvcreate -l +100%FREE array -n root

LVM should report that it created the Logical Volume root. You can confirm this with:

# lvdisplay

The LVM volume should now be available as /dev/mapper/array-root, or something similar; the display command above will tell you the exact name.

Activate existing RAID devices and LVM volumes

If you already have RAID partitions created on your system and you have also set up LVM, and all you want to do is enable them, follow this simple procedure. This might come in handy if you are switching distributions and do not want to lose data in /home, for example.

First you need to enable RAID support; in this case, RAID1 and RAID5.

# modprobe raid1
# modprobe raid5

Activate the RAID devices: md1 for /boot and md0 for the LVM volume group where the logical volumes reside.

# mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3
# mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1 /dev/sdc1

RAID devices should now be enabled. Check /proc/mdstat.
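To verify, the same status check used during installation applies here:

# cat /proc/mdstat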

If you have not loaded kernel LVM support do so now.

# modprobe dm-mod

Startup of LVM requires just the following two commands:

# vgscan
# vgchange -ay

You can now jump to [3] Set Filesystem Mountpoints in your menu-based setup and mount the created partitions as needed.

Create and Mount the Filesystems

If you are using an installer newer than 2008.03, this step is optional.

Example using ReiserFS (V3):

To create /boot:

# mkreiserfs /dev/md1

To create swap space:

# mkswap /dev/md2

To create /:

# mkreiserfs /dev/array/root

Now, mount the boot and root partitions where the installer expects them:

# mount /dev/array/root /mnt
# mkdir /mnt/boot
# mount /dev/md1 /mnt/boot

We have created all our filesystems! And we are ready to install the OS!

Install and Configure Arch

This section does not attempt to teach you all about the Arch installer. It leaves out some details here and there for brevity, but it should still be easy to follow. If you are having trouble with the installer, you may wish to seek help elsewhere in the wiki or forums.

Now you can continue using the installer to set-up the system and install the packages you need. Here is the walkthrough:

  • Type /arch/setup to launch the main installer.
  • Select < OK > at the opening screen.
  • Select 1 CD_ROM to install from CD-ROM (or 2 FTP if you have a local Arch mirror on FTP).
  • If you have skipped the optional step (Create and Mount the Filesystems) above and have not created any filesystems yet, select 1 Prepare Hard Drive > 3 Set Filesystem Mountpoints and create your filesystems and mountpoints here.
  • Now at the main menu, Select 2 Select Packages and select all the packages in the base category, as well as the mdadm and lvm2 packages from the system category. Note: mdadm & lvm2 are included in base category since arch-base-0.7.2.
  • Select 3 Install Packages. This will take a little while.
  • Note: Because the installer builds the initrd using /etc/mdadm.conf in the target system, you should update that file with your RAID configuration. The original file can simply be deleted, because it only contains comments on how to fill it in correctly, and that is something mdadm can do automatically for you. So delete the original and have mdadm create a new one with the current setup:
    Press Alt-F2 to get a new terminal and log in, then do
# mdadm --examine --scan > /mnt/etc/mdadm.conf
  • Select 4 Configure System:

Add the dm_mod module to the MODULES list in /etc/mkinitcpio.conf.

Add the mdadm and lvm2 hooks to the HOOKS list in /etc/mkinitcpio.conf after udev. See Configuring mkinitcpio using RAID for more details.
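As a rough sketch of what the edited /etc/mkinitcpio.conf might contain (the other hooks shown are typical defaults and may differ on your system; the important part is that mdadm and lvm2 appear after udev and before filesystems):

MODULES="dm_mod"
HOOKS="base udev mdadm lvm2 autodetect pata scsi sata filesystems"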

Edit your /etc/rc.conf. It should contain a USELVM entry already, which you should change to:

USELVM="yes"

Please Note: The rc.sysinit script that parses the USELVM variable entry will accept either yes or YES, however it will not accept mixed case. Please be sure you have got your capitalization correct.

Edit your /etc/fstab to contain the entries:

/dev/array/root         /       reiserfs        defaults        0       1
/dev/md2                swap    swap            defaults        0       0
/dev/md1                /boot   reiserfs        defaults        0       0

At this point, make any other configuration changes you need to other files.

Then exit the configuration menu.

Since you will not be installing Grub from the installer, select 7 Exit Install to leave the installer program.


Old style:

Then specify the raid array you are booting from in /mnt/boot/grub/menu.lst like:

 # Example with /dev/array/root for / & /dev/md1 for /boot:
   kernel /vmlinuz-linux root=/dev/array/root ro  md=1,/dev/sda1,/dev/sdb1,/dev/sdc1 md=0,/dev/sda3,/dev/sdb3,/dev/sdc3


Nowadays (2009.02), with the mdadm hook in the initrd it is no longer necessary to add kernel parameters describing the RAID array(s).

The arrays can be assembled on boot by the kernel using that hook and the contents of /etc/mdadm.conf, which is included in the initrd image when it is built. (See Configuring mkinitcpio using RAID.)
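A generated /etc/mdadm.conf contains one ARRAY line per array, roughly of this form (a sketch; the exact fields and the UUID placeholders below depend on what mdadm --examine --scan reports on your system):

ARRAY /dev/md0 metadata=1.2 UUID=<uuid-of-md0>
ARRAY /dev/md1 metadata=1.2 UUID=<uuid-of-md1>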

An example GRUB boot configuration for booting with root on an LVM volume looks like this:

# (0) Arch Linux
title  Arch Linux
root   (hd0,0)
kernel /vmlinuz-linux root=/dev/array/root ro
initrd /initramfs-linux.img

Install Grub on the Primary Hard Drive

grub 0.97

This can also be done from the installer just fine now (2009.08, and it should also work for 2009.02).

This is the final step before you have a bootable system!

As an overview, the basic concept is to copy over the grub bootloader files into /boot/grub, mount a procfs and a device tree inside of /mnt, then chroot to /mnt so you are effectively inside your new system. Once in your new system, you will run grub to install the bootloader in the boot area of your first hard drive.

Copy the GRUB files into place and get into our chroot:

# cp -a /mnt/usr/lib/grub/i386-pc/* /mnt/boot/grub
# sync
# mount -o bind /dev /mnt/dev
# mount -t proc none /mnt/proc
# mount -t sysfs none /mnt/sys
# chroot /mnt /bin/bash

At this point, you may no longer be able to see keys you type at your console. I am not sure of the reason for this (NOTE: try "chroot /mnt /bin/<shell>"), but you can fix it by typing reset at the prompt.

Once you have got console echo back on, type:

# grub

After a short wait while grub does some looking around, it should come back with a grub prompt. Do:

grub> root (hd0,0)
grub> setup (hd0)
grub> quit

That is it. You can exit your chroot now by hitting CTRL-D or typing exit.

grub 1.98

You can also install grub2 when you are in the chroot environment.

# mount -o bind /dev /mnt/dev
# mount -t proc none /mnt/proc
# mount -t sysfs none /mnt/sys
# chroot /mnt /bin/bash

Install and configure grub2

root@pc-chroot:~# pacman -S grub2
root@pc-chroot:~# grub-mkconfig -o /boot/grub/grub.cfg
root@pc-chroot:~# grub-install --no-floppy --modules="raid" /dev/sda
root@pc-chroot:~# grub-install --no-floppy --modules="raid" /dev/sdb

Reboot

The hard part is all over! Now remove the CD from your CD-ROM drive, and type:

# reboot

Install Grub on the Alternate Boot Drives

Once you have successfully booted your new system for the first time, you will want to install Grub onto the other two disks (or on the other disk if you have only 2 HDDs) so that, in the event of disk failure, the system can be booted from another drive. Log in to your new system as root and do:

# grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdc
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Archive your Filesystem Partition Scheme

Now that you are done, it is worth taking a second to archive the partition layout of each of your drives. This makes it trivially easy to replace/rebuild a disk in the event that one fails. You do this with the sfdisk tool and the following steps:

# mkdir /etc/partitions
# sfdisk --dump /dev/sda >/etc/partitions/disc0.partitions
# sfdisk --dump /dev/sdb >/etc/partitions/disc1.partitions
# sfdisk --dump /dev/sdc >/etc/partitions/disc2.partitions
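If a drive ever needs to be replaced, the saved dump can be written straight back to the new disk with sfdisk (a sketch, assuming the replacement takes the place of /dev/sda):

# sfdisk /dev/sda < /etc/partitions/disc0.partitions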

Management

For LVM management, please have a look at LVM

Mounting from a Live CD

If you want to mount your RAID partition from a Live CD, use

# mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3

(or whatever mdX and drives apply to you)

Note: Live CDs like SystemRescueCd assemble the RAID arrays automatically at boot time if you used partition type fd when creating the arrays.

Removing device, stop using the array

You can remove a device from the array after you mark it as faulty.

# mdadm --fail /dev/md0 /dev/sdxx

Then you can remove it from the array.

# mdadm -r /dev/md0 /dev/sdxx

Remove a device permanently (for example, if you want to use it individually from now on): issue the two commands described above, then:

# mdadm --zero-superblock /dev/sdxx

After this you can use the disk as you did before creating the array.

Warning: If you reuse the removed disk without zeroing the superblock, you will LOSE all your data on the next boot (because mdadm will try to use it as part of the RAID array). DO NOT issue this command on linear or RAID0 arrays or you will LOSE all the data on the RAID array.

Stop using an array:

  1. Unmount the target array
  2. Repeat the three commands described at the beginning of this section on each device.
  3. Stop the array with: mdadm --stop /dev/md0
  4. Remove the corresponding line from /etc/mdadm.conf

Adding a device to the array

Adding new devices with mdadm can be done on a running system with the devices mounted. Partition the new device /dev/sdx using the same layout as one of the devices already in the array, e.g. /dev/sda:

# sfdisk -d /dev/sda > table
# sfdisk /dev/sdx < table

Assemble the RAID arrays if they are not already assembled:

# mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1 /dev/sdc1
# mdadm --assemble /dev/md2 /dev/sda2 /dev/sdb2 /dev/sdc2
# mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3

First, add the new device as a Spare Device to all of the arrays. We will assume you have followed the guide and use separate arrays for /boot RAID 1 (/dev/md1), swap RAID 1 (/dev/md2) and root RAID 5 (/dev/md0).

# mdadm --add /dev/md1 /dev/sdx1
# mdadm --add /dev/md2 /dev/sdx2
# mdadm --add /dev/md0 /dev/sdx3

This should not take long for mdadm to do. Check the progress with:

# cat /proc/mdstat

Check that the device has been added with the command:

# mdadm --misc --detail /dev/md0

It should be listed as a Spare Device.

Tell mdadm to grow the arrays from 3 devices to 4 (or however many devices you want to use):

# mdadm --grow -n 4 /dev/md1
# mdadm --grow -n 4 /dev/md2
# mdadm --grow -n 4 /dev/md0

This will probably take several hours. You need to wait for it to finish before you can continue. Check the progress in /proc/mdstat. The RAID 1 arrays should automatically sync /boot and swap, but you need to install GRUB on the MBR of the new device manually, as shown in the sketch below (see also Install Grub on the Alternate Boot Drives above).
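Installing GRUB on the new device works the same way as in Install Grub on the Alternate Boot Drives (a sketch, assuming the new disk is /dev/sdx):

# grub
grub> device (hd0) /dev/sdx
grub> root (hd0,0)
grub> setup (hd0)
grub> quit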

The rest of this guide will explain how to resize the underlying LVM and filesystem on the RAID 5 array.

Note: I am not sure if this can be done with the volumes mounted; the following assumes you are booting from a live CD/USB.

If you have encrypted your LVM volumes with LUKS, you need to resize the LUKS volume first. Otherwise, ignore this step.

# cryptsetup luksOpen /dev/md0 cryptedlvm
# cryptsetup resize cryptedlvm

Activate the LVM volume groups:

# vgscan
# vgchange -ay

Resize the LVM Physical Volume /dev/md0 (or e.g. /dev/mapper/cryptedlvm if using LUKS) to take up all the available space on the array. You can list them with the command "pvdisplay".

# pvresize /dev/md0

Resize the Logical Volume you wish to allocate the new space to. You can list them with "lvdisplay". Assuming you want to put it all to your /home volume:

# lvresize -l +100%FREE /dev/array/home

To resize the filesystem to allocate the new space use the appropriate tool. If using ext2 you can resize a mounted filesystem with ext2online. For ext3 you can use resize2fs or ext2resize but not while mounted.

You should check the filesystem before resizing.

# e2fsck -f /dev/array/home
# resize2fs /dev/array/home

Read the manuals for lvresize and resize2fs if you want to customize the sizes for the volumes.

Troubleshooting

If you are getting an error on reboot about "invalid raid superblock magic" and you have additional hard drives other than the ones you installed to, check that your hard drive order is correct. During installation, your RAID devices may be hdd, hde and hdf, but during boot they may be hda, hdb and hdc. Adjust your kernel line in /boot/grub/menu.lst accordingly. This is what happened to me, anyway.

Recovering from a broken or missing drive in the raid

You might also get the above-mentioned error when one of the drives breaks for whatever reason. In that case you will have to force the RAID to start even with one disk missing. Type this (change where needed):

# mdadm --manage /dev/md0 --run

Now you should be able to mount it again with something like this (if you had it in fstab):

# mount /dev/md0

Now the RAID should be working again and available to use, although with one disk missing. So, partition a replacement disk the same way as described above in Partition the hard drives. Once that is done you can add the new disk to the RAID by doing:

# mdadm --manage --add /dev/md0 /dev/sdd1

If you type:

# cat /proc/mdstat

you will probably see that the RAID is now active and rebuilding.

You also might want to update your /etc/mdadm.conf file by typing:

# mdadm --examine --scan > /etc/mdadm.conf

That should be about all the steps required to recover your RAID. It certainly worked for me when I lost a drive due to partition table corruption.

Benchmarking

There are several tools for benchmarking a RAID. The most notable improvement is the speed increase when multiple threads are reading from the same RAID volume.

Tiobench specifically benchmarks these performance improvements by measuring fully-threaded I/O on the disk.

Bonnie++ tests database type access to one or more files, and creation, reading, and deleting of small files which can simulate the usage of programs such as Squid, INN, or Maildir format e-mail. The enclosed ZCAV program tests the performance of different zones of a hard drive without writing any data to the disk.
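As a quick illustration (a sketch; see the bonnie++ man page for the full option set, and note that /home/bench and someuser are placeholders for a test directory on the RAID-backed filesystem and the user to run as):

# bonnie++ -d /home/bench -u someuser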

hdparm should NOT be used to benchmark a RAID, because it provides very inconsistent results.

Additional Resources

LVM

Software RAID

RAID & LVM

Forums threads