Software RAID and LVM

[[ru:Installing with Software RAID or LVM]]
[[Category:Getting and installing Arch]]
[[Category:File systems]]
{{Article summary start}}
{{Article summary text|This article will provide an example of how to install and configure Arch Linux with a software RAID or Logical Volume Manager (LVM).}}
{{Article summary link|Software|}}
{{Article summary heading|Related}}
{{Article summary wiki|RAID}}
{{Article summary wiki|LVM}}
{{Article summary wiki|Installing with Fake RAID}}
{{Article summary wiki|Convert a single drive system to RAID}}
{{Article summary end}}

The combination of [[RAID]] and [[LVM]] provides numerous features with few caveats compared to just using RAID.
  
== Introduction ==
{{warning|Be sure to review the [[RAID]] article and be aware of all applicable warnings, particularly if you select RAID5.}}

Although [[RAID]] and [[LVM]] may seem like analogous technologies, they each present unique features. This article uses an example with three similar 1TB SATA hard drives. The article assumes that the drives are accessible as {{ic|/dev/sda}}, {{ic|/dev/sdb}}, and {{ic|/dev/sdc}}. If you are using IDE drives, for maximum performance make sure that each drive is a master on its own separate channel.

=== RAID ===
{{Wikipedia|RAID}}
Redundant Array of Independent Disks (RAID) is designed to prevent data loss in the event of a hard disk failure. There are different [[Wikipedia:Standard RAID levels|levels of RAID]]. [[Wikipedia:Standard RAID levels#RAID 0|RAID 0]] (striping) is not really RAID at all, because it provides no redundancy. It does, however, provide a speed benefit, which is why it is sometimes used for swap on desktop systems, where the speed increase is worth the possibility of a system crash if one of the drives fails. On a server, a RAID 1 or RAID 5 array is more appropriate. The size of a RAID 0 array block device is the size of the smallest component partition times the number of component partitions.

[[Wikipedia:Standard RAID levels#RAID 1|RAID 1]] is the most straightforward RAID level: straight mirroring. As with other RAID levels, it only makes sense if the partitions are on different physical disk drives. If one of those drives fails, the block device provided by the RAID array will continue to function as normal. This example uses RAID 1 for the {{ic|/boot}} and swap arrays. Note that RAID 1 is the only option for the boot partition, because boot loaders (which read the boot partition) do not understand RAID, but a RAID 1 component partition can be read as a normal partition. The size of a RAID 1 array block device is the size of the smallest component partition.

[[Wikipedia:Standard RAID levels#RAID 5|RAID 5]] requires 3 or more physical drives, and provides the redundancy of RAID 1 combined with the speed and size benefits of RAID 0. RAID 5 uses striping, like RAID 0, but also stores parity blocks distributed across each member disk. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 5 can withstand the loss of one member disk. This example uses RAID 5 for the root array; with the three 1TB drives used here, RAID 0 would yield a 3TB block device, RAID 1 a 1TB device, and RAID 5 a 2TB device (the capacity of one drive goes to parity).

==== Redundancy ====
{{Warning|Installing a system with RAID is a complex process that may destroy data. Be sure to back up all data before proceeding.}}

RAID does not provide a guarantee that your data is safe. If there is a fire, if your computer is stolen, or if you have multiple hard drive failures, RAID will not protect your data. Therefore it is important to make backups. Whether you use tape drives, DVDs, CD-ROMs or another computer, keep a current copy of your data out of your computer (and preferably offsite). Get into the habit of making regular backups. You can also divide the data on your computer into current and archived directories; back up the current data frequently, and the archived data occasionally.

=== LVM ===
[[LVM]] (Logical Volume Management) makes use of the [http://sources.redhat.com/dm/ device-mapper] feature of the Linux kernel to provide a system of partitions that is independent of the underlying disks' layout. This means you can extend and shrink partitions (subject to the filesystem you use allowing this) and add or remove partitions without worrying about whether you have enough contiguous space on a particular disk, without getting caught up in the problems of fdisking a disk that is in use (and wondering whether the kernel is using the old or the new partition table), and without having to move other partitions out of the way.

This is strictly an ease-of-management issue: it does not provide any additional security. However, it sits nicely with the other two technologies we are using.

Note that LVM is not used for the boot partition, because of the bootloader problem.
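For example, growing a logical volume and the filesystem on it later is a two-step operation. A minimal sketch, assuming an ext3/ext4 filesystem on the {{ic|VolGroupArray}} volume group created later in this article:

 # lvextend -L +10G /dev/VolGroupArray/lvhome
 # resize2fs /dev/VolGroupArray/lvhome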

{{tip|It is good practice to ensure that only the drives involved in the installation are attached while performing the installation.}}
{| border="1" width="100%" style="text-align:center;"
|width="150px" align="left" | '''LVM Logical Volumes'''
|{{ic|/}}
|{{ic|/var}}
|{{ic|/swap}}
|{{ic|/home}}
|}
{| border="1" width="100%" style="text-align:center;"
|width="150px" align="left" | '''LVM Volume Groups'''
|{{ic|/dev/VolGroupArray}}
|}
{| border="1" width="100%" style="text-align:center;"
|width="150px" align="left" | '''RAID Arrays'''
|{{ic|/dev/md0}}
|{{ic|/dev/md1}}
|}
{| border="1" width="100%" style="text-align:center;"
|width="150px" align="left" | '''Physical Partitions'''
|{{ic|/dev/sda1}}
|{{ic|/dev/sdb1}}
|{{ic|/dev/sdc1}}
|{{ic|/dev/sda2}}
|{{ic|/dev/sdb2}}
|{{ic|/dev/sdc2}}
|}
{| border="1" width="100%" style="text-align:center;"
|width="150px" align="left" | '''Hard Drives'''
|{{ic|/dev/sda}}
|{{ic|/dev/sdb}}
|{{ic|/dev/sdc}}
|}
=== Swap space ===
{{note|If you want extra performance, just let the kernel use distinct swap partitions, as it does striping by default.}}

Many tutorials treat the swap space differently, either by creating a separate RAID1 array or an LVM logical volume. Creating the swap space on a separate array is not intended to provide additional redundancy, but instead to prevent a corrupt swap space from rendering the system inoperable, which is more likely to happen when the swap space is located on the same partition as the root directory.
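If you take that route and use the three swap partitions directly instead of a swap array, giving them equal priority in {{ic|/etc/fstab}} lets the kernel stripe across them. A sketch only; adjust device names (or use UUIDs) to match your system:

 /dev/sda2  none  swap  defaults,pri=1  0  0
 /dev/sdb2  none  swap  defaults,pri=1  0  0
 /dev/sdc2  none  swap  defaults,pri=1  0  0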
 
=== MBR vs. GPT ===
{{Wikipedia|GUID Partition Table}}
The widespread [[Master Boot Record]] (MBR) partitioning scheme, dating from the early 1980s, imposed limitations which affect the use of modern hardware. [[GUID Partition Table]] (GPT) is a newer standard for the layout of the partition table, based on the [[Wikipedia:Unified Extensible Firmware Interface|UEFI]] specification derived from Intel. Although GPT provides a significant improvement over an MBR, it does require the additional step of creating a partition at the beginning of each disk for GRUB2 (see: [[GRUB2#GPT specific instructions|GPT specific instructions]]).
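For instance, such a BIOS boot partition could be created with {{ic|sgdisk}} before the other partitions. This is a sketch only; the partition number and size are illustrative, and it is not needed for the SYSLINUX-based setup used below:

 # sgdisk --new=1:0:+2M --typecode=1:ef02 /dev/sda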
  
 
=== Boot loader ===
This tutorial will use [[Syslinux|SYSLINUX]] instead of [[GRUB2]]. GRUB2, when used in conjunction with [[GUID Partition Table|GPT]], requires an additional [[GRUB2#GPT specific instructions|BIOS Boot Partition]]. Additionally, the [[DeveloperWiki:2011.08.19|2011.08.19]] Arch Linux installer does not support GRUB2.

GRUB2 supports the default style of metadata currently created by mdadm (i.e. 1.2) when combined with an initramfs, which in Arch Linux is generated by [[mkinitcpio]]. SYSLINUX only supports version 1.0, and therefore requires the {{ic|<nowiki>--metadata=1.0</nowiki>}} option.

Some boot loaders (e.g. [[GRUB]], [[LILO]]) will not support any 1.x metadata versions, and instead require the older version, 0.90. If you would like to use one of those boot loaders make sure to add the option {{ic|<nowiki>--metadata=0.90</nowiki>}} to the {{ic|/boot}} array during [[#RAID installation|RAID installation]].

== Installation ==
Obtain the latest installation media and boot the Arch Linux installer as outlined in the [[Beginners' Guide]], or alternatively, in the [[Official Arch Linux Install Guide]]. Follow the directions outlined there until you have configured your network.
  
 
==== Load kernel modules ====
Enter another TTY terminal by typing {{Keypress|Alt}}+{{Keypress|F2}}. Load the appropriate RAID (e.g. {{ic|raid0}}, {{ic|raid1}}, {{ic|raid5}}, {{ic|raid6}}, {{ic|raid10}}) and LVM (i.e. {{ic|dm-mod}}) modules. The following example makes use of RAID1 and RAID5.

  # modprobe raid1
  # modprobe raid5
  # modprobe dm-mod

=== Prepare the hard drives ===
{{note|If your hard drives are already prepared and all you want to do is activate RAID and LVM, jump to [[Installing_with_Software_RAID_or_LVM#Activate_existing_RAID_devices_and_LVM_volumes|Activate existing RAID devices and LVM volumes]]. This can be achieved with alternative partitioning software (see: [http://yannickloth.be/blog/2010/08/01/installing-archlinux-with-software-raid1-encrypted-filesystem-and-lvm2/ Article]).}}

Each hard drive will have a 100MB {{ic|/boot}} partition, a 2048MB {{ic|/swap}} partition, and a {{ic|/}} partition that takes up the remainder of the disk.

The boot partition must be RAID1, because GRUB does not have RAID drivers. Any other level will prevent your system from booting. Additionally, if there is a problem with one boot partition, the boot loader can boot normally from the other two partitions in the {{ic|/boot}} array. Finally, the partition you boot from must not be striped (i.e. RAID5, RAID0).

==== Install gdisk ====
Since most disk partitioning software does not support GPT (e.g. {{Pkg|fdisk}}, {{Pkg|sfdisk}}), you will need to install {{Pkg|gptfdisk}} to set the partition type of the boot loader partitions.

Update the [[pacman]] database:
  $ pacman-db-upgrade

Refresh the package list:
  $ pacman -Syy

Install {{Pkg|gptfdisk}}:
  $ pacman -S gdisk
==== Partition hard drives ====
We will use <code>gdisk</code> to create three partitions on each of the three hard drives (i.e. {{ic|/dev/sda}}, {{ic|/dev/sdb}}, {{ic|/dev/sdc}}):

     Name        Flags      Part Type  FS Type          [Label]        Size (MB)
  -------------------------------------------------------------------------------
     sda1        Boot        Primary   linux_raid_m                       100.00  # /boot
     sda2                    Primary   linux_raid_m                      2000.00  # /swap
     sda3                    Primary   linux_raid_m                     97900.00  # /

Open {{ic|gdisk}} with the first hard drive:
  $ gdisk /dev/sda

and type the following commands at the prompt:
# Add a new partition: {{Keypress|n}}
# Select the default partition number: {{Keypress|Enter}}
# Use the default for the first sector: {{Keypress|Enter}}
# For {{ic|sda1}} and {{ic|sda2}} type the appropriate size (i.e. {{ic|+100M}} and {{ic|+2048M}}). For {{ic|sda3}} just hit {{Keypress|Enter}} to select the remainder of the disk.
# Select {{ic|Linux RAID}} as the partition type: {{ic|fd00}}
# Write the table to disk and exit: {{Keypress|w}}

Repeat this process for {{ic|/dev/sdb}} and {{ic|/dev/sdc}} or use the alternate {{ic|sgdisk}} method below. You may need to reboot to allow the kernel to recognize the new tables.
 
{{note|Make sure to create the exact same partitions on each disk. If a group of partitions of different sizes is assembled to create a RAID partition it will work, but ''the redundant partition will be a multiple of the size of the smallest partition'', leaving the unallocated space to waste.}}

==== Clone partitions with sgdisk ====
If you are using GPT, then you can use {{ic|sgdisk}} to clone the partition table from {{ic|/dev/sda}} to the other two hard drives:
  $ sgdisk --backup=table /dev/sda
  $ sgdisk --load-backup=table /dev/sdb
  $ sgdisk --load-backup=table /dev/sdc

=== RAID installation ===
After creating the physical partitions, you are ready to set up the {{ic|/boot}}, {{ic|/swap}}, and {{ic|/}} arrays with {{ic|mdadm}}. It is an advanced tool for RAID management that will be used to create a {{ic|/etc/mdadm.conf}} within the installation environment.
  
Create the {{ic|/}} array at {{ic|/dev/md0}}:
  # mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[abc]3

Create the {{ic|/swap}} array at {{ic|/dev/md1}}:
  # mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sd[abc]2

{{note|If you plan on installing a boot loader that does not support the 1.x version of RAID metadata make sure to add the {{ic|<nowiki>--metadata=0.90</nowiki>}} option to the following command.}}

Create the {{ic|/boot}} array at {{ic|/dev/md2}}:
  # mdadm --create /dev/md2 --level=1 --raid-devices=3 --metadata=1.0 /dev/sd[abc]1
  
 
==== Synchronization ====
{{tip|If you want to avoid the initial resync with new hard drives add the {{ic|--assume-clean}} flag.}}

After you create a RAID volume, it will synchronize the contents of the physical partitions within the array. You can monitor the progress by refreshing the output of {{ic|/proc/mdstat}} ten times per second with:
  # watch -n .1 cat /proc/mdstat

{{tip|Follow the synchronization in another TTY terminal by typing {{Keypress|Alt}}+{{Keypress|F3}} and then executing the above command.}}

Further information about the arrays is accessible with:
  # mdadm --misc --detail /dev/md[012] | less

Once synchronization is complete the {{ic|State}} line should read {{ic|clean}}. Each device in the table at the bottom of the output should read {{ic|spare}} or {{ic|active sync}} in the {{ic|State}} column. {{ic|active sync}} means each device is actively in the array.

{{note|Since the RAID synchronization is transparent to the file-system you can proceed with the installation and reboot your computer when necessary.}}

==== Data Scrubbing ====
It is good practice to regularly run data scrubbing to check for and fix errors, especially on RAID 5 type volumes. See the [http://en.gentoo-wiki.com/wiki/RAID/Software#Data_Scrubbing Gentoo Wiki on Data Scrubbing] for details.

In short, the following will trigger a data scrub:
  # echo check >> /sys/block/mdX/md/sync_action

The progress can be watched with:
  # watch -n 1 cat /proc/mdstat

To stop a currently running data scrub safely:
  # echo idle >> /sys/block/mdX/md/sync_action

It is a good idea to set up a cron job as root to schedule a weekly scrub, as in the example below.
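For example, a root cron entry (e.g. via {{ic|crontab -e}}) along the following lines would start a check of {{ic|/dev/md0}} every Sunday at 01:00. A sketch only; adjust the array name and schedule to your setup:

 0 1 * * Sun echo check > /sys/block/md0/md/sync_action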
  
 
=== LVM installation ===
This section will convert the RAID arrays into physical volumes (PVs), combine those PVs into a volume group (VG), and then divide the VG into logical volumes (LVs) that will act like physical partitions (e.g. {{ic|/}}, {{ic|/var}}, {{ic|/home}}). If you did not understand that, make sure you read the [[LVM#Introduction|LVM Introduction]] section.

==== Create physical volumes ====
Make the RAIDs accessible to LVM by converting them into physical volumes (PVs):
  # pvcreate /dev/md0

{{note|This might fail if you are creating PVs on an existing Volume Group. If so you might want to add the {{ic|-ff}} option.}}

Confirm that LVM has added the PVs with:
  # pvdisplay

==== Create the volume group ====
Next step is to create a volume group (VG) on the PVs.

Create a volume group (VG) with the first PV:
  # vgcreate VolGroupArray /dev/md0

Confirm that LVM has added the VG with:
  # vgdisplay

==== Create logical volumes ====
Now we need to create logical volumes (LVs) on the VG, much like we would normally [[Beginners Guide#Prepare Hard Drive|prepare a hard drive]]. In this example we will create separate {{ic|/}}, {{ic|/var}}, {{ic|/swap}}, and {{ic|/home}} LVs. The LVs will be accessible as {{ic|/dev/mapper/VolGroupArray-<lvname>}} or {{ic|/dev/VolGroupArray/<lvname>}}.

Create a {{ic|/}} LV:
  # lvcreate -L 20G VolGroupArray -n lvroot

Create a {{ic|/var}} LV:
  # lvcreate -L 15G VolGroupArray -n lvvar

{{note|If you would like to add the swap space to the LVM create a {{ic|/swap}} LV with the {{ic|-C y}} option, which creates a contiguous partition, so that your swap space does not get partitioned over one or more disks nor over non-contiguous physical extents:
  # lvcreate -C y -L 2G VolGroupArray -n lvswap
}}
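If you format and enable that swap LV manually rather than through the installer, a minimal sketch would be:

 # mkswap /dev/VolGroupArray/lvswap
 # swapon /dev/VolGroupArray/lvswap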
  
Create a {{ic|/home}} LV that takes up the remainder of space in the VG:
  # lvcreate -l +100%FREE VolGroupArray -n lvhome

Confirm that LVM has created the LVs with:
  # lvdisplay

{{tip|You can start out with relatively small logical volumes and expand them later if needed. For simplicity, leave some free space in the volume group so there is room for expansion.}}

=== Update RAID configuration ===
Since the installer builds the initrd using {{ic|/etc/mdadm.conf}} in the target system, you should update that file with your RAID configuration. The original file can simply be deleted because it contains comments on how to fill it correctly, and that is something mdadm can do automatically for you. So let us delete the original and have mdadm create a new one with the current setup:
  # mdadm --examine --scan > /etc/mdadm.conf

{{Note|Read the note in the [[RAID#Update configuration file|Update configuration file]] section about ensuring that you write to the correct {{ic|mdadm.conf}} file from within the installer.}}

=== Prepare hard drive ===
Follow the directions outlined in the [[Beginners' Guide#Installation|Installation]] section until you reach the ''Prepare Hard Drive'' section. Skip the first two steps and navigate to the ''Manually Configure block devices, filesystems and mountpoints'' page. Remember to only configure the PVs (e.g. {{ic|/dev/mapper/VolGroupArray-lvhome}}) and '''not''' the actual disks (e.g. {{ic|/dev/sda1}}).

{{warning|{{ic|mkfs.xfs}} will not align the chunk size and stripe size for optimum performance (see: [http://www.linuxpromagazine.com/Issues/2009/108/RAID-Performance Optimum RAID]).}}

=== Configure system ===
{{warning|Follow the steps in the ''Important'' section of the [[LVM]] article before proceeding with the installation.}}

==== /etc/mkinitcpio.conf ====
[[mkinitcpio]] can use a hook to assemble the arrays on boot. For more information see [[mkinitcpio#Using RAID|mkinitcpio Using RAID]].
# Add the {{ic|dm_mod}} module to the {{ic|MODULES}} list in {{ic|/etc/mkinitcpio.conf}}.
# Add the {{ic|mdadm_udev}} and {{ic|lvm2}} hooks to the {{ic|HOOKS}} list in {{ic|/etc/mkinitcpio.conf}} after {{ic|udev}} (see the example below).
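The relevant lines in {{ic|/etc/mkinitcpio.conf}} would then look something like the following. This is a sketch only; keep whatever other modules and hooks your configuration already lists:

 MODULES="dm_mod"
 HOOKS="base udev autodetect pata scsi sata mdadm_udev lvm2 filesystems usbinput fsck"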
  
 
=== Conclusion ===
Check the progress of the synchronization by returning to TTY3: {{Keypress|Alt}}+{{Keypress|F3}}.

Once it is complete you can safely reboot your machine:
  # reboot

=== Install the bootloader on the Alternate Boot Drives ===
Once you have successfully booted your new system for the first time, you will want to install the bootloader onto the other two disks (or on the other disk if you have only 2 HDDs) so that, in the event of disk failure, the system can be booted from any of the remaining drives (e.g. by switching the boot order in the BIOS). The method depends on the bootloader system you're using:

==== Syslinux ====
Log in to your new system as root and do:
  # /usr/sbin/syslinux-install_update -iam

Syslinux will deal with installing the bootloader to the MBR on each of the members of the RAID array:
  Detected RAID on /boot - installing Syslinux with --raid
  Syslinux install successful
  Attribute Legacy Bios Bootable Set - /dev/sda1
  Attribute Legacy Bios Bootable Set - /dev/sdb1
  Installed MBR (/usr/lib/syslinux/gptmbr.bin) to /dev/sda
  Installed MBR (/usr/lib/syslinux/gptmbr.bin) to /dev/sdb

==== Grub Legacy ====
Log in to your new system as root and do:
  # grub
  grub> device (hd0) /dev/sdb
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> device (hd0) /dev/sdc
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> quit

=== Archive your Filesystem Partition Scheme ===
Now that you are done, it is worth taking a second to archive off the partition state of each of your drives. This guarantees that it will be trivially easy to replace/rebuild a disk in the event that one fails. You do this with the {{ic|sfdisk}} tool and the following steps:
  # mkdir /etc/partitions
  # sfdisk --dump /dev/sda >/etc/partitions/disc0.partitions
  # sfdisk --dump /dev/sdb >/etc/partitions/disc1.partitions
  # sfdisk --dump /dev/sdc >/etc/partitions/disc2.partitions
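To restore a dump onto a replacement disk later, feed it back to {{ic|sfdisk}}. A sketch only; double-check the target device before writing, and note that for GPT disks the {{ic|sgdisk}} backup/load-backup commands shown earlier are the safer alternative:

 # sfdisk /dev/sda < /etc/partitions/disc0.partitions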
  
 
== Management ==
For further information on how to maintain your software RAID or LVM, review the [[RAID]] and [[LVM]] articles.

== Mounting from a Live CD ==
If you want to mount your RAID partition from a Live CD, use:
  # mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3

(or whatever mdX and drives apply to you)

{{Note|Live CDs like [http://www.sysresccd.org/Main_Page SystemrescueCD] assemble the RAID arrays automatically at boot time if you used the partition type {{ic|fd}} when creating the array.}}
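If the array holds LVM volumes as in this article, they also need to be activated before they can be mounted. A minimal sketch, assuming the volume group name used above:

 # modprobe dm-mod
 # vgscan
 # vgchange -ay
 # mount /dev/VolGroupArray/lvroot /mnt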

== Removing a device, stopping the array ==
You can remove a device from the array after you mark it as faulty:
  # mdadm --fail /dev/md0 /dev/sdxx

Then you can remove it from the array:
  # mdadm -r /dev/md0 /dev/sdxx

To remove a device permanently (for example, if you want to use it individually from now on), issue the two commands described above and then:
  # mdadm --zero-superblock /dev/sdxx

After this you can use the disk as you did before creating the array.

{{Warning|If you reuse the removed disk without zeroing the superblock you will '''LOSE''' all your data on the next boot (mdadm will try to use it as part of the RAID array). '''DO NOT''' issue this command on linear or RAID0 arrays or you will '''LOSE''' all the data on the RAID array.}}

To stop using an array:
# Unmount the target array.
# Repeat the three commands described at the beginning of this section on each device.
# Stop the array with: {{ic|mdadm --stop /dev/md0}}
# Remove the corresponding line from {{ic|/etc/mdadm.conf}}.

== Adding a device to the array ==
Adding new devices with mdadm can be done on a running system with the devices mounted. Partition the new device ({{ic|/dev/sdx}}) using the same layout as one of those already in the array ({{ic|/dev/sda}}):
  # sfdisk -d /dev/sda > table
  # sfdisk /dev/sdx < table

Assemble the RAID arrays if they are not already assembled:
  # mdadm --assemble /dev/md2 /dev/sda1 /dev/sdb1 /dev/sdc1
  # mdadm --assemble /dev/md1 /dev/sda2 /dev/sdb2 /dev/sdc2
  # mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3

First, add the new device as a spare device to all of the arrays. We will assume you have followed the guide and use separate arrays for {{ic|/boot}} RAID 1 ({{ic|/dev/md2}}), swap RAID 1 ({{ic|/dev/md1}}) and root RAID 5 ({{ic|/dev/md0}}):
  # mdadm --add /dev/md2 /dev/sdx1
  # mdadm --add /dev/md1 /dev/sdx2
  # mdadm --add /dev/md0 /dev/sdx3

This should not take long for mdadm to do. Check the progress with:
  # cat /proc/mdstat

Check that the device has been added with the command:
  # mdadm --misc --detail /dev/md0

It should be listed as a spare device.

Tell mdadm to grow the arrays from 3 devices to 4 (or however many devices you want to use):
  # mdadm --grow -n 4 /dev/md2
  # mdadm --grow -n 4 /dev/md1
  # mdadm --grow -n 4 /dev/md0

This will probably take several hours. You need to wait for it to finish before you can continue. Check the progress in {{ic|/proc/mdstat}}. The RAID 1 arrays should automatically sync {{ic|/boot}} and swap, but you need to install the boot loader on the MBR of the new device manually (see [[#Install the bootloader on the Alternate Boot Drives|Install the bootloader on the Alternate Boot Drives]]).

The rest of this guide will explain how to resize the underlying LVM and filesystem on the RAID 5 array.
{{Note|It is unclear whether this can be done with the volumes mounted; the following assumes you are booting from a live CD/USB.}}

If you have encrypted your LVM volumes with LUKS, you need to resize the LUKS volume first. Otherwise, ignore this step:
  # cryptsetup luksOpen /dev/md0 cryptedlvm
  # cryptsetup resize cryptedlvm

Activate the LVM volume groups:
  # vgscan
  # vgchange -ay

Resize the LVM physical volume {{ic|/dev/md0}} (or e.g. {{ic|/dev/mapper/cryptedlvm}} if using LUKS) to take up all the available space on the array. You can list physical volumes with the command {{ic|pvdisplay}}.
  # pvresize /dev/md0

Resize the logical volume you wish to allocate the new space to. You can list them with {{ic|lvdisplay}}. Assuming you want to put it all into your {{ic|/home}} volume:
  # lvresize -l +100%FREE /dev/VolGroupArray/lvhome

To resize the filesystem to allocate the new space, use the appropriate tool. If using ext2 you can resize a mounted filesystem with {{ic|ext2online}}. For ext3 you can use {{ic|resize2fs}} or {{ic|ext2resize}}, but not while mounted.

You should check the filesystem before resizing:
  # e2fsck -f /dev/VolGroupArray/lvhome
  # resize2fs /dev/VolGroupArray/lvhome

Read the manuals for {{ic|lvresize}} and {{ic|resize2fs}} if you want to customize the sizes for the volumes.

== Troubleshooting ==
If you are getting errors when you reboot about "invalid raid superblock magic" and you have additional hard drives other than the ones you installed to, check that your hard drive order is correct. During installation, your RAID devices may be hdd, hde and hdf, but during boot they may be hda, hdb and hdc. Adjust your kernel line in {{ic|/boot/grub/menu.lst}} accordingly.

=== Recovering from a broken or missing drive in the RAID ===
You might also get the above mentioned error when one of the drives breaks for whatever reason. In that case you will have to force the RAID to start even with one disk short. Type this (change where needed):
  # mdadm --manage /dev/md0 --run

Now you should be able to mount it again with something like this (if you had it in fstab):
  # mount /dev/md0

The RAID should now be working again and available to use, although with one disk short. To add a replacement disk, partition it as described above in [[#Partition hard drives|Partition hard drives]]. Once that is done you can add the new disk to the RAID:
  # mdadm --manage --add /dev/md0 /dev/sdd1

If you type:
  # cat /proc/mdstat
you will probably see that the RAID is now active and rebuilding.

You might also want to update your {{ic|/etc/mdadm.conf}} file by typing:
  # mdadm --examine --scan > /etc/mdadm.conf

That should be about all the steps required to recover your RAID after losing a drive to, for example, partition table corruption.

== Benchmarking ==
There are several tools for benchmarking a RAID. The most notable improvement is the speed increase when multiple threads are reading from the same RAID volume.

[http://sourceforge.net/projects/tiobench/ Tiobench] specifically benchmarks these performance improvements by measuring fully-threaded I/O on the disk.

[http://www.coker.com.au/bonnie++/ Bonnie++] tests database-type access to one or more files, and the creation, reading, and deletion of small files, which can simulate the usage of programs such as Squid, INN, or Maildir-format e-mail. The enclosed [http://www.coker.com.au/bonnie++/zcav/ ZCAV] program tests the performance of different zones of a hard drive without writing any data to the disk.

{{ic|hdparm}} should '''not''' be used to benchmark a RAID, because it provides very inconsistent results.
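For example, a simple Bonnie++ run against a scratch directory on the array (here a hypothetical {{ic|/home/benchmark}}) might look like this; {{ic|-u}} sets the user to run the test as when invoked as root:

 # bonnie++ -d /home/benchmark -u root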

== Additional Resources ==
=== LVM ===
* [http://sourceware.org/lvm2/ LVM2 Resource Page] on SourceWare.org

=== Software RAID ===
* [http://en.gentoo-wiki.com/wiki/RAID/Software RAID/Software] on the [http://en.gentoo-wiki.com/wiki/Main_Page Gentoo Wiki]
* [http://en.gentoo-wiki.com/wiki/Software_RAID_Install Software RAID Install] on the [http://en.gentoo-wiki.com/wiki/Main_Page Gentoo Wiki]
* [http://www.gentoo.org/doc/en/articles/software-raid-p1.xml Software RAID in the new Linux 2.4 kernel, Part 1] and [http://www.gentoo.org/doc/en/articles/software-raid-p2.xml Part 2] in the [http://www.gentoo.org/doc/en/index.xml Gentoo Linux Docs]
* [http://raid.wiki.kernel.org/index.php/Linux_Raid Linux RAID wiki entry] on [http://www.kernel.org/ The Linux Kernel Archives]
* [http://linux-101.org/howto/arch-linux-software-raid-installation-guide Arch Linux software RAID installation guide] on [http://linux-101.org/ Linux 101]
* [http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ch-raid.html Chapter 15: Redundant Array of Independent Disks (RAID)] of the Red Hat Enterprise Linux 6 Documentation
* [http://tldp.org/FAQ/Linux-RAID-FAQ/x37.html Linux-RAID FAQ] on the [http://tldp.org/ Linux Documentation Project]

==== Encryption ====
* [http://www.shimari.com/dm-crypt-on-raid/ Linux/Fedora: Encrypt /home and swap over RAID with dm-crypt] by Justin Wells

=== RAID & LVM ===
* [http://yannickloth.be/blog/2010/08/01/installing-archlinux-with-software-raid1-encrypted-filesystem-and-lvm2/ Setup Arch Linux on top of raid, LVM2 and encrypted partitions] by Yannick Loth
* [http://stackoverflow.com/questions/237434/raid-verses-lvm RAID vs. LVM] on [[Wikipedia:Stack Overflow|Stack Overflow]]
* [http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml Gentoo Linux x86 with Software Raid and LVM2 Quick Install Guide]

=== Forum threads ===
* 2011-09-08 - Arch Linux - [https://bbs.archlinux.org/viewtopic.php?id=126172 LVM & RAID (1.2 metadata) + SYSLINUX]
* 2011-08-28 - Arch Linux - [https://bbs.archlinux.org/viewtopic.php?id=125445 GRUB and GRUB2]
* 2011-08-03 - Arch Linux - [https://bbs.archlinux.org/viewtopic.php?id=123698 Can't install grub2 on software RAID]
* 2011-07-29 - Gentoo - [http://forums.gentoo.org/viewtopic-t-888624-start-0.html Use RAID metadata 1.2 in boot and root partition]
* 2011-04-20 - Arch Linux - [https://bbs.archlinux.org/viewtopic.php?pid=965357 Software RAID and LVM questions]
* 2011-03-12 - Arch Linux - [https://bbs.archlinux.org/viewtopic.php?id=114965 Some newbie questions about installation, LVM, grub, RAID]
