Software RAID and LVM
This article provides an example of how to install and configure Arch Linux with software RAID and the Logical Volume Manager (LVM).

Related articles: Installing with Fake RAID, Convert a single drive system to RAID
- 1 Introduction
- 2 Installation
- 2.1 Load kernel modules
- 2.2 Prepare the hard drives
- 2.3 RAID installation
- 2.4 LVM installation
- 2.5 Update RAID configuration
- 2.6 Prepare hard drive
- 2.7 Configure system
- 2.8 Conclusion
- 2.9 Install Grub on the Alternate Boot Drives
- 2.10 Archive your Filesystem Partition Scheme
- 3 Management
- 4 Additional Resources
Introduction

Although RAID and LVM may seem like analogous technologies, they each present unique features. This article uses an example with three similar 1TB SATA hard drives. The article assumes that the drives are accessible as /dev/sda, /dev/sdb, and /dev/sdc. If you are using IDE drives, for maximum performance make sure that each drive is a master on its own separate channel.
|LVM Logical Volumes||lvroot (/)||lvvar (/var)||lvhome (/home)|
|LVM Volume Group||VolGroupArray|
Many tutorials treat the swap space differently, either by creating a separate RAID1 array or an LVM logical volume. Creating the swap space on a separate array is not intended to provide additional redundancy, but instead, to prevent a corrupt swap space from rendering the system inoperable, which is more likely to happen when the swap space is located on the same partition as the root directory.
MBR vs. GPT
The widespread Master Boot Record (MBR) partitioning scheme, dating from the early 1980s, imposes limitations that affect the use of modern hardware. The GUID Partition Table (GPT) is a newer standard for the layout of the partition table, based on the UEFI specification that originated with Intel. Although GPT provides a significant improvement over MBR, it does require the additional step of creating a partition at the beginning of each disk for GRUB2 (see: GPT specific instructions).
This tutorial will use SYSLINUX instead of GRUB2. GRUB2 when used in conjunction with GPT requires an additional BIOS Boot Partition. Additionally, the 2011.08.19 Arch Linux installer does not support GRUB2.
GRUB2 supports the current default style of metadata created by mdadm (i.e. 1.2) when combined with an initramfs, which in Arch Linux is generated with mkinitcpio. SYSLINUX only supports version 1.0, and therefore requires the --metadata=1.0 option.
Some boot loaders (e.g. GRUB, LILO) will not support any 1.x metadata versions, and instead require the older version, 0.90. If you would like to use one of those boot loaders, make sure to add the option --metadata=0.90 to the /boot array during RAID installation.
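If you are unsure which metadata version an existing array uses, it can be read from the output of mdadm --detail. A minimal sketch, run here against a hypothetical sample of that output rather than a live array; on a real system you would pipe mdadm --detail /dev/md2 through the same awk filter:

```shell
# Extract the "Version" field from mdadm --detail output.
# The sample text below is hypothetical; on a live system run:
#   mdadm --detail /dev/md2 | awk '/Version/ {print $3}'
sample='/dev/md2:
        Version : 1.0
     Raid Level : raid1
   Raid Devices : 3'
version=$(printf '%s\n' "$sample" | awk '/Version/ {print $3}')
echo "$version"
```

An array created with --metadata=1.0, like this guide's /boot array, reports 1.0 here.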
Installation

Obtain the latest installation media and boot the Arch Linux installer as outlined in the Beginners' Guide, or alternatively, in the Official Arch Linux Install Guide. Follow the directions outlined there until you have configured your network.
Load kernel modules
Switch to another virtual console by typing Alt+F2. Load the appropriate RAID (e.g. raid0, raid1, raid5, raid6, raid10) and LVM (i.e. dm-mod) modules. The following example makes use of RAID1 and RAID5.
# modprobe raid1
# modprobe raid5
# modprobe dm-mod
Prepare the hard drives
The boot partition must be RAID1, because GRUB does not have RAID drivers; any other RAID level will prevent your system from booting. Additionally, if there is a problem with one boot partition, the boot loader can boot normally from the other two partitions in the /boot array. Finally, the partition you boot from must not be striped (i.e. RAID5, RAID0).
Since most disk partitioning software does not support GPT (e.g. fdisk, cfdisk), you will need to install gdisk to set the partition type of the boot loader partitions.
Refresh the package list:

# pacman -Syy
Install gdisk:

# pacman -S gdisk
Partition hard drives
Name  Flags  Part Type  FS Type       [Label]  Size (MB)
--------------------------------------------------------
sda1  Boot   Primary    linux_raid_m             100.00   # /boot
sda2         Primary    linux_raid_m            2000.00   # /swap
sda3         Primary    linux_raid_m           97900.00   # /
Open gdisk with the first hard drive:

# gdisk /dev/sda
and type the following commands at the prompt:
- Add a new partition: n
- Select the default partition number: Enter
- Use the default for the first sector: Enter
- For sda1 and sda2 type the appropriate size (i.e. +100M and +2000M). For sda3 just hit Enter to select the remainder of the disk.
- Select Linux RAID as the partition type: fd00
- Write the table to disk and exit: w
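The interactive session above can also be scripted with sgdisk, which ships in the same package as gdisk. A hedged sketch: here the command is only assembled into a variable and printed for review; drop the echo indirection and run it directly to actually partition the disk (fd00 is gdisk's type code for Linux RAID):

```shell
# Build a non-interactive sgdisk command creating the same three
# Linux RAID partitions (+100M for /boot, +2000M for swap, rest for /).
disk=/dev/sda   # first drive; the clone step below copies it to sdb and sdc
cmd="sgdisk -n 1:0:+100M -t 1:fd00 -n 2:0:+2000M -t 2:fd00 -n 3:0:0 -t 3:fd00 $disk"
echo "$cmd"     # review before running it for real
```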
Clone partitions with sgdisk
# sgdisk --backup=table /dev/sda
# sgdisk --load-backup=table /dev/sdb
# sgdisk --load-backup=table /dev/sdc

Note that restoring the same backup leaves all three disks with identical disk and partition GUIDs; sgdisk -G can be run on the clones to randomize them.
RAID installation

After creating the physical partitions, you are ready to set up the /boot, swap, and / arrays with mdadm. It is an advanced tool for RAID management that will be used to create the /etc/mdadm.conf within the installation environment.
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sd[abc]3
# mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sd[abc]2
# mdadm --create /dev/md2 --level=1 --raid-devices=3 --metadata=1.0 /dev/sd[abc]1
After you create a RAID volume, it will synchronize the contents of the physical partitions within the array. You can monitor the progress by refreshing the output of /proc/mdstat ten times per second with:
# watch -n .1 cat /proc/mdstat
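Instead of watching by hand, a small helper (hypothetical, not part of mdadm) can poll until no resync line remains; the file argument defaults to /proc/mdstat so the function can also be pointed at a saved copy:

```shell
# Block until no array listed in the given mdstat file is resyncing.
wait_for_sync() {
    mdstat=${1:-/proc/mdstat}
    while grep -qs 'resync' "$mdstat"; do
        sleep 10    # poll every ten seconds
    done
}
```

Calling wait_for_sync with no argument on the live system returns once all arrays are idle.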
Further information about the arrays is accessible with:
# mdadm --misc --detail /dev/md0 | less
Once synchronization is complete, the State line should read clean. Each device in the table at the bottom of the output should read spare or active sync in the State column. active sync means each device is actively in the array.
LVM installation

This section will convert the RAID arrays into physical volumes (PVs), combine those PVs into a volume group (VG), and divide the VG into logical volumes (LVs) that will act like physical partitions (e.g. /, /var, /home). If you did not understand that, make sure you read the LVM Introduction section.
Create physical volumes
Make the RAIDs accessible to LVM by converting them into physical volumes (PVs):
# pvcreate /dev/md0
Confirm that LVM has added the PV with:

# pvdisplay
Create the volume group
Next, create a volume group (VG) on the PV:
# vgcreate VolGroupArray /dev/md0
Confirm that LVM has added the VG with:

# vgdisplay
Create logical volumes
Now we need to create logical volumes (LVs) on the VG, much like we would normally prepare a hard drive. In this example we will create separate / (lvroot), /var (lvvar), and /home (lvhome) LVs. The LVs will be accessible as /dev/mapper/VolGroupArray-<lvname> or /dev/VolGroupArray/<lvname>.
Create the / LV:
# lvcreate -L 20G VolGroupArray -n lvroot
Create the /var LV:
# lvcreate -L 15G VolGroupArray -n lvvar
Create a /home LV that takes up the remainder of space in the VG:
# lvcreate -l +100%FREE VolGroupArray -n lvhome
Confirm that LVM has created the LVs with:

# lvdisplay
Update RAID configuration
Since the installer builds the initrd using /etc/mdadm.conf in the target system, you should update that file with your RAID configuration. The original file can simply be overwritten, because it only contains explanatory comments, and mdadm can generate the needed configuration automatically. So let us have mdadm create a new one with the current setup:
# mdadm --examine --scan > /mnt/etc/mdadm.conf
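The generated file consists of one ARRAY line per array. A hypothetical example of what it might contain for the setup above (the UUIDs shown are made up; yours will differ):

```
ARRAY /dev/md0 metadata=1.2 UUID=f9e52e86:7eff27a5:c0d62d0b:8a8a7c4d
ARRAY /dev/md1 metadata=1.2 UUID=0c2d51b3:9d7a11e0:5f3c44a1:2b6e8d90
ARRAY /dev/md2 metadata=1.0 UUID=6d1a2b3c:4e5f6071:8293a4b5:c6d7e8f9
```

Note that /dev/md2 reports metadata 1.0, matching the --metadata=1.0 option used when it was created.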
Prepare hard drive
Follow the directions outlined in the Installation section until you reach the Prepare Hard Drive section. Skip the first two steps and navigate to the Manually Configure block devices, filesystems and mountpoints page. Remember to only configure the RAID and LVM devices (e.g. /dev/mapper/VolGroupArray-lvroot) and not the actual disks (e.g. /dev/sda3).
Configure system

- Add the dm_mod module to the MODULES list in /etc/mkinitcpio.conf.
- Add the mdadm_udev and lvm2 hooks to the HOOKS list in /etc/mkinitcpio.conf after udev.
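After the two edits above, the relevant lines of /etc/mkinitcpio.conf might look roughly like this excerpt (illustrative only; keep whatever other modules and hooks your generated file already lists):

```
MODULES="dm_mod"
HOOKS="base udev mdadm_udev lvm2 autodetect filesystems"
```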
Conclusion

Once it is complete, you can safely reboot your machine:

# reboot
Install Grub on the Alternate Boot Drives
Once you have successfully booted your new system for the first time, you will want to install Grub onto the other two disks (or on the other disk if you have only 2 HDDs) so that, in the event of disk failure, the system can be booted from another drive. Log in to your new system as root and do:
# grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdc
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
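Legacy GRUB also accepts the same commands on standard input via its --batch mode, which is convenient for scripting. A sketch that only assembles and prints the command script; on the real system you would pipe it into grub --batch:

```shell
# The interactive session above, expressed as a batch script.
grub_script=$(cat <<'EOF'
device (hd0) /dev/sdb
root (hd0,0)
setup (hd0)
device (hd0) /dev/sdc
root (hd0,0)
setup (hd0)
quit
EOF
)
printf '%s\n' "$grub_script"   # review, then: printf '%s\n' "$grub_script" | grub --batch
```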
Archive your Filesystem Partition Scheme
Now that you are done, it is worth taking a moment to archive the partition state of each of your drives. This makes it trivially easy to replace/rebuild a disk in the event that one fails. You do this with the sfdisk tool and the following steps:
# mkdir /etc/partitions
# sfdisk --dump /dev/sda >/etc/partitions/disc0.partitions
# sfdisk --dump /dev/sdb >/etc/partitions/disc1.partitions
# sfdisk --dump /dev/sdc >/etc/partitions/disc2.partitions
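When a disk does fail, the archived dump can be written straight back onto the replacement drive with sfdisk. A hedged sketch with a hypothetical DRY_RUN guard, so the command is only printed until you are on the real system with the blank replacement disk attached:

```shell
# Restore an archived partition table onto a replacement disk.
DRY_RUN=1    # set to 0 on the real system to actually write the table
restore_table() {  # usage: restore_table <dumpfile> <disk>
    if [ "$DRY_RUN" = 1 ]; then
        echo "sfdisk $2 < $1"
    else
        sfdisk "$2" < "$1"
    fi
}
restore_table /etc/partitions/disc1.partitions /dev/sdb
```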
Additional Resources

- Setup Arch Linux on top of raid, LVM2 and encrypted partitions by Yannick Loth
- RAID vs. LVM on Stack Overflow
- What is better LVM on RAID or RAID on LVM? on Server Fault
- Managing RAID and LVM with Linux (v0.5) by Gregory Gulik
- Gentoo Linux x86 with Software Raid and LVM2 Quick Install Guide
- 2011-09-08 - Arch Linux - LVM & RAID (1.2 metadata) + SYSLINUX
- 2011-04-20 - Arch Linux - Software RAID and LVM questions
- 2011-03-12 - Arch Linux - Some newbie questions about installation, LVM, grub, RAID