Installing with Software RAID or LVM

Disclaimer

Warning: Installing a system with RAID is a complex process. Anything could go wrong. You could make a mistake, I could make a mistake, there could be a bug in something. Back up all your data first. Make sure only the drives involved in the installation are attached while doing the install. You've been warned!

Also note that this document is up-to-date with all "Archisms" as of 2008.06 'Overlord'. It may not be applicable to previous releases of Arch Linux.

RAID

RAID (Redundant Array of Independent Disks) is designed to prevent data loss in the event of a hard disk failure. There are different "levels" of RAID. RAID 0 (striping) isn't really RAID at all, because it provides no redundancy; it does, however, provide a speed benefit. RAID 0 can be a reasonable choice for swap on a desktop, where the speed increase may be worth the risk of a crash if one of your drives fails, but this walkthrough uses RAID 1 for swap so that the machine keeps its swap state even if a drive dies. On a server, you'd almost certainly want RAID 1 or RAID 5. The size of a RAID 0 array block device is the size of the smallest component partition times the number of component partitions.

RAID 1 is the most straightforward RAID level: straight mirroring. As with other RAID levels, it only makes sense if the partitions are on different physical disk drives. If one of those drives fails, the block device provided by the RAID array will continue to function as normal. We'll be using RAID 1 for the boot and swap partitions (the root partition will be RAID 5). Note that RAID 1 is the only option for the boot partition, because bootloaders (which read the boot partition) don't understand RAID, but a RAID 1 component partition can be read like a normal partition. The size of a RAID 1 array block device is the size of the smallest component partition.

RAID 5 is the only other RAID level you're likely to want. It requires 3 or more physical drives, and provides the redundancy of RAID 1 combined with the speed and size benefits of RAID 0. RAID 5 uses striping, like RAID 0, but also stores parity blocks distributed across each member disk. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 5 can withstand the loss of one member disk.
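To make the arithmetic concrete with the partitions used later in this guide (three drives, each contributing a 100MB, a 2048MB and a ~78GB partition): RAID 1 over the three 100MB partitions yields a 100MB array, RAID 1 over the three 2048MB partitions yields a 2048MB array, and RAID 5 over the three ~78GB partitions yields roughly 156GB, with the equivalent of one partition spent on parity. RAID 0 over those same ~78GB partitions would yield roughly 234GB, but with no redundancy at all.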

ATTENTION: Having RAID does not mean you don't need backups - read the CAVEATS section below!

LVM

LVM (Logical Volume Management) makes use of the device-mapper feature of the Linux kernel. It provides a way of specifying partitions independently of the layout of the underlying disks. What this means for you is that you can extend and shrink partitions (subject to your filesystem allowing this) and add and remove partitions without worrying about whether you have enough contiguous space on a particular disk, without getting caught up in the problems of fdisking a disk that is in use (and wondering whether the kernel is using the old or the new partition table), and without having to move other partitions out of the way.

This is strictly an ease-of-management issue: it doesn't provide any additional security. However, it sits nicely alongside RAID.

Note that we're not using LVM for the boot partition (because of the bootloader problem).

CAVEATS

Security (redundancy)

Again, RAID does not provide a guarantee that your data is safe. If there is a fire, if your computer is stolen or if you have multiple hard drive failures, RAID won't protect you. So make backups. Whether you use tape drives, DVDs, CDROMs or another computer, keep a copy of your data out of your computer (and preferably offsite) and keep it up to date. Get into the habit of making regular backups. If you organize the data on your computer in a way that separates things you are currently working on from "archived" things that are unlikely to change, you can back up the "current" stuff frequently, and the "archived" stuff occasionally.

General Approach

For starters, note that this document seeks primarily to give you a good example walkthrough of how to install Arch with Software RAID or LVM support for a typical case. It won't try to explain all the possible things you can do -- it's more to give you an example of something that will work that you can then tweak to your own purposes.

In this example, the machine I'm using will have three similar IDE hard drives, at least 80GB each in size, installed as primary master, primary slave, and secondary master, with my installation CD-ROM drive as the secondary slave. I will assume these can be reached as /dev/sda, /dev/sdb, and /dev/sdc, and that the cdrom drive is /dev/cdrom.

We'll create a 100MB /boot partition, a 2048MB (2GB) swap partition and a ~ 78GB root partition using LVM. The boot and swap partitions will be RAID1, while the root partition will be RAID5. Why RAID1? For boot, it's so you can boot the kernel from grub (which has no RAID drivers!), and for swap, it's for redundancy, so that your machine will not lose its swap state even if 1 or 2 drives fail.

Each RAID1 redundant partition will have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the size of a single one of these physical partitions. A RAID1 redundant partition with 3 physical partitions can lose any two of its physical partitions and still function.

Each RAID5 redundant partition will also have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the combined size of two of these physical partitions, with the third drive being consumed to provide parity information. A RAID5 redundant partition with 3 physical partitions can lose any one of its physical partitions and still function.

Get the Arch Installer CD

Please note that in order to use LVM, you need the lvm2 and device-mapper packages installed; otherwise you won't be able to see any LVM partitions after reboot until you install those packages. Note that the Arch 0.7.1 Base installer does not contain these packages, but the Arch 0.7.1 Full installer does. So if you're going to use LVM, you'll need to download the bigger ISO. My example assumes you're using the Full installer; the changes should be minimal if you wish to use the Base installer instead.

Outline

Just to give you an idea of how all this will work, I'll outline the steps. The details for these will be filled in below.

  1. Boot the Installer CD
  2. Partition the Hard Drives
  3. Create the RAID Redundant Partitions
  4. Create and Mount the Main Filesystems
  5. Setup LVM and Create the / LVM Volume
  6. Install and Configure Arch
  7. Install Grub on the Primary Hard Drive
  8. Unmount Filesystems and Reboot
  9. Install Grub on the Alternate Boot Drives
  10. Archive your Filesystem Partition Scheme

Procedure

Boot the Installer CD

First, load all your drives in the machine. Then boot the Arch Linux 0.7 Full installation CD.

At the syslinux boot prompt, hit enter: we want to use the SCSI kernel, which has support for RAID and LVM built in.

So far, this is easy. Don't worry, it gets harder.

Partition the Hard Drives

If your hard drives are already prepared and all you want to do is activate RAID and LVM, jump to Activate existing RAID devices and LVM volumes.

We'll use cfdisk to do this partitioning. We want to create 3 partitions on each of the three drives:

Partition 1 (/boot): 100MB, type FD, bootable
Partition 2 (swap): 2048MB, type FD
Partition 3 (LVM): <Rest of the drive>, type FD

Note that in general, in cfdisk, you can use the first letter of each bracketed option to select it; however, this is not true for the Write command: for that one you have to hold Shift as well.

First run:

# cfdisk /dev/sda

Create each partition in order:

  1. Select New.
  2. Hit Enter to make it a Primary partition.
  3. Type the appropriate size (in MB), or for Partition 3, just hit enter to select the remainder of the drive.
  4. Hit Enter to choose to place the partition at the Beginning.
  5. Select Type, hit enter to see the second page of the list, and then type fd for the Linux RAID Autodetect type.
  6. For Partition 1 on each drive, select Bootable.
  7. Hit down arrow (selecting the remaining free space) to go on to the next partition to be created.

When you're done, select Write, and confirm y-e-s that you want to write the partition information to disk.

Then select Quit.

Repeat this for the other two drives:

# cfdisk /dev/sdb
# cfdisk /dev/sdc

Create exactly the same partitions on each disk. If a group of partitions of different sizes is assembled into a redundant RAID partition, it will work, but the capacity of the redundant partition is determined by the smallest member, leaving the rest of the allocated drive space to waste.
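If you'd rather not repeat the cfdisk dialog twice, one possible shortcut (assuming sfdisk is available in the install environment, as it is used later in this guide) is to copy the partition table from the first disk to the others. Be aware that this overwrites whatever partition table the target disks currently have:

# sfdisk --dump /dev/sda | sfdisk /dev/sdb
# sfdisk --dump /dev/sda | sfdisk /dev/sdc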

Load the RAID Modules

Before using mdadm, you need to load the modules for the RAID levels you'll be using. In this example, we're using levels 1 and 5, so we'll load those. You can ignore any modprobe errors like "cannot insert md-mod.ko: File exists". Busybox's modprobe can be a little slow sometimes.

# modprobe raid1
# modprobe raid5

Create the RAID Redundant Partitions

Now that you've created all the physical partitions, you're ready to set up RAID. The tool you use to create RAID arrays is mdadm.

To create /dev/md0 (/):

# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3

To create /dev/md1 (/boot):

# mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

To create /dev/md2 (swap):

# mdadm --create /dev/md2 --level=1 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2

At this point, you should have working RAID partitions. When you create the RAID partitions, they need to sync themselves so that the contents of all three physical partitions are the same on all three drives. The hard drive lights will come on as the drives sync up. You can monitor the progress by typing:

# cat /proc/mdstat
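For the RAID5 array, the output will look roughly like the following; the device names, block counts and speeds shown here are made up for illustration and will differ on your system:

Personalities : [raid1] [raid5]
md0 : active raid5 sdc3[2] sdb3[1] sda3[0]
      156296192 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      [==>..................]  resync = 12.1% (9481728/78148096) finish=31.2min speed=36700K/sec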

You can also get particular information about, say, the root partition by typing:

# mdadm --misc --detail /dev/md0

You don't have to wait for synchronization to finish -- you may proceed with the installation while synchronization is still occurring. You can even reboot at the end of the installation while synchronization is still going.

Setup LVM and Create the / LVM Volume

This is where you create the LVM volumes. LVM works with abstract layers; check out the LVM wiki page and/or its documentation to learn more. In short, you will be doing the following:

  • Turn block devices (e.g. /dev/sda1 or /dev/md0) into Physical Volume(s) that can be used by LVM
  • Create a Volume Group consisting of Physical Volume(s)
  • Create Logical Volume(s) within the Volume Group

Note: If you are using an Arch Linux install CD <= 0.7.1, you have to create and mount a sysfs partition on /sys, to keep lvm from getting cranky. Otherwise you can skip this mounting of sysfs, unless you run into trouble. If you forget to do this, instead of giving you an intelligent error message, lvm will simply Segmentation fault at various inconvenient times.

To mount the sysfs partition, do:

# mkdir /sys
# mount -t sysfs none /sys

Let's get started:

Make sure that the device-mapper module is loaded:

# modprobe dm-mod

Now you need to tell LVM that you have a Physical Volume for it to use. It's really a virtual RAID volume (/dev/md0), but LVM doesn't know this, or really care. Do:

# pvcreate /dev/md0

This might fail if the device has previously been used as a Physical Volume or belongs to an existing Volume Group. If so, you might want to add the -ff option.

LVM should report back that it has added the Physical Volume. You can confirm this with:

# pvdisplay

Now it's time to create a Volume Group (which I'll call array) which has control over the LVM Physical Volume we created. Do:

# vgcreate array /dev/md0

LVM should report that it has created the Volume Group array. You can confirm this with:

# vgdisplay

Next, we create a Logical Volume called root in Volume Group array which is 50GB in size:

# lvcreate --size 50G --name root array

LVM should report that it created the Logical Volume root. You can confirm this with:

# lvdisplay

The LVM volume should now be available as /dev/mapper/array-root (or something similar; the display commands above will tell you the exact name).
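As an aside (this is not a step in the install), the point of putting root on LVM is that you can grow it later. For example, assuming there is still free space left in the array Volume Group, you could enlarge root and then grow the ReiserFS filesystem to fill it:

# lvextend -L +10G /dev/array/root
# resize_reiserfs /dev/array/root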

Activate existing RAID devices and LVM volumes

If you already have RAID partitions created on your system and you've also set up LVM, and all you want to do is enable them, follow this simple procedure. This might come in handy if you're switching distros and don't want to lose the data in /home, for example.

First you need to enable RAID support; in this case, RAID 1 and RAID 5.

modprobe raid1
modprobe raid5

Activate the RAID devices: md1 for /boot and md0 for LVM, where the logical volumes reside.

mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3
mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1 /dev/sdc1

RAID devices should now be enabled. Check /proc/mdstat.

If you haven't loaded kernel LVM support do so now.

modprobe dm-mod

Startup of LVM requires just the following two commands:

vgscan
vgchange -ay
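If everything worked, your logical volumes should now show up under /dev/mapper (for the naming used in this guide, something like /dev/mapper/array-root). You can confirm with:

lvdisplay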

You can now jump to [3] Set Filesystem Mountpoints in the menu-based setup and mount the created partitions as needed.

Create and Mount the Filesystems

If you are using a setup newer than 2008.03, this step is optional!

I like Reiser (3.x), so I use it for almost everything. GRUB supports it for booting, and it handles small files well. It's about as well tested as EXT3. You can choose other types if you wish.

To create /boot:

# mkreiserfs /dev/md1

To create swap space:

# mkswap /dev/md2

To create /:

# mkreiserfs /dev/array/root

Now, mount the boot and root partitions where the installer expects them:

# mount /dev/array/root /mnt
# mkdir /mnt/boot
# mount /dev/md1 /mnt/boot

We've created all our filesystems! And we're ready to install the OS!

Install and Configure Arch

This section doesn't attempt to teach you all about the Arch installer. It leaves out some details here and there for brevity, but should still be straightforward to follow. If you're having trouble with the installer, you may wish to seek help elsewhere in the wiki or forums.

Note: Because the installer builds the initrd using /etc/mdadm.conf, you should update that file with your RAID configuration. The original file can simply be deleted, since it only contains comments on how to fill it in correctly, and that is something mdadm can do automatically for you. So delete the original and have mdadm create a new one with the correct setup:

rm /etc/mdadm.conf
mdadm -D --scan >> /etc/mdadm.conf
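The resulting file should contain one ARRAY line per RAID device, roughly along these lines (the exact format depends on your mdadm version, and the UUIDs here are placeholders that mdadm will fill in for you):

ARRAY /dev/md0 level=raid5 num-devices=3 UUID=...
ARRAY /dev/md1 level=raid1 num-devices=3 UUID=...
ARRAY /dev/md2 level=raid1 num-devices=3 UUID=...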

Now you can continue using the installer to set-up the system and install the packages you need. Here's the walkthrough:

  • Type /arch/setup to launch the main installer.
  • Select < OK > at the opening screen.
  • Select 1 CD_ROM to install from CD-ROM (or 2 FTP if you have a local Arch mirror on FTP).
  • If you skipped the optional step (Create and Mount the Filesystems) above and haven't created filesystems yet, select 1 Prepare Hard Drive > 3 Set Filesystem Mountpoints and create your filesystems and mountpoints here.
  • Now, at the main menu, select 2 Select Packages and select all the packages in the base category, as well as the mdadm and lvm2 packages from the system category. Note: mdadm and lvm2 have been included in the base category since arch-base-0.7.2.
  • Select 3 Install Packages. This will take a little while.
  • Select 4 Configure System:

Add the mdadm and lvm2 hooks to the HOOKS list in /etc/mkinitcpio.conf (before 'filesystems', NOT after). See Configuring mkinitcpio using RAID for more details.
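As a rough illustration, a HOOKS line edited this way might look like the following; the hardware hooks before mdadm will vary with your system and installer version, so treat this only as an example of the ordering:

HOOKS="base udev autodetect pata scsi sata mdadm lvm2 filesystems"

If you edit /etc/mkinitcpio.conf again after installation, rebuild the image so the change takes effect (on a 2008/2009-era system, with mkinitcpio -p kernel26).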

Edit your /etc/rc.conf. It should contain a USELVM entry already, which you should change to:

USELVM="yes"

Please note: the rc.sysinit script that parses the USELVM entry will accept either yes or YES, but it will not accept mixed case. Please be sure you've got your capitalization correct.

Edit your /etc/fstab to contain the entries:

/dev/array/root         /       reiserfs        defaults        0       1
/dev/md2                swap    swap            defaults        0       0
/dev/md1                /boot   reiserfs        defaults        0       0

At this point, make any other configuration changes you need to other files.

Then exit the configuration menu.

Since you will not be installing Grub from the installer, select 7 Exit Install to leave the installer program.


Old style:

Then specify the RAID arrays you're booting from in /mnt/boot/grub/menu.lst, like:

 # Example with /dev/array/root for / & /dev/md1 for /boot:
   kernel /kernel26 root=/dev/array/root ro md=1,/dev/sda1,/dev/sdb1,/dev/sdc1 md=0,/dev/sda3,/dev/sdb3,/dev/sdc3


Nowadays (2009.02), with the mdadm hook in the initrd, it is no longer necessary to add kernel parameters concerning the RAID array(s).

The arrays can be assembled on boot by that hook, using the contents of /etc/mdadm.conf, which is included in the initrd image when it is built. (See Configuring mkinitcpio using RAID.)

An example of a GRUB boot configuration for booting from a RAIDed root looks like this (note that in this walkthrough the root filesystem sits on LVM on top of the RAID, so root= would be /dev/array/root rather than /dev/md0):

# (0) Arch Linux
title  Arch Linux
root   (hd0,0)
kernel /vmlinuz26 root=/dev/md0 ro
initrd /kernel26.img

Install Grub on the Primary Hard Drive (and save the RAID config)

This is the last and final step before you have a bootable system!

As an overview, the basic concept is to copy over the grub bootloader files into /boot/grub, mount a procfs and a device tree inside of /mnt, then chroot to /mnt so you're effectively inside your new system. Once in your new system, you will run grub to install the bootloader in the boot area of your first hard drive. Then we save our new RAID configuration in /etc/mdadm.conf so it can be re-assembled automatically after we reboot.

Copy the GRUB files into place and get into our chroot:

# cp -a /mnt/usr/lib/grub/i386-pc/* /mnt/boot/grub
# sync
# mount -o bind /dev /mnt/dev
# mount -t proc none /mnt/proc
# chroot /mnt /bin/bash

At this point, you may no longer be able to see the keys you type at your console. I'm not sure of the reason for this (NOTE: try "chroot /mnt /bin/<shell>"), but you can fix it by typing reset at the prompt.

Once you've got console echo back on, type:

# grub

After a short wait while grub does some looking around, it should come back with a grub prompt. Do:

grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Now you need to save your RAID configuration so it can be re-assembled automatically each time you boot. Previously, this was an unnecessary step in Arch because the RAID drivers were built into the kernel. But when they are loaded as modules after the kernel boots, arrays are not autodetected; hence this configuration file.

The default /etc/mdadm.conf on your chrooted system should be pretty much empty (except for a lot of explanatory comments). All you need to do is capture the output from an mdadm query command and append it to the end of mdadm.conf.

# mdadm -D --scan >>/etc/mdadm.conf

That's it. You can exit your chroot now by hitting CTRL-D or typing exit.

Reboot

The hard part is all over! Now remove the CD from your CD-ROM drive, and type:

# reboot

Install Grub on the Alternate Boot Drives

Once you've successfully booted your new system for the first time, you will want to install Grub onto the other two disks (or on the other disk if you have only 2 HDDs) so that, in the event of disk failure, the system can be booted from another drive. Log in to your new system as root and do:

# grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> device (hd0) /dev/sdc
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

Archive your Filesystem Partition Scheme

Now that you're done, it's worth taking a second to archive the partition state of each of your drives. This makes it much easier to replace or rebuild a disk in the event that one fails. You do this with the sfdisk tool and the following steps:

# mkdir /etc/partitions
# sfdisk --dump /dev/sda >/etc/partitions/disc0.partitions
# sfdisk --dump /dev/sdb >/etc/partitions/disc1.partitions
# sfdisk --dump /dev/sdc >/etc/partitions/disc2.partitions
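If you ever need to rebuild a replaced disk, you can feed one of these dumps back to sfdisk. As a hypothetical example, assuming the replacement disk shows up as /dev/sdb:

# sfdisk /dev/sdb < /etc/partitions/disc1.partitions

After that, you would add the new partitions back into their arrays with mdadm --add.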

Management

For LVM management, please have a look at the LVM page.

Mounting from a Live CD

If you want to mount your RAID partition from a Live CD, use

mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3

(or whatever mdX and drives apply to you)
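If your root filesystem is on LVM as in this guide, you'll also need to assemble the other arrays and bring LVM up before you can mount anything. Following this guide's layout (adjust the device and volume names to match your own system):

mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1 /dev/sdc1
modprobe dm-mod
vgscan
vgchange -ay
mount /dev/array/root /mnt
mount /dev/md1 /mnt/boot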

Conclusion

You're done! I hope you've succeeded in setting up Arch Linux on your server with RAID and LVM!

Troubleshooting

If you are getting an error on reboot about "invalid raid superblock magic" and you have additional hard drives other than the ones you installed to, check that your hard drive order is correct. During installation, your RAID devices may be sdd, sde and sdf, but during boot they may be sda, sdb and sdc. Adjust your kernel line in /boot/grub/menu.lst accordingly. This is what happened to me, anyway.

Credits

This document was written by Paul Mattal with significant help from others. Comments and suggestions are welcome at paul at archlinux dot org.

Thanks to all who have contributed information and suggestions! This includes:

  • Carl Chave
  • Guillaume Darbonne

Additional Resources