[[Category:Getting and installing Arch (English)]]
 
[[Category:File systems (English)]]
 
 
{{i18n|Installing with Software RAID or LVM}}
 
{{Out of date}}
 
 
{{Article summary start}}
 
{{Article summary text|This article will provide an example of how to install and configure Arch Linux with a software RAID or Logical Volume Manager (LVM).}}
 
{{Article summary heading|Required software}}
 
{{Article summary link|Software|}}
 
{{Article summary heading|Related}}
 
{{Article summary wiki|Convert a single drive system to RAID}}
 
{{Article summary wiki|Installing with Fake RAID}}
 
{{Article summary end}}
 
 
This article applies to [[Arch Linux]] 2008.06, [http://www.archlinux.org/news/archlinux-200806-overlord/ Overlord]. It may not be applicable to previous or later releases of Arch Linux.
 
 
== Background ==
 
Although RAID and LVM may seem like analogous technologies, each offers unique features.
 
 
=== RAID ===
 
{{Wikipedia|RAID}}
 
Redundant Array of Independent Disks (RAID) is designed to prevent data loss in the event of a hard disk failure. There are different [[Wikipedia:Standard RAID levels|levels of RAID]]. [[Wikipedia:Standard RAID levels#RAID 0|RAID 0]] (striping) is not really RAID at all, because it provides no redundancy; it does, however, provide a speed benefit. RAID 0 is sometimes used for swap on desktop systems, where the speed increase may be worth the risk of a crash if one of the drives fails, but this example uses RAID 1 for swap as well (see below). On a server, a RAID 1 or RAID 5 array is more appropriate. The size of a RAID 0 array block device is the size of the smallest component partition times the number of component partitions.
 
 
[[Wikipedia:Standard RAID levels#RAID 1|RAID 1]] is the most straightforward RAID level: straight mirroring. As with other RAID levels, it only makes sense if the partitions are on different physical disk drives. If one of those drives fails, the block device provided by the RAID array will continue to function as normal. This example uses RAID 1 for the boot and swap partitions. Note that RAID 1 is the only option for the boot partition, because bootloaders (which read the boot partition) do not understand RAID, but a RAID 1 component partition can be read as a normal partition. The size of a RAID 1 array block device is the size of the smallest component partition.
 
 
[[Wikipedia:Standard RAID levels#RAID 5|RAID 5]] requires 3 or more physical drives, and provides the redundancy of RAID 1 combined with the speed and size benefits of RAID 0. RAID 5 uses striping, like RAID 0, but also stores parity blocks distributed across each member disk. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 5 can withstand the loss of one member disk.
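As a rough worked example with the three ~78 GB data partitions used later in this article (the exact figures depend on your partition sizes):

<pre>
RAID 0:  3 x 78 GB  ->  ~234 GB usable, no redundancy
RAID 1:  3 x 78 GB  ->   ~78 GB usable, survives the loss of two partitions
RAID 5:  3 x 78 GB  ->  ~156 GB usable, survives the loss of one partition
</pre>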
 
 
==== Redundancy ====
 
{{Warning|Installing a system with RAID is a complex process that may destroy data. Be sure to backup all data before proceeding.}}
 
 
RAID does not guarantee that your data is safe. If there is a fire, if your computer is stolen, or if multiple hard drives fail at once, RAID will not protect your data. Therefore it is important to make backups. Whether you use tape drives, DVDs, CD-ROMs or another computer, keep a current copy of your data out of your computer (and preferably offsite). Get into the habit of making regular backups. You can also divide the data on your computer into current and archived directories, then back up the current data frequently and the archived data occasionally.
 
 
=== LVM ===
 
[[LVM]] (Logical Volume Management) makes use of the [http://sources.redhat.com/dm/ device-mapper] feature of the Linux kernel to provide a system of partitions that is independent of the underlying disks' layout. This means you can extend and shrink partitions (subject to your filesystem supporting this) and add or remove partitions without worrying about whether you have enough contiguous space on a particular disk, without getting caught up in the problems of repartitioning a disk that is in use (and wondering whether the kernel is using the old or the new partition table), and without having to move other partitions out of the way.
 
 
This is strictly an ease-of-management issue: it does not provide any additional redundancy or security. However, it sits nicely alongside the RAID setup described above.
 
 
Note that LVM is not used for the boot partition, because of the bootloader problem.
 
 
== Introduction ==
 
This article provides an example of how to install Arch Linux with software RAID and LVM support. Not every configuration option or setting is covered; instead, this article should provide a basic framework for your installation.
 
 
This example uses a computer with three similar IDE hard drives that are at least 80GB in size, installed as primary master, primary slave, and secondary master. A CD-ROM drive is installed as the secondary slave. The article assumes that the drives are accessible as {{filename|/dev/sda}}, {{filename|/dev/sdb}}, and {{filename|/dev/sdc}}, and that the CD-ROM drive is {{filename|/dev/cdrom}}.
 
 
{{note|It is also good practice to ensure that only the drives involved in the installation are attached while performing the installation.}}
 
 
We will create a 100MB /boot partition, a 2048MB (2GB) swap partition and a ~ 78GB root partition using LVM. The boot and swap partitions will be RAID1, while the root partition will be RAID5. Why RAID1? For boot, it is so you can boot the kernel from grub (which has no RAID drivers!), and for swap, it is for redundancy, so that your machine will not lose its swap state even if 1 or 2 drives fail.
 
 
Each RAID1 redundant partition will have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the size of a single one of these physical partitions. A RAID1 redundant partition with 3 physical partitions can lose any two of its physical partitions and still function.
 
 
Each RAID5 redundant partition will also have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the combined size of ''two'' of these physical partitions, with the third drive being consumed to provide parity information. A RAID5 redundant partition with 3 physical partitions can lose any one of its physical partitions and still function.
 
 
{{note|In order to use LVM, you need the {{Codeline|lvm2}} and {{Codeline|device-mapper}} packages installed, otherwise you will be unable to see any LVM partitions after rebooting.}}
 
 
== Outline ==
 
 
Just to give you an idea of how all this will work, I will outline the steps. The details for these will be filled in below.
 
 
# Boot the Installer CD
 
# Partition the Hard Drives
 
# Create the RAID Redundant Partitions
 
# Create and Mount the Main Filesystems
 
# Setup LVM and Create the / (root) LVM Volume
 
# Install and Configure Arch
 
# Install Grub on the Primary Hard Drive
 
# Unmount Filesystems and Reboot
 
# Install Grub on the Alternate Boot Drives
 
# Archive your Filesystem Partition Scheme
 
 
== Procedure==
 
[[Beginners' Guide#Obtain the latest installation media|Obtain the latest installation media]] and [[Beginners' Guide#Boot Arch Linux Installer|boot the Arch Linux installer]] as outlined in the [[Beginners' Guide]], or alternatively, in the [[Official Arch Linux Install Guide#Pre-Installation|Official Arch Linux Install Guide]].
 
 
=== Partition the Hard Drives===
 
{{note|If your hard drives are already prepared and all you want to do is activate RAID and LVM jump to [[Installing_with_Software_RAID_or_LVM#Activate_existing_RAID_devices_and_LVM_volumes|Activate existing RAID devices and LVM volumes]].}}
 
 
We will use <code>cfdisk</code> to do this partitioning. We want to create 3 partitions on each of the three hard drives (i.e. {{filename|/dev/sda}}, {{filename|/dev/sdb}}, {{filename|/dev/sdc}}):
 
 
<pre>
    Name        Flags       Part Type    FS Type          [Label]       Size (MB)
 ---------------------------------------------------------------------------------
    sda1        Boot         Primary     linux_raid_m     [boot]          100.00
    sda2                     Primary     linux_raid_m     [swap]         2048.00
    sda3                     Primary     linux_raid_m     [raid]        77852.00
</pre>
 
 
{{note|In {{Codeline|cfdisk}} you can use the first letter of each {{Codeline|[ Bracketed Option ]}} to select it, with the exception of the {{Codeline|[ Write ]}} command, which requires you also hold {{Codeline|SHIFT}} to select it.}}
 
 
Open {{Codeline|cfdisk}} with the first hard drive:
 
# cfdisk /dev/sda
 
 
and create the three partitions in order:
 
 
# Select {{Codeline|[ New ]}}.
 
# Hit {{Codeline|ENTER}} to make it a {{Codeline|Primary}} partition.
 
# For {{filename|sda1}} and {{filename|sda2}} type the appropriate size in MB (see above). For {{filename|sda3}} just hit {{Codeline|ENTER}} to select the remainder of the drive.
 
# Hit {{Codeline|ENTER}} to place the partition at the {{Codeline|Beginning}}.
 
# Select {{Codeline|[ Type ]}} and hit {{Codeline|ENTER}} to see the second page of the list, and then type <code>FD</code> for the Linux RAID Autodetect type.
 
# For {{filename|sda1}} select {{Codeline|[ Bootable ]}}.
 
# Hit the down arrow (selecting the remaining free space) to go on to the next partition to be created.
 
 
When you are done, select {{Codeline|[ Write ]}}, and confirm by typing {{Codeline|yes}} to write the partition table to the disk. When finished select {{Codeline|[ Quit ]}} and repeat this process for {{filename|/dev/sdb}} and {{filename|/dev/sdc}}.
 
 
Make sure to create exactly the same partitions on each disk. If partitions of different sizes are assembled into a RAID array it will still work, but ''the array's capacity will be determined by the smallest component partition'', leaving the extra space on the larger partitions to waste.
 
 
You can also use {{Codeline|sfdisk}} to clone the partition table from {{filename|/dev/sda}} to the other two hard drives. Dump the partition table from {{filename|/dev/sda}} into a file:
 
# sfdisk -d /dev/sda > table
 
 
and then write the partition table to the other two hard drives.
 
# sfdisk /dev/sdb < table
 
# sfdisk /dev/sdc < table
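To double-check that all three drives now carry identical layouts, you can list the partition tables again (a quick sanity check, not strictly required):

# sfdisk -l /dev/sda

# sfdisk -l /dev/sdb

# sfdisk -l /dev/sdc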
 
 
=== Load the RAID Modules ===
 
 
Before using <code>mdadm</code>, you need to load the modules for the RAID levels you will be using.  In this example, we are using levels 1 and 5, so we will load both.  You can ignore any modprobe errors like <code>"cannot insert md-mod.ko: File exists"</code>.  Busybox's modprobe can be a little slow sometimes.
 
# modprobe raid1
 
# modprobe raid5
 
 
=== Create the RAID Redundant Partitions ===
 
 
Now that you have created all the physical partitions, you are ready to set up the three RAIDs. The tool you use to create RAID arrays is {{Codeline|mdadm}}.
 
 
Create the {{Codeline|/}} array at {{filename|/dev/md0}}:
 
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3
 
 
Create the {{Codeline|/boot}} array at {{filename|/dev/md1}}:
 
# mdadm --create /dev/md1 --level=1 --raid-devices=3 --metadata=0.90 /dev/sda1 /dev/sdb1 /dev/sdc1
 
 
If you want to use GRUB 0.97 (the default in the Arch Linux 2010.05 release) on RAID 1, you need to specify an older metadata version than the default; this is why the {{Codeline|--metadata=0.90}} option is included in the command above. Otherwise GRUB will respond with "Filesystem type unknown, partition type 0xfd" and refuse to install. This may also be necessary with [[GRUB2#Raid|GRUB2]].
 
 
Create the {{Codeline|swap}} array at {{filename|/dev/md2}}:
 
# mdadm --create /dev/md2 --level=1 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
 
 
At this point, you should have working RAID partitions. When you create the RAID partitions, they need to sync themselves so the contents of all three physical partitions are the same on all three drives. The hard drive lights will come on as they sync up. You can monitor the progress by typing:
 
# cat /proc/mdstat
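While the arrays are resyncing, the output looks roughly like the following (illustrative only; the exact block counts, percentages and speeds will differ on your system):

<pre>
Personalities : [raid1] [raid6] [raid5] [raid4]
md2 : active raid1 sdc2[2] sdb2[1] sda2[0]
      2096064 blocks [3/3] [UUU]

md1 : active raid1 sdc1[2] sdb1[1] sda1[0]
      102336 blocks [3/3] [UUU]

md0 : active raid5 sdc3[2] sdb3[1] sda3[0]
      152055296 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      [=>...................]  resync =  8.7% (6650240/76027648) finish=92.1min speed=12544K/sec

unused devices: <none>
</pre>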
 
 
You can also get particular information about, say, the root partition by typing:
 
# mdadm --misc --detail /dev/md0
 
 
You do not have to wait for synchronization to finish -- you may proceed with the installation while synchronization is still occurring. You can even reboot at the end of the installation with synchronization still going.
 
 
=== Setup LVM and Create the / (root) LVM Volume===
 
This is where you create the LVM volumes. LVM works with abstraction layers; check out [[LVM]] and/or its documentation to learn more. In short, you will:
 
* Turn block devices (e.g. /dev/sda1 or /dev/md0) into Physical Volume(s) that can be used by LVM
 
* Create a Volume Group consisting of Physical Volume(s)
 
* Create Logical Volume(s) within the Volume Group
 
 
'''Note:'''
 
If you are using an Arch Linux install CD <= 0.7.1, you have to create and mount a sysfs partition on /sys to keep lvm from getting cranky. Otherwise you can skip this mounting of sysfs, unless you run into trouble. If you forget to do this, instead of giving you an intelligent error message, lvm will simply segfault at various inconvenient times.
 
 
To mount the sysfs partition, do:
 
# mkdir /sys
 
# mount -t sysfs none /sys
 
 
'''Let us get started:'''
 
 
Make sure that the device-mapper module is loaded:
 
# modprobe dm-mod
 
 
Now you need to tell LVM that you have a Physical Volume for it to use. It is really a virtual RAID volume (<code>/dev/md0</code>), but LVM does not know this, nor does it care. Do:
 
# pvcreate /dev/md0
 
 
This might fail if the device already carries RAID or Volume Group metadata from a previous setup. If so, you may need to add the <code>-ff</code> option.
 
 
LVM should report back that it has added the Physical Volume. You can confirm this with:
 
# pvdisplay
 
 
Now it is time to create a Volume Group (which I will call <code>array</code>) which has control over the LVM Physical Volume we created. Do:
 
# vgcreate array /dev/md0
 
 
LVM should report that it has created the Volume Group <code>array</code>. You can confirm this with:
 
# vgdisplay
 
 
Next, we create a Logical Volume called <code>root</code> in Volume Group <code>array</code> that fills all the free space left on the volume group:
 
# lvcreate -l +100%FREE array -n root
 
 
LVM should report that it created the Logical Volume <code>root</code>. You can confirm this with:
 
# lvdisplay
 
 
The LVM volume should now be available as <code>/dev/mapper/array-root</code> (or something similar; the display command above will show the exact name).
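If you want to double-check what was created, the following commands list the device-mapper nodes and the logical volumes (purely a verification step):

# ls -l /dev/mapper/

# lvs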
 
 
=== Activate existing RAID devices and LVM volumes ===
 
 
If you already have RAID partitions created on your system and have also set up LVM, and all you want to do is enable them, follow this simple procedure. ''This might come in handy if you are switching distributions and do not want to lose data in /home, for example.''
 
 
First you need to load RAID support for the levels in use, RAID 1 and RAID 5 in this case.
 
# modprobe raid1
 
# modprobe raid5
 
 
Activate the RAID devices: md1 for /boot and md0 for the LVM Physical Volume on which the logical volumes reside.
 
# mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3
 
# mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1 /dev/sdc1
 
 
RAID devices should now be enabled. Check /proc/mdstat.
 
 
If you have not yet loaded kernel LVM support, do so now.
 
# modprobe dm-mod
 
 
Startup of LVM requires just the following two commands:
 
# vgscan
 
# vgchange -ay
 
 
You can now jump to '''[3] Set Filesystem Mountpoints''' in the menu-based setup and mount the created partitions as needed.
 
 
=== Create and Mount the Filesystems ===
 
'''When using an installer newer than 2008.03, this step is optional!'''
 
 
Example using ReiserFS (V3):
 
 
To create /boot:
 
# mkreiserfs /dev/md1
 
 
To create swap space:
 
# mkswap /dev/md2
 
 
To create /:
 
# mkreiserfs /dev/array/root
 
 
Now, mount the boot and root partitions where the installer expects them:
 
# mount /dev/array/root /mnt
 
# mkdir /mnt/boot
 
# mount /dev/md1 /mnt/boot
 
 
We have created all our filesystems! And we are ready to install the OS!
 
 
=== Install and Configure Arch ===
 
 
This section does not attempt to teach you all about the Arch installer. It leaves out some details here and there for brevity, but still aims to be basically followable. If you are having trouble with the installer, you may wish to seek help elsewhere in the wiki or forums.
 
 
Now you can continue using the installer to set up the system and install the packages you need.
 
Here is the walkthrough:
 
 
* Type <code>/arch/setup</code> to launch the main installer.
 
* Select <code> <  OK  ></code> at the opening screen.
 
* Select <code>1 CD_ROM</code> to install from CD-ROM (or <code>2 FTP</code> if you have a local Arch mirror on FTP).
 
* If you skipped the optional step (''Create and Mount the Filesystems'') above and have not created filesystems yet, select <code>1 Prepare Hard Drive</code> > <code>3 Set Filesystem Mountpoints</code> and create your filesystems and mountpoints here.
 
* Now at the main menu, select <code>2 Select Packages</code> and select all the packages in the ''base'' category, as well as the <code>mdadm</code> and <code>lvm2</code> packages from the ''system'' category. Note: mdadm and lvm2 have been included in the ''base'' category since arch-base-0.7.2.
 
* Select <code>3 Install Packages</code>. This will take a little while.
 
* '''Note:''' Because the installer builds the initrd using /etc/mdadm.conf in the target system, you should update that file with your RAID configuration. The original file can simply be deleted, because it only contains comments on how to fill it in correctly, and that is something mdadm can do automatically for you. So delete the original and have mdadm create a new one with the correct setup:<br>Press '''Alt-F2''' to get a new terminal, log in, then do
 
# mdadm --examine --scan > /mnt/etc/mdadm.conf
 
* Select <code>4 Configure System</code>:
 
 
Add the ''dm_mod'' module to the MODULES list in /etc/mkinitcpio.conf.
 
 
Add the ''mdadm'' and ''lvm2'' hooks to the HOOKS list in /etc/mkinitcpio.conf after ''udev''.
 
See [[Configuring_mkinitcpio#Using_raid|Configuring mkinitcpio using RAID]] for more details.
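The relevant lines in /etc/mkinitcpio.conf should then look roughly like this. Only {{Codeline|dm_mod}}, {{Codeline|mdadm}} and {{Codeline|lvm2}} are the additions called for here; the rest of the HOOKS line is the stock default of that era and may differ on your system:

<pre>
MODULES="dm_mod"
HOOKS="base udev mdadm lvm2 autodetect pata scsi sata filesystems"
</pre>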
 
 
Edit your <code>/etc/rc.conf</code>. It should contain a <code>USELVM</code> entry already, which you should change to:
 
USELVM="yes"
 
 
''Please note'': the <code>rc.sysinit</code> script that parses the <code>USELVM</code> variable will accept either <code>yes</code> or <code>YES</code>, but not mixed case. Make sure your capitalization is correct.
 
 
Edit your <code>/etc/fstab</code> to contain the entries:
 
<pre>
 
/dev/array/root    /        reiserfs    defaults    0    1
/dev/md2           swap     swap        defaults    0    0
/dev/md1           /boot    reiserfs    defaults    0    0
 
</pre>
 
 
At this point, make any other configuration changes you need to other files.
 
 
Then exit the configuration menu.
 
 
Since you will not be installing Grub from the installer, select <code>7 Exit Install</code> to leave the installer program.
 
 
 
'''Old style:'''
 
 
Then specify the RAID arrays you are booting from in /mnt/boot/grub/menu.lst, like:
 
  # Example with /dev/array/root for ''/'' & /dev/md1 for ''/boot'':
 
    kernel /vmlinuz-linux root=/dev/array/root ro  md=1,/dev/sda1,/dev/sdb1,/dev/sdc1 md=0,/dev/sda3,/dev/sdb3,/dev/sdc3
 
 
 
'''Nowadays (2009.02 and later), with the mdadm hook in the initrd it is no longer necessary to add kernel parameters describing the RAID array(s).'''
 
 
The arrays can be assembled on boot by the kernel using that hook and the contents of /etc/mdadm.conf, which is included in the initrd image when it is built. (See [[Configuring_mkinitcpio#Using_raid|Configuring mkinitcpio using RAID]].)
 
 
An example GRUB boot configuration for booting with root on an LVM volume looks like this:
 
<pre>
 
# (0) Arch Linux
 
title  Arch Linux
 
root  (hd0,0)
 
kernel /vmlinuz-linux root=/dev/array/root ro
 
initrd /initramfs-linux.img
 
</pre>
 
 
=== Install Grub on the Primary Hard Drive ===
 
 
==== grub 0.97 ====
 
 
<b>This can also be done from within the installer now (2009.08; it should also work for 2009.02).</b>
 
 
This is the final step before you have a bootable system!
 
 
As an overview, the basic concept is to copy over the grub bootloader files into /boot/grub, mount a procfs and a device tree inside of /mnt, then chroot to /mnt so you are effectively inside your new system.  Once in your new system, you will run grub to install the bootloader in the boot area of your first hard drive.
 
 
Copy the GRUB files into place and get into our chroot:
 
# cp -a /mnt/usr/lib/grub/i386-pc/* /mnt/boot/grub
 
# sync
 
# mount -o bind /dev /mnt/dev
 
# mount -t proc none /mnt/proc
 
# mount -t sysfs none /mnt/sys
 
# chroot /mnt /bin/bash
 
 
At this point, you may no longer be able to see keys you type at your console. I am not sure of the reason for this (NOTE: try <code>chroot /mnt /bin/''shell''</code>), but you can fix it by typing <code>reset</code> at the prompt.
 
 
Once you have got console echo back on, type:
 
# grub
 
 
After a short wait while grub does some looking around, it should come back with a grub prompt. Do:
 
grub> root (hd0,0)
 
grub> setup (hd0)
 
grub> quit
 
 
That is it.  You can exit your chroot now by hitting <code>CTRL-D</code> or typing <code>exit</code>.
 
 
==== grub 1.98 ====
 
 
You can also install [[grub2]] when you are in the ''chroot'' environment.
 
# mount -o bind /dev /mnt/dev
 
# mount -t proc none /mnt/proc
 
# mount -t sysfs none /mnt/sys
 
# chroot /mnt /bin/bash
 
 
Install and configure grub2
 
root@pc-chroot:~# pacman -S grub2
 
root@pc-chroot:~# grub-mkconfig -o /boot/grub/grub.cfg
 
root@pc-chroot:~# grub-install --no-floppy --modules="raid" /dev/sda
 
root@pc-chroot:~# grub-install --no-floppy --modules="raid" /dev/sdb
 
 
=== Reboot ===
 
 
The hard part is all over! Now remove the CD from your CD-ROM drive, and type:
 
# reboot
 
 
=== Install Grub on the Alternate Boot Drives===
 
 
Once you have successfully booted your new system for the first time, you will want to install Grub onto the other two disks (or on the other disk if you have only 2 HDDs) so that, in the event of disk failure, the system can be booted from another drive. Log in to your new system as root and do:
 
# grub
 
grub> device (hd0) /dev/sdb
 
grub> root (hd0,0)
 
grub> setup (hd0)
 
grub> device (hd0) /dev/sdc
 
grub> root (hd0,0)
 
grub> setup (hd0)
 
grub> quit
 
 
=== Archive your Filesystem Partition Scheme ===
 
 
Now that you are done, it is worth taking a second to archive off the partition state of each of your drives. This guarantees that it will be trivially easy to replace/rebuild a disk in the event that one fails. You do this with the <code>sfdisk</code> tool and the following steps:
 
# mkdir /etc/partitions
 
# sfdisk --dump /dev/sda >/etc/partitions/disc0.partitions
 
# sfdisk --dump /dev/sdb >/etc/partitions/disc1.partitions
 
# sfdisk --dump /dev/sdc >/etc/partitions/disc2.partitions
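Should one of the disks later need to be replaced, the archived dump can be written straight back onto the new disk with <code>sfdisk</code> (shown here for the first disk; adjust the device and dump file to match the disk you are rebuilding):

# sfdisk /dev/sda < /etc/partitions/disc0.partitions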
 
 
== Management ==
 
 
For LVM management, please have a look at [[LVM]]
 
 
== Mounting from a Live CD ==
 
 
If you want to mount your RAID partition from a Live CD, use
 
# mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3
 
 
(or whatever mdX and drives apply to you)
 
 
{{Note | Live CDs like [http://www.sysresccd.org/Main_Page SystemRescueCd] assemble the RAID arrays automatically at boot time if you used partition type fd when creating the arrays.}}
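Because the root filesystem in this example lives on LVM, assembling the array alone is not enough to mount it from a live CD; you also need to activate the volume group, just as in [[#Activate existing RAID devices and LVM volumes|Activate existing RAID devices and LVM volumes]] above:

# modprobe dm-mod

# vgscan

# vgchange -ay

# mount /dev/array/root /mnt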
 
 
== Removing a device, stopping the array ==
 
 
You can remove a device from the array after you mark it as faulty.
 
 
# mdadm --fail /dev/md0 /dev/sdxx
 
 
Then you can remove it from the array.
 
 
# mdadm -r /dev/md0 /dev/sdxx
 
 
To remove a device permanently (for example, if you want to use it individually from now on), issue the two commands described above, then:
 
 
# mdadm --zero-superblock /dev/sdxx
 
 
After this you can use the disk as you did before creating the array.
 
 
{{Warning | If you reuse the removed disk without zeroing its superblock, you will '''LOSE''' all your data on the next boot (mdadm will try to use it as part of the RAID array). '''DO NOT''' issue this command on linear or RAID0 arrays or you will '''LOSE''' all the data on the array. }}
 
 
Stop using an array:
 
# Unmount the target array
 
# Repeat the three commands described at the beginning of this section on each device.
 
# Stop the array with: <code>mdadm --stop /dev/md0</code>
 
# Remove the corresponding line from /etc/mdadm.conf
 
 
== Adding a device to the array ==
 
Adding new devices with mdadm can be done on a running system with the devices mounted.
 
Partition the new device "/dev/sdx" using the same layout as one of the disks already in the arrays, e.g. "/dev/sda":
 
# sfdisk -d /dev/sda > table
 
# sfdisk /dev/sdx < table
 
 
Assemble the RAID arrays if they are not already assembled:
 
# mdadm --assemble /dev/md1 /dev/sda1 /dev/sdb1 /dev/sdc1
 
# mdadm --assemble /dev/md2 /dev/sda2 /dev/sdb2 /dev/sdc2
 
# mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3
 
 
First, add the new device as a Spare Device to all of the arrays. We will assume you have followed the guide and use separate arrays for /boot RAID 1 (/dev/md1), swap RAID 1 (/dev/md2) and root RAID 5 (/dev/md0).
 
# mdadm --add /dev/md1 /dev/sdx1
 
# mdadm --add /dev/md2 /dev/sdx2
 
# mdadm --add /dev/md0 /dev/sdx3
 
 
This should not take long for mdadm to do. Check the progress with:
 
# cat /proc/mdstat
 
 
Check that the device has been added with the command:
 
# mdadm --misc --detail /dev/md0
 
 
It should be listed as a Spare Device.
 
 
Tell mdadm to grow the arrays from 3 devices to 4 (or however many devices you want to use):
 
# mdadm --grow -n 4 /dev/md1
 
# mdadm --grow -n 4 /dev/md2
 
# mdadm --grow -n 4 /dev/md0
 
 
This will probably take several hours. You need to wait for it to finish before you can continue. Check the progress in /proc/mdstat. The RAID 1 arrays should automatically sync /boot and swap, but you need to install Grub on the MBR of the new device manually; see [[Installing_with_Software_RAID_or_LVM#Install_Grub_on_the_Alternate_Boot_Drives|Install Grub on the Alternate Boot Drives]].
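Installing Grub on the new drive's MBR follows the same pattern as for the alternate boot drives earlier; for example, assuming the new disk is /dev/sdx:

# grub

grub> device (hd0) /dev/sdx

grub> root (hd0,0)

grub> setup (hd0)

grub> quit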
 
 
The rest of this guide will explain how to resize the underlying LVM and filesystem on the RAID 5 array.
 
{{Note|I am not sure if this can be done with the volumes mounted and will assume you are booting from a live-cd/usb}}
 
 
If you have encrypted your LVM volumes with LUKS, you need to resize the LUKS volume first.  Otherwise, ignore this step.
 
# cryptsetup luksOpen /dev/md0 cryptedlvm
 
# cryptsetup resize cryptedlvm
 
 
Activate the LVM volume groups:
 
# vgscan
 
# vgchange -ay
 
 
Resize the LVM Physical Volume /dev/md0 (or e.g. /dev/mapper/cryptedlvm if using LUKS) to take up all the available space on the array. You can list them with the command "pvdisplay".
 
# pvresize /dev/md0
 
 
Resize the Logical Volume you wish to allocate the new space to. You can list them with "lvdisplay". Assuming you want to put it all to your /home volume:
 
# lvresize -l +100%FREE /dev/array/home
 
 
To resize the filesystem to allocate the new space use the appropriate tool. If using ext2 you can resize a mounted filesystem with ext2online. For ext3 you can use resize2fs or ext2resize but not while mounted.
 
 
You should check the filesystem before resizing.
 
# e2fsck -f /dev/array/home
 
# resize2fs /dev/array/home
 
 
Read the manuals for lvresize and resize2fs if you want to customize the sizes for the volumes.
 
 
==Troubleshooting==
 
If you get errors on reboot about "invalid raid superblock magic" and you have additional hard drives other than the ones you installed to, check that your hard drive order is correct. During installation, your RAID devices may be hdd, hde and hdf, but during boot they may be hda, hdb and hdc. Adjust your kernel line in /boot/grub/menu.lst accordingly. This is what happened to me, anyway.
 
 
===Recovering from a broken or missing drive in the raid===
 
You might also get the above-mentioned error when one of the drives breaks for whatever reason. In that case you will have to force the RAID to come up even with one disk short. Type this (change where needed):
 
# mdadm --manage /dev/md0 --run
 
 
Now you should be able to mount it again with something like this (if you had it in fstab):
 
# mount /dev/md0
 
 
Now the RAID should be working again and available to use, however with one disk missing! To replace that disk, partition the new one the same way as described above in [[#Partition the Hard Drives|Partition the Hard Drives]]. Once that is done you can add the new disk to the RAID by doing:
 
# mdadm --manage --add /dev/md0 /dev/sdd1
 
 
If you type:
 
# cat /proc/mdstat
 
you will probably see that the RAID is now active and rebuilding.
 
 
You also might want to update your /etc/mdadm.conf file by typing:
 
# mdadm --examine --scan > /etc/mdadm.conf
 
 
That should be about all the steps required to recover your RAID. It certainly worked for me when I lost a drive due to partition table corruption.
 
 
== Benchmarking ==
 
There are several tools for benchmarking a RAID. The most notable improvement is the speed increase when multiple threads are reading from the same RAID volume.
 
 
[http://sourceforge.net/projects/tiobench/ Tiobench] specifically benchmarks these performance improvements by measuring fully-threaded I/O on the disk.
 
 
[http://www.coker.com.au/bonnie++/ Bonnie++] tests database type access to one or more files, and creation, reading, and deleting of small files which can simulate the usage of programs such as Squid, INN, or Maildir format e-mail. The enclosed [http://www.coker.com.au/bonnie++/zcav/ ZCAV] program tests the performance of different zones of a hard drive without writing any data to the disk.
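A minimal Bonnie++ run against a directory on the RAID volume could look like this (assuming the array is mounted at /home; the {{Codeline|-u}} option names an unprivileged user to run as, which Bonnie++ requires when started as root):

# bonnie++ -d /home -u nobody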
 
 
{{codeline|hdparm}} should '''NOT''' be used to benchmark a RAID, because it provides very inconsistent results.
 
 
== Additional Resources==
 
 
=== LVM ===
 
* [http://sourceware.org/lvm2/ LVM2 Resource Page] on SourceWare.org
 
 
=== Software RAID ===
 
* [http://en.gentoo-wiki.com/wiki/RAID/Software RAID/Software] on the [http://en.gentoo-wiki.com/wiki/Main_Page Gentoo Wiki]
 
* [http://en.gentoo-wiki.com/wiki/Software_RAID_Install Software RAID Install] on the [http://en.gentoo-wiki.com/wiki/Main_Page Gentoo Wiki]
 
* [http://www.gentoo.org/doc/en/articles/software-raid-p1.xml Software RAID in the new Linux 2.4 kernel, Part 1] and [http://www.gentoo.org/doc/en/articles/software-raid-p2.xml Part 2] in the [http://www.gentoo.org/doc/en/index.xml Gentoo Linux Docs]
 
* [http://raid.wiki.kernel.org/index.php/Linux_Raid Linux RAID wiki entry] on [http://www.kernel.org/ The Linux Kernel Archives]
 
* [http://linux-101.org/howto/arch-linux-software-raid-installation-guide Arch Linux software RAID installation guide] on [http://linux-101.org/ Linux 101]
 
* [http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6/html/Storage_Administration_Guide/ch-raid.html Chapter 15: Redundant Array of Independent Disks (RAID)] of Red Hat Enterprise Linux 6 Documentation
 
* [http://tldp.org/FAQ/Linux-RAID-FAQ/x37.html Linux-RAID FAQ] on the [http://tldp.org/ Linux Documentation Project]
 
 
=== RAID & LVM ===
 
* [http://yannickloth.be/blog/2010/08/01/installing-archlinux-with-software-raid1-encrypted-filesystem-and-lvm2/ Setup Arch Linux on top of raid, LVM2 and encrypted partitions] by Yannick Loth
 
* [http://stackoverflow.com/questions/237434/raid-verses-lvm RAID vs. LVM] on [[Wikipedia:Stack Overflow|Stack Overflow]]
 
* [http://serverfault.com/questions/217666/what-is-better-lvm-on-raid-or-raid-on-lvm What is better LVM on RAID or RAID on LVM?] on [[Wikipedia:Server Fault|Server Fault]]
 
* [http://www.gagme.com/greg/linux/raid-lvm.php Managing RAID and LVM with Linux (v0.5)] by Gregory Gulik
 
 
=== Forums threads ===
 
* 2011-04-20 - Arch Linux - [https://bbs.archlinux.org/viewtopic.php?pid=965357 Software RAID and LVM questions]
 
* 2011-03-12 - Arch Linux - [https://bbs.archlinux.org/viewtopic.php?id=114965 Some newbie questions about installation, LVM, grub, RAID]
 
* 2011-07-29 - Gentoo - [http://forums.gentoo.org/viewtopic-t-888624-start-0.html Use RAID metadata 1.2 in boot and root partition]
 
