Installing with Fake RAID
The purpose of this guide is to enable use of a RAID set created by the on-board BIOS RAID controller and thereby allow dual-booting of Linux and Windows from partitions inside the RAID set using GRUB. When using so-called "fake RAID" or "host RAID", the RAID sets are accessed through device nodes under /dev/mapper/ rather than the underlying /dev/sdX disks.
What is "fake RAID"
- Operating system-based RAID doesn't always protect the boot process and is generally impractical on desktop versions of Windows. Hardware RAID controllers are expensive and proprietary. To fill this gap, cheap "RAID controllers" were introduced that do not contain a RAID controller chip, but simply a standard disk controller chip with special firmware and drivers. During early stage boot-up, the RAID is implemented by the firmware. When a protected-mode operating system kernel such as Linux or a modern version of Microsoft Windows is loaded, the drivers take over.
- These controllers are described by their manufacturers as RAID controllers, and it is rarely made clear to purchasers that the burden of RAID processing is borne by the host computer's central processing unit -- not the RAID controller itself -- thus introducing the aforementioned CPU overhead which hardware controllers don't suffer from. Firmware controllers often can only use certain types of hard drives in their RAID arrays (e.g. SATA for Intel Matrix RAID, as there is neither SCSI nor PATA support in modern Intel ICH southbridges; however, motherboard makers implement RAID controllers outside of the southbridge on some motherboards). Before their introduction, a "RAID controller" implied that the controller did the processing, and the new type has become known in technically knowledgeable circles as "fake RAID" even though the RAID itself is implemented correctly. Adaptec calls them "host RAID".
Despite the terminology, "fake RAID" via dmraid is a robust software RAID implementation that offers a solid system to mirror or stripe data across multiple disks with negligible overhead on any modern system. dmraid is comparable to mdraid (pure Linux software RAID), with the added benefit of being able to completely rebuild a drive after a failure before the system is ever booted.
In Linux 2.4, the ATARAID kernel framework provided support for fake RAID (software RAID assisted by the BIOS). For Linux 2.6 the device-mapper framework can, among other nice things like LVM and EVMS, do the same kind of work as ATARAID in 2.4. Whilst the new code handling the RAID I/O still runs in the kernel, device-mapper is generally configured by a userspace application, so when device-mapper is used for RAID, the detection of RAID sets happens in userspace.
Heinz Maulshagen created the dmraid tool to detect RAID sets and create mappings for them. The controllers supported are (mostly cheap) fake RAID IDE/SATA controllers which contain BIOS functions. Common examples include: Promise FastTrak controllers; HighPoint HPT37x; Intel Matrix RAID; Silicon Image Medley; and NVIDIA nForce.
- Tested with ICH10R on 2009.08 (x86_64) -- pointone 23:10, 29 November 2009 (EST)
- Tested with Sil3124 on 2009.02 (i686) -- loosec
- Tested with nForce4 on Core Dump (i686 and x86_64) -- loosec
- Tested with Sil3512 on Overlord (x86_64) -- loosec
For more information on supported hardware, see RAID/Onboard @ Gentoo Linux Wiki
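Optionally, once dmraid is available (installing it on the live system is covered below), two read-only queries give a quick view of what it detects on your hardware; both options are part of the standard dmraid tool:

# dmraid -r    <- list block devices recognized as fake RAID members and their metadata format
# dmraid -s    <- show the discovered RAID sets and their status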
Outline
- Boot the installer
- Install dmraid
- Perform traditional installation
- Install GRUB
Preparation
- Print out any needed guides (e.g. the Beginners' Guide, the Official Arch Linux Install Guide).
- Download the latest Arch Linux install image (see the checksum check after this list).
- Back up all important files, since everything on the target partitions will be destroyed.
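As a guard against a corrupted download, it is worth verifying the image against the checksum published on the download page; the wildcard below is only a stand-in for your actual image file name:

# md5sum archlinux-*.iso    <- compare against the md5 sum published alongside the image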
Configure RAID sets
- Enter your BIOS setup and enable the RAID controller.
- The BIOS may contain an option to configure SATA drives as "IDE", "AHCI", or "RAID"; ensure "RAID" is selected.
- Save and exit the BIOS setup. During boot, enter the RAID setup utility.
- The RAID utility is usually either accessible via the boot menu (often F8 or F10) or whilst the RAID controller is initializing.
- Use the RAID setup utility to create preferred stripe/mirror sets.
Boot the installer
Load device-mapper; install dmraid package and find RAID sets:
# modprobe dm_mod
# pacman -S dmraid
# dmraid -ay
# ls -la /dev/mapper/
/dev/mapper/control            <- Created by device-mapper; if present, device-mapper is likely functioning
/dev/mapper/sil_aiageicechah   <- A RAID set on a Silicon Image SATA RAID controller
/dev/mapper/sil_aiageicechah1  <- First partition on this RAID set
If there is only one file (/dev/mapper/control), check whether your controller chipset module is loaded with lsmod. If it is, then dmraid does not support this controller or there are no RAID sets on the system (check the RAID BIOS setup again). If so, you may be forced to use software RAID (this means no dual-booted RAID system on this controller).
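For example, with the Silicon Image chipset used in the examples on this page (sata_sil is just that example's module name), the check would be:

# lsmod | grep sata_sil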
If your chipset module is NOT loaded, load it now. For example:
# modprobe sata_sil
See the kernel modules under /lib/modules/ for the available controller drivers.
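Assuming the standard module layout of a stock kernel, the ATA/SATA controller drivers can be listed with:

# ls /lib/modules/$(uname -r)/kernel/drivers/ata/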
To test the RAID sets:
# dmraid -tay
Perform traditional installation
Switch to tty2 and start the installer:
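On the install images mentioned above (2009.02/2009.08), the installer is normally started with:

# /arch/setup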
Partition the RAID set
- Under Prepare Hard Drive choose Manually partition hard drives since the Auto-prepare option will not find your RAID sets.
- Choose OTHER and type in your RAID set's full path (e.g. /dev/mapper/sil_aiageicechah). Switch back to tty1 to check your spelling.
- Create the proper partitions the normal way.
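If you prefer to partition from the shell on tty1 instead, cfdisk can be pointed directly at the RAID set node (the set name below is the earlier example; use your own):

# cfdisk /dev/mapper/sil_aiageicechah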
Mounting the filesystem
If -- and this is probably the case -- you do not find your newly created partitions under Manually configure block devices, filesystems and mountpoints:
- Switch back to tty1.
- Deactivate all device-mapper nodes:
# dmsetup remove_all
- Reactivate the newly-created RAID nodes:
# dmraid -ay
# ls -la /dev/mapper
- Switch to tty2, re-enter the Manually configure block devices, filesystems and mountpoints menu and the partitions should be available.
Install and configure Arch
Re-activate the installer (tty2) and proceed as normal with the following exceptions:
- Select Packages
- Ensure dmraid is marked for installation
- Configure System
- Install bootloader
- In menu.lst you may have to add (hd0,0) after root on two lines if it is not already there (see the example entries further below).
- We are not installing on software RAID or LVM.
- Make sure you correctly designate the bootable partition
- Note: Depending on how you have partitioned your array, you may need to specify something other than (hd0,0) for your "GRUB" root partition. This is not always the same as the "Linux" root partition. The correct number for the GRUB root partition (in the menu.lst file) is the number of the partition containing /boot. That is only the / (Linux root) partition if you have not created a separate /boot partition. If you created a separate /boot partition during partitioning, you must designate the /boot partition as the "GRUB" root. For example, if you created logical partitions (the equivalent of sda5, sda6, sda7, etc.) that were mapped into the device mapper as:
/dev/mapper       | Linux | GRUB Partition Number
nvidia_fffadgic   |       |
nvidia_fffadgic5  | /     | 4
nvidia_fffadgic6  | /boot | 5
nvidia_fffadgic7  | /home | 6
nvidia_fffadgic8  | swap  |
- The correct "GRUB" root designation would be (hd0,5)
- Additionally note: if you have more than one dmraid array, or multiple Linux distributions installed on different dmraid arrays (for example two disks in nvidia_fdaacfde and two disks in nvidia_fffadgic), and you are installing to the second dmraid array (nvidia_fffadgic), you will need to designate the second array's /boot partition as the GRUB root. In the example above, if nvidia_fffadgic were the second dmraid array you were installing to, your root designation would be root (hd1,5).
- Right about here the GRUB installer will FAIL; however, it will still copy files to /boot. DO NOT GIVE UP AND REBOOT; just follow the directions below:
- Also, if you did not create a separate /boot partition, have a look at the generated menu.lst: it will contain the wrong paths. Instead of /vmlinuz the path has to be /boot/vmlinuz, and likewise /boot/kernel26.img. This is not needed if you have a dedicated partition for /boot in your partition scheme.
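The following is only a sketch of what the resulting menu.lst entry might look like; it assumes the kernel26-era file names (vmlinuz26, kernel26.img) and the nvidia_fffadgic layout from the table above -- substitute your own set and partition names. With a separate /boot partition (GRUB root (hd0,5)):

title  Arch Linux
root   (hd0,5)
kernel /vmlinuz26 root=/dev/mapper/nvidia_fffadgic5 ro
initrd /kernel26.img

Without a separate /boot partition, the GRUB root is the Linux root partition (here (hd0,4)) and the paths gain the /boot prefix:

title  Arch Linux
root   (hd0,4)
kernel /boot/vmlinuz26 root=/dev/mapper/nvidia_fffadgic5 ro
initrd /boot/kernel26.img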
- Switch to tty1 and chroot into our installed system:
# mount -o bind /dev /mnt/dev
# mount -t proc none /mnt/proc
# mount -t sysfs none /mnt/sys
# chroot /mnt /bin/bash
- Switch to tty3 and look up the geometry of our RAID Set.
- The number of cylinders, heads and sectors of the RAID set is displayed at the top of the cfdisk screen.
18079 255 63 for a RAID stripe of two 74 GB Raptor discs.
38914 255 63 for a RAID stripe of two 160 GB laptop discs.
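These numbers can be read by opening the RAID set in cfdisk; raid_set below is the same placeholder used in the GRUB commands further down, so substitute your actual set name:

# cfdisk /dev/mapper/raid_set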
- GRUB cannot work out the geometry of the RAID set by itself and needs to be told with the 'geometry' command.
- Switch to tty1, the chrooted environment.
- Install GRUB on the RAID set:
- Exchange C H S below with the proper numbers and be aware that they are not entered in the same order as they are read from cfdisk.
# dmsetup mknodes
# grub --device-map=/dev/null

grub> device (hd0) /dev/mapper/raid_set
grub> geometry (hd0) C H S
- If the geometry is entered properly, GRUB will print a list of partitions found on this RAID set. Then continue to install the boot loader into the Master Boot Record, changing "hd0" to "hd1" if required. If your /boot (or /, when there is no separate /boot partition) is not the first partition of the set, also adjust the ,0 in the root command below to the GRUB partition number worked out earlier; the root lines in menu.lst take care of the partition actually booted from once the boot loader has been read from the MBR.
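If you are unsure which (hdX,Y) holds the GRUB stage files, the GRUB legacy shell's find command can locate them before you run the commands below (use /grub/stage1 instead when /boot is a separate partition):

grub> find /boot/grub/stage1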
grub> root (hd0,0)
grub> setup (hd0)
grub> quit

# reboot
- Create /boot/grub/device.map if multiple dmraid devices are present
- Lastly, if you have multiple dmraid devices with multiple sets of arrays set up (say: nvidia_fdaacfde and nvidia_fffadgic), then create the /boot/grub/device.map file to help grub retain its sanity when working with the arrays. All the file does is map the dmraid device to a traditional hd#. Using these dmraid devices, your device.map file will look like this:
(hd0) /dev/mapper/nvidia_fdaacfde
(hd1) /dev/mapper/nvidia_fffadgic
(fd0) /dev/fd0
- You can delete the entry for the floppy drive (fd0) if you don't have one.
And now we are done. =)