Installing with Fake RAID
The purpose of this guide is to enable use of a RAID set created by the on-board BIOS RAID controller and thereby allow dual-booting of Linux and Windows from partitions inside the RAID set using GRUB. When using so-called "fake RAID" or "host RAID", the disc sets are reached from /dev/mapper/chipsetName_randomName rather than from the raw /dev/sdX devices.
What is "fake RAID"
- Operating system-based RAID doesn't always protect the boot process and is generally impractical on desktop versions of Windows. Hardware RAID controllers are expensive and proprietary. To fill this gap, cheap "RAID controllers" were introduced that do not contain a RAID controller chip, but simply a standard disk controller chip with special firmware and drivers. During early stage boot-up, the RAID is implemented by the firmware. When a protected-mode operating system kernel such as Linux or a modern version of Microsoft Windows is loaded, the drivers take over.
- These controllers are described by their manufacturers as RAID controllers, and it is rarely made clear to purchasers that the burden of RAID processing is borne by the host computer's central processing unit -- not the RAID controller itself -- thus introducing the aforementioned CPU overhead which hardware controllers do not suffer from. Firmware controllers often can only use certain types of hard drives in their RAID arrays (e.g. SATA for Intel Matrix RAID, as there is neither SCSI nor PATA support in modern Intel ICH southbridges; however, motherboard makers implement RAID controllers outside of the southbridge on some motherboards). Before their introduction, a "RAID controller" implied that the controller did the processing, and the new type has become known in technically knowledgeable circles as "fake RAID" even though the RAID itself is implemented correctly. Adaptec calls them "host RAID". (Wikipedia: RAID)
Despite the terminology, "fake RAID" via dmraid is a robust software RAID implementation that offers a solid system to mirror or stripe data across multiple disks with negligible overhead for any modern system. dmraid is comparable to mdraid (pure Linux software RAID) with the added benefit of being able to completely rebuild a drive after a failure before the system is ever booted.
History
In Linux 2.4, the ATARAID kernel framework provided support for fake RAID (software RAID assisted by the BIOS). For Linux 2.6 the device-mapper framework can, among other nice things like LVM and EVMS, do the same kind of work as ATARAID in 2.4. Whilst the new code handling the RAID I/O still runs in the kernel, device-mapper is generally configured by a userspace application. It was clear that when using device-mapper for RAID, detection would go to userspace.
Heinz Mauelshagen created the dmraid tool to detect RAID sets and create mappings for them. The controllers supported are (mostly cheap) fake RAID IDE/SATA controllers which contain BIOS functions. Common examples include: Promise FastTrak controllers; HighPoint HPT37x; Intel Matrix RAID; Silicon Image Medley; and NVIDIA nForce.
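To get a quick look at what dmraid can see on a given machine, it can simply list the RAID sets it detects without activating anything (a read-only check; activation is covered later in this guide):
# dmraid -r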
Supported hardware
- Tested with ICH10R on 2009.08 (x86_64) -- pointone 23:10, 29 November 2009 (EST)
- Tested with Sil3124 on 2009.02 (i686) -- loosec
- Tested with nForce4 on Core Dump (i686 and x86_64) -- loosec
- Tested with Sil3512 on Overlord (x86_64) -- loosec
- Tested with nForce2 on 2011.05 (i686) -- Jere2001; drankinatty
- Tested with nVidia MCP78S on 2011.06 (x86_64) -- drankinatty
- Tested with nVidia CK804 on 2011.06 (x86_64) -- drankinatty
- Tested with AMD Option ROM Utility using pdc_adma on 2011.12 (x86_64)
Preparation
- Open up any needed guides (e.g. Beginners' Guide, Official Arch Linux Install Guide) on another machine. If you do not have access to another machine, print them out.
- Download the latest Arch Linux install image.
- Backup all important files since everything on the target partitions will be destroyed.
Configure RAID sets
- Enter your BIOS setup and enable the RAID controller.
- The BIOS may contain an option to configure SATA drives as "IDE", "AHCI", or "RAID"; ensure "RAID" is selected.
- Save and exit the BIOS setup. During boot, enter the RAID setup utility.
- The RAID utility is usually either accessible via the boot menu (often F8, F10 or CTRL+I) or whilst the RAID controller is initializing.
- Use the RAID setup utility to create preferred stripe/mirror sets.
Boot the installer
See Official Arch Linux Install Guide#Pre-Installation for details.
Load dmraid
Load device-mapper and find RAID sets:
# modprobe dm_mod
# dmraid -ay
# ls -la /dev/mapper/
/dev/mapper/control            <- Created by device-mapper; if present, device-mapper is likely functioning
/dev/mapper/sil_aiageicechah   <- A RAID set on a Silicon Image SATA RAID controller
/dev/mapper/sil_aiageicechah1  <- First partition on this RAID set
If there is only one file (/dev/mapper/control), check whether your controller chipset module is loaded with lsmod. If it is, then dmraid does not support this controller or there are no RAID sets on the system (check the RAID BIOS setup again). If that is the case, you may be forced to use software RAID (this means no dual-booted RAID system on this controller).
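For example, assuming the Silicon Image driver used in the example below (substitute your own chipset's module name):
# lsmod | grep sata_sil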
If your chipset module is NOT loaded, load it now. For example:
# modprobe sata_sil
Check /lib/modules/`uname -r`/kernel/drivers/ata/ for available drivers.
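To list what is available there, for example:
# ls /lib/modules/`uname -r`/kernel/drivers/ata/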
To test the RAID sets:
# dmraid -tay
Perform traditional installation
Switch to tty2 and start the installer:
Partition the RAID set
- Under Prepare Hard Drive choose Manually partition hard drives since the Auto-prepare option will not find your RAID sets.
- Choose OTHER and type in your RAID set's full path (e.g. /dev/mapper/sil_aiageicechah). Switch back to tty1 to check your spelling.
- Create the proper partitions the normal way.
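Alternatively, the partitions can be created from the shell on tty1 with a tool such as cfdisk run against the mapped RAID set (the device name here is the example set from earlier; adjust to your own):
# cfdisk /dev/mapper/sil_aiageicechah
After writing a new partition table this way, re-run dmraid -ay so the partition mappings (e.g. sil_aiageicechah1) are recreated, as described in the next subsection.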
Mounting the filesystem
If -- and this is probably the case -- you do not find your newly created partitions under Manually configure block devices, filesystems and mountpoints:
- Switch back to tty1.
- Deactivate all device-mapper nodes:
# dmsetup remove_all
- Reactivate the newly-created RAID nodes:
# dmraid -ay
# ls -la /dev/mapper
- Switch to tty2, re-enter the Manually configure block devices, filesystems and mountpoints menu and the partitions should be available.
Install and configure Arch
Re-activate the installer (tty2) and proceed as normal with the following exceptions:
- Select Packages
- Ensure dmraid is marked for installation
- Configure System
- Add dm_mod to the MODULES line in mkinitcpio.conf. If using a mirrored (RAID 1) array, additionally add dm_mirror
- Add chipset_module_driver to the MODULES line if necessary
- Add dmraid to the HOOKS line in mkinitcpio.conf; preferably after sata but before filesystems (see the example after this list)
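As a rough sketch, the relevant lines in mkinitcpio.conf might then look like the following (the module names are illustrative: sata_sil stands in for your chipset's driver, and dm_mirror is only needed for RAID 1):
MODULES="dm_mod dm_mirror sata_sil"
HOOKS="base udev autodetect block dmraid filesystems"
If you edit the file after the installer has already generated the initramfs image, regenerate it (e.g. with mkinitcpio -p linux) for the change to take effect.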
Install bootloader
Please read GRUB2 for more information about configuring GRUB2. Currently, the latest version of grub-bios is not compatible with fake RAID. If you get an error like this when you run grub-install:
$ grub-install /dev/mapper/sil_aiageicechah
Path `/boot/grub` is not readable by GRUB on boot. Installation is impossible. Aborting.
1. Download an older version of the GRUB packages:
i686:
http://arm.konnichi.com/extra/os/i686/grub2-bios-1:1.99-6-i686.pkg.tar.xz
http://arm.konnichi.com/extra/os/i686/grub2-common-1:1.99-6-i686.pkg.tar.xz
x86_64:
http://arm.konnichi.com/extra/os/x86_64/grub2-bios-1:1.99-6-x86_64.pkg.tar.xz
http://arm.konnichi.com/extra/os/x86_64/grub2-common-1:1.99-6-x86_64.pkg.tar.xz
You can verify these packages against their .sig files if you want to be careful.
2. Install these older packages with "pacman -U *.pkg.tar.xz"
3. (Optional) Install os-prober if you have another OS such as Windows.
4. $ grub-install /dev/mapper/sil_aiageicechah
5. $ grub-mkconfig -o /boot/grub/grub.cfg
6. (Optional) Put grub2-bios and grub2-common in the IgnorePkg array in /etc/pacman.conf if you do not want pacman to upgrade them (see the example below).
That's all: grub-mkconfig will generate the configuration automatically. You can edit /etc/default/grub to adjust the configuration (timeout, colors, etc.) before running grub-mkconfig.
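For the optional IgnorePkg step above, the corresponding line in /etc/pacman.conf would look something like this (merge with any packages already listed there):
IgnorePkg = grub2-bios grub2-common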
Troubleshooting
Booting with degraded array
One drawback of the fake RAID approach on GNU/Linux is that dmraid is currently unable to handle degraded arrays, and will refuse to activate. In this scenario, one must resolve the problem from within another OS (e.g. Windows) or via the BIOS/chipset RAID utility.
Alternatively, if using a mirrored (RAID 1) array, users may temporarily bypass dmraid during the boot process and boot from a single drive:
- Edit the kernel line from the GRUB menu (an example line is sketched after this list)
  - Remove references to dmraid devices (e.g. change the root= parameter from the /dev/mapper device to the underlying /dev/sdX partition)
  - Add disablehooks=dmraid to prevent a kernel panic when dmraid discovers the degraded array
- Boot the system
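Purely as an illustration (the actual image name and surviving partition depend on your setup), the edited kernel line might end up looking like:
linux /boot/vmlinuz-linux root=/dev/sda1 ro disablehooks=dmraid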
Error: Unable to determine major/minor number of root device
If you experience a boot failure after a kernel update where the boot process is unable to determine the major/minor number of the root device, this might just be a timing problem (i.e. dmraid -ay might be called before /dev/sd* is fully set up and detected). This can affect both the normal and LTS kernel images. Booting the 'Fallback' kernel image should work. The error will look something like this:
Activating dmraid arrays...
no block devices found
Waiting 10 seconds for device /dev/mapper/nvidia_baaccajap5
Root device '/dev/mapper/nvidia_baaccajap5' doesn't exist attempting to create it.
Error: Unable to determine major/minor number of root device '/dev/mapper/nvidia_baaccajap5'
To work around this problem:
- boot the Fallback kernel
- insert the 'sleep' hook in the HOOKS line of /etc/mkinitcpio.conf after the 'udev' hook like this:
HOOKS="base udev sleep autodetect block dmraid filesystems"
- rebuild the kernel image and reboot
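A sketch of that rebuild step, assuming the stock kernel preset (use the corresponding LTS preset if you boot the LTS image):
# mkinitcpio -p linux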
dmraid mirror fails to activate
Does everything above work correctly the first time, but then when you reboot dmraid cannot find the array?
This is because Linux software RAID (mdadm) has already attempted to assemble the fakeRAID array during system init and left it in an unusable state. To prevent mdadm from running, move the udev rule that is responsible out of the way:
# cd /lib/udev/rules.d
# mkdir disabled
# mv 64-md-raid.rules disabled/
# reboot