Installing with Fake RAID

The purpose of this guide is to enable use of a RAID set created by the on-board BIOS RAID controller and thereby allow dual-booting of Linux and Windows from partitions inside the RAID set using GRUB. When using so-called "fake RAID" or "host RAID", the disk sets are accessed via /dev/mapper/chipsetName_randomName rather than /dev/sdX.

What is "fake RAID"

From Wikipedia:RAID:

Operating system-based RAID doesn't always protect the boot process and is generally impractical on desktop versions of Windows. Hardware RAID controllers are expensive and proprietary. To fill this gap, cheap "RAID controllers" were introduced that do not contain a RAID controller chip, but simply a standard disk controller chip with special firmware and drivers. During early stage boot-up, the RAID is implemented by the firmware. When a protected-mode operating system kernel such as Linux or a modern version of Microsoft Windows is loaded, the drivers take over.
These controllers are described by their manufacturers as RAID controllers, and it is rarely made clear to purchasers that the burden of RAID processing is borne by the host computer's central processing unit -- not the RAID controller itself -- thus introducing the aforementioned CPU overhead which hardware controllers do not suffer from. Firmware controllers often can only use certain types of hard drives in their RAID arrays (e.g. SATA for Intel Matrix RAID, as there is neither SCSI nor PATA support in modern Intel ICH southbridges; however, motherboard makers implement RAID controllers outside of the southbridge on some motherboards). Before their introduction, a "RAID controller" implied that the controller did the processing, and the new type has become known in technically knowledgeable circles as "fake RAID" even though the RAID itself is implemented correctly. Adaptec calls them "host RAID".

See also FakeRaidHowto @ Community Ubuntu Documentation for more information.

Despite the terminology, "fake RAID" via dmraid is a robust software RAID implementation that offers a solid system to mirror or stripe data across multiple disks with negligible overhead for any modern system. dmraid is comparable to mdraid (pure Linux software RAID) with the added benefit of being able to completely rebuild a drive after a failure before the system is ever booted. However, be aware that not all BIOS RAID implementations support drive rebuilding; some rely on non-Linux software to perform the rebuild. If your system cannot rebuild a drive in the BIOS RAID setup utility, you are strongly encouraged to use mdraid (pure Linux software RAID via mdadm; see RAID) instead of dmraid. Otherwise you will be unable to rebuild an array after a drive failure, or to retrieve data from your array after a motherboard failure, without a lot of additional work.

History

In Linux 2.4, the ATARAID kernel framework provided support for fake RAID (software RAID assisted by the BIOS). For Linux 2.6 the device-mapper framework can, among other nice things like LVM and EVMS, do the same kind of work as ATARAID in 2.4. Whilst the new code handling the RAID I/O still runs in the kernel, device-mapper is generally configured by a userspace application. It was clear that when using the device-mapper for RAID, detection would go to userspace.

Heinz Maulshagen created the dmraid tool to detect RAID sets and create mappings for them. The controllers supported are (mostly cheap) fake RAID IDE/SATA controllers which contain BIOS functions. Common examples include: Promise FastTrak controllers; HighPoint HPT37x; Intel Matrix RAID; Silicon Image Medley; and NVIDIA nForce.

Supported hardware

  • Tested with ICH10R on 2009.08 (x86_64) -- pointone 23:10, 29 November 2009 (EST)
  • Tested with Sil3124 on 2009.02 (i686) -- loosec
  • Tested with nForce4 on Core Dump (i686 and x86_64) -- loosec
  • Tested with Sil3512 on Overlord (x86_64) -- loosec
  • Tested with nForce2 on 2011.05 (i686) -- Jere2001; drankinatty
  • Tested with nVidia MCP78S on 2011.06 (x86_64) -- drankinatty
  • Tested with nVidia CK804 on 2011.06 (x86_64) -- drankinatty
  • Tested with AMD Option ROM Utility using pdc_adma on 2011.12 (x86_64)

This article or section is out of date.

Reason: The installation steps do not reflect the current Arch Linux installation procedure and need to be updated. It also appears that Intel now recommends mdadm instead of dmraid (see Discussion). Update in progress.

Preparation

Warning: Backup all data before playing with RAID. What you do with your hardware is your own responsibility. Data on RAID stripes is highly vulnerable to disc failures. Create regular backups or consider using mirror sets. Consider yourself warned!
  • Open up any needed guides (e.g. Beginners' guide, Installation guide) on another machine. If you do not have access to another machine, print them out.
  • Download the latest Arch Linux install image.
  • Backup all important files since everything on the target partitions will be destroyed.

Configure RAID sets

Warning: If your drives are not already configured as RAID and Windows is already installed, switching to "RAID" may cause Windows to BSOD during boot.[1]
  • Enter your BIOS setup and enable the RAID controller.
    • The BIOS may contain an option to configure SATA drives as "IDE", "AHCI", or "RAID"; ensure "RAID" is selected.
  • Save and exit the BIOS setup. During boot, enter the RAID setup utility.
    • The RAID utility is usually either accessible via the boot menu (often F8, F10 or CTRL+I) or whilst the RAID controller is initializing.
  • Use the RAID setup utility to create preferred stripe/mirror sets.
Tip: See your motherboard documentation for details. The exact procedure may vary.

Boot the installer

See Installation guide#Pre-Installation for details.

Load dmraid

Load device-mapper and find RAID sets:

# modprobe dm_mod
# dmraid -ay
# ls -la /dev/mapper/
Warning: Command "dmraid -ay" could fail after boot to Arch linux Release: 2011.08.19 as image file with initial ramdisk environment does not support dmraid. You could use an older Release: 2010.05. Note that you must correct your kernel name and initrd name in grubs menu.lst after installing as these releases use different naming

Example output:

/dev/mapper/control            <- Created by device-mapper; if present, device-mapper is likely functioning
/dev/mapper/sil_aiageicechah   <- A RAID set on a Silicon Image SATA RAID controller
/dev/mapper/sil_aiageicechah1  <- First partition on this RAID Set

If there is only one file (/dev/mapper/control), check whether your controller chipset module is loaded with lsmod. If it is, then dmraid does not support this controller or there are no RAID sets on the system (check the RAID BIOS setup again). If the setup is correct and dmraid still finds nothing, you may be forced to use software RAID (this means no dual-booted RAID system on this controller).
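
For example, to check for the Silicon Image module used later in this section (substitute your own chipset driver):

# lsmod | grep sata_sil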

If your chipset module is NOT loaded, load it now. For example:

# modprobe sata_sil

See /lib/modules/`uname -r`/kernel/drivers/ata/ for available drivers.
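
For example, to list the available drivers on the install environment:

# ls /lib/modules/`uname -r`/kernel/drivers/ata/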

To test the RAID sets:

# dmraid -tay

Perform traditional installation

Switch to tty2 and start the installer:

# /arch/setup

Partition the RAID set

  • Under Prepare Hard Drive choose Manually partition hard drives since the Auto-prepare option will not find your RAID sets.
  • Choose OTHER and type in your RAID set's full path (e.g. /dev/mapper/sil_aiageicechah). Switch back to tty1 to check your spelling.
  • Create the proper partitions the normal way.
Tip: This would be a good time to install the "other" OS if planning to dual-boot. If installing Windows XP to "C:" then all partitions before the Windows partition should be changed to type [1B] (hidden FAT32) to hide them during the Windows installation. When this is done, change them back to type [83] (Linux). Of course, a reboot unfortunately requires some of the above steps to be repeated.

Mounting the filesystem

If -- and this is probably the case -- you do not find your newly created partitions under Manually configure block devices, filesystems and mountpoints:

  • Switch back to tty1.
  • Deactivate all device-mapper nodes:
# dmsetup remove_all
  • Reactivate the newly-created RAID nodes:
# dmraid -ay
# ls -la /dev/mapper
  • Switch to tty2, re-enter the Manually configure block devices, filesystems and mountpoints menu and the partitions should be available.
Warning: NEVER delete a partition in cfdisk to create two partitions with dmraid after Manually configure block devices, filesystems and mountpoints has been set; this really screws with the dmraid metadata and leaves the existing partitions worthless. Solution: delete the array from the BIOS and re-create it to force creation under a new /dev/mapper ID, then reinstall and repartition.

Install and configure Arch

Tip: Utilize three consoles: the setup GUI to configure the system, a chroot to install GRUB, and finally a cfdisk reference since RAID sets have weird names.
  • tty1: chroot and grub-install
  • tty2: /arch/setup
  • tty3: cfdisk for a reference in spelling, partition table and geometry of the RAID set
Leave the programs running and switch between them when needed (a cfdisk example follows below).
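
For reference on tty3, a minimal example of pointing cfdisk at the RAID set (the device name is taken from the earlier Silicon Image example; use your own set's name):

# cfdisk /dev/mapper/sil_aiageicechah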

Re-activate the installer (tty2) and proceed as normal with the following exceptions:

  • Select Packages
    • Ensure dmraid is marked for installation
  • Configure System
    • Add dm_mod to the MODULES line in mkinitcpio.conf. If using a mirrored (RAID 1) array, additionally add dm_mirror
    • Add chipset_module_driver to the MODULES line if necessary
    • Add dmraid to the HOOKS line in mkinitcpio.conf; preferably after sata but before filesystems
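
As a sketch, the edited lines in /etc/mkinitcpio.conf might look like the following (sata_sil stands in for your chipset driver and dm_mirror is only needed for RAID 1; newer mkinitcpio releases use the block hook where older install media used pata/scsi/sata, so match whatever your HOOKS line already contains and keep dmraid before filesystems):

MODULES="dm_mod dm_mirror sata_sil"
HOOKS="base udev autodetect block dmraid filesystems"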

Install bootloader

Use GRUB2

Please read GRUB2 for more information about configuring GRUB2. Currently, the latest version of grub-bios is not compatible with fake RAID. If you get an error like this when running grub-install:

 $ grub-install /dev/mapper/sil_aiageicechah
 Path `/boot/grub` is not readable by GRUB on boot. Installation is impossible. Aborting.

You can try an older version of GRUB. Check the AUR for available packages.

1. Download an older version of the GRUB package.

2. Install the downloaded packages with "pacman -U *.pkg.tar.xz".

3. (Optional) Install os-prober if you have another OS such as Windows.

4. $ grub-install /dev/mapper/sil_aiageicechah

5. $ grub-mkconfig -o /boot/grub/grub.cfg

6. (Optional) Add grub2-bios and grub2-common to the IgnorePkg array in /etc/pacman.conf (see the example line below) if you do not want pacman to upgrade them.
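
For step 6, the relevant line in /etc/pacman.conf would look something like this (package names as used above):

IgnorePkg = grub2-bios grub2-common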

That's all; grub-mkconfig will generate the configuration automatically. You can edit /etc/default/grub to modify the configuration (timeout, colors, etc.) before running grub-mkconfig.
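
For example, a couple of commonly adjusted entries in /etc/default/grub (standard GRUB variables; the values are only placeholders) before re-running grub-mkconfig:

GRUB_DEFAULT=0
GRUB_TIMEOUT=5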

Troubleshooting

Booting with degraded array

One drawback of the fake RAID approach on GNU/Linux is that dmraid is currently unable to handle degraded arrays, and will refuse to activate. In this scenario, one must resolve the problem from within another OS (e.g. Windows) or via the BIOS/chipset RAID utility.

Alternatively, if using a mirrored (RAID 1) array, users may temporarily bypass dmraid during the boot process and boot from a single drive:

  1. Edit the kernel line from the GRUB menu
    1. Remove references to dmraid devices (e.g. change /dev/mapper/raidSet1 to /dev/sda1; see the example kernel line after this list)
    2. Append disablehooks=dmraid to prevent a kernel panic when dmraid discovers the degraded array
  2. Boot the system
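
For illustration only, assuming a menu.lst-style entry and typical device names (both are assumptions for a RAID 1 mirror), the kernel line might change from:

kernel /boot/vmlinuz-linux root=/dev/mapper/raidSet1 ro

to:

kernel /boot/vmlinuz-linux root=/dev/sda1 ro disablehooks=dmraid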

Error: Unable to determine major/minor number of root device

If you experience a boot failure after a kernel update where the boot process is unable to determine the major/minor number of the root device, this might just be a timing problem (i.e. dmraid -ay might be called before /dev/sd* is fully set up and detected). This can affect both the normal and LTS kernel images. Booting the 'Fallback' kernel image should work. The error will look something like this:

Activating dmraid arrays...
no block devices found
Waiting 10 seconds for device /dev/mapper/nvidia_baaccajap5
Root device '/dev/mapper/nvidia_baaccajap5' doesn't exist attempting to create it.
Error: Unable to determine major/minor number of root device '/dev/mapper/nvidia_baaccajap5'

To work around this problem:

  • boot the Fallback kernel
  • insert the 'sleep' hook in the HOOKS line of /etc/mkinitcpio.conf after the 'udev' hook like this:
HOOKS="base udev sleep autodetect block dmraid filesystems"
  • rebuild the kernel image and reboot
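
A sketch of that last step, assuming the stock linux preset name:

# mkinitcpio -p linux
# reboot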

dmraid mirror fails to activate

Does everything above work correctly the first time, but then when you reboot dmraid cannot find the array?

This is because Linux software RAID (mdadm) has already attempted to mount the fakeraid array during system init and left it in an unmountable state. To prevent mdadm from running, move the udev rule that is responsible out of the way:

# cd /lib/udev/rules.d
# mkdir disabled
# mv 64-md-raid.rules disabled/
# reboot

No block devices for partitions on existing RAID array

If your existing array, set up before attempting to install Arch, appears in /dev/mapper/raidnamehere but does not have any partitions (raidnamehere1, etc.), re-check the status of your RAID partitions (see the commands below).

Arch may not create block devices for partitions that work in another OS if there are certain, even minor, problems.

gparted is useful to diagnose and repair most problems. Unfortunately, you may have to repartition from scratch.
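
To re-check the array from a console, the commands already used in this guide plus dmraid's set listing can help:

# dmraid -s
# ls -la /dev/mapper/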

See also

  • Related forum thread: https://bbs.archlinux.org/viewtopic.php?id=22038