Installing with Fake RAID

General Description

The purpose of this install is to enable use of a RAID set created by the onboard BIOS RAID controller and thereby open up the possibility of dual-booting Windows from a partition on the RAID set.

History

In Linux 2.4, the ATARAID kernel framework provided support for fake RAID (software RAID assisted by the BIOS). For Linux 2.6, the device-mapper framework can do, among other nice things like LVM and EVMS, the same kind of work as ATARAID in 2.4. While the new code handling the RAID I/O still runs in the kernel, device-mapper is generally configured by a userspace application. It was clear that when using device-mapper for RAID, detection would have to go to userspace.

Heinz Mauelshagen created the dmraid tool to detect RAID sets and create mappings for them. The supported controllers are (mostly cheap) fake-RAID IDE/SATA controllers with RAID functions in their BIOS. The most common ones are Promise FastTrak controllers as well as HPT 37x, Intel, VIA and LSI. Serial ATA RAID controllers such as Silicon Image Medley and NVIDIA nForce are also supported.

  • Tested with an nForce4 chipset on Core Dump (i686 and x86_64). Works with dual-booted Windows XP.
  • Tested with a sil3512 chipset on Overlord (x86_64).
For more information on supported hardware, see the Gentoo guide in the External Links section below.

Backup

Back up all data before playing with RAID; what you do with your hardware is your own responsibility. Data on RAID stripes is highly vulnerable to disk failures, so create regular backups.

New Improved ISOs

Outline

  • Preparation
  • Boot ArchLive
  • Install dmraid
  • Use installer as normal
  • Chroot and install GRUB
  • Reboot

Procedure

Preparation

  • Print out the guides you need.
  • Get the latest core image of Archlinux.
  • Backup all important files since everything on the target disks will be destroyed.
  • Reboot

Setup RAID Sets

  • Enter BIOS Setup and enable the proper RAID Controllers and Channels.
  • Save and Exit BIOS.
  • Press F11 or similar to open the boot menu.
  • Enter RAID BIOS Setup utility and create preferred Stripe / Mirror Sets.

Boot the installer

If your screen can handle it, consider adding vga=795 as a boot option; there are some long lines involved here.

boot: arch vga=795

Install dmraid

  • Load device-mapper and chipset driver modules, install dmraid package and find RAID Sets:
# modprobe dm_mod sata_sil
# pacman -U /src/core/pkg/dmraid-*
# dmraid -ay
# ls -l /dev/mapper/

Example:

/dev/mapper/control            <-- Created by device-mapper
/dev/mapper/sil_aiageicechah   <-- A RAID set on a Silicon Image chipset
/dev/mapper/sil_aiageicechah1  <-- First partition on this RAID set

If there is only one file, control, in /dev/mapper/, check with lsmod whether your chipset module is loaded. If it is, then dmraid does not support this controller or there are no RAID sets on the system (check the RAID BIOS setup again). If you did everything correctly, your remaining option is to use Software RAID, which means no dual-booted RAID system on this controller.
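A quick check, assuming the sata_sil module from the example above (substitute your own chipset driver):

# lsmod | grep -e dm_mod -e sata_sil
# dmraid -r

dmraid -r lists the raw disks that dmraid recognizes as RAID members; if it prints nothing, the controller or the set is not being detected.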

  • Switch to tty2 and start the installer:
 # /arch/setup 
  • Under Prepare Hard Drive, choose 2: Partition Hard Drives, since the Auto-Prepare option will not find your RAID sets.
  • Choose OTHER and type in your RAID set's full path.
This is where you switch back to tty:1 to check your spelling. =)
If you only have one set per chipset you could enter: /dev/mapper/sil*

Partition the RAID Set

  • Create the proper partitions
Note: Now would be a good time to install the other OS, since that is most likely the plan. If installing Windows XP to C:, the boot partition should be changed to type hidden fat32 [1B] to hide it during the Windows installation and then changed back to type linux [83] for GRUB; see the sketch after this note.
Of course, a reboot unfortunately requires some of the above steps to be repeated.
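If your sfdisk supports the --change-id option, the type switch can also be scripted rather than toggled inside cfdisk; a rough sketch, assuming /boot is the first partition on the RAID set:

# sfdisk --change-id /dev/mapper/raid_set 1 1b   <-- hide the boot partition before installing Windows
# sfdisk --change-id /dev/mapper/raid_set 1 83   <-- change it back to linux for GRUB afterwards

Reactivate the RAID nodes afterwards, as described below, so the new types are picked up.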

If you do not find your newly created partitions under Set Filesystem Mountpoints, do the following:

  • Switch to tty:1
  • Deactivate all device-mapper nodes
  • Reactivate the newly created RAID nodes
# dmsetup remove_all
# dmraid.static -ay
# ls -la /dev/mapper
  • Now go back to the installer and they should show up.

Example setup for dualboot:

/dev/mapper/raid_set   <-- This is where we install GRUB
/dev/mapper/raid_set1  <-- /boot
/dev/mapper/raid_set3  <-- c:\ in Windows
/dev/mapper/raid_set4  <-- /
/dev/mapper/raid_set5  <-- swap

Install and Configure Arch

For instance, use three consoles: the setup GUI to configure the system, a chroot to install GRUB and finally a cfdisk reference, since RAID sets have weird names.

tty1: chroot and grub
tty2: /arch/setup
tty3: cfdisk for a reference on spelling, partition table and geometry of the RAID Set. Leave it running and switch to it when needed.

  • Go back to the installer on tty2 and proceed as normal.
It is generally a good idea to keep a running instance of cfdisk on tty:3 to see what should be mounted where.
  • Choose what packages to install and install them.
  • Be SURE to check dmraid!
  • Configure the system: answer yes to use hwdetect and no to everything else unless you want it.
  • Add dm_mod and your chipset module to the MODULES line in mkinitcpio.conf (see the example after this list).
  • Add dmraid to the HOOKS line in mkinitcpio.conf.
  • Exit the menu and make sure dmraid is one of the hooks.
If you missed anything, go back to the configuration menu and choose NO on hwdetect to keep your configs.
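For reference, the two lines might end up looking something like this, assuming the sata_sil chipset from the example above; the rest of your HOOKS line may look different, the important part is that dmraid comes before filesystems:

MODULES="dm_mod sata_sil"
HOOKS="base udev autodetect pata scsi sata dmraid filesystems"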

Install GRUB

  • Install bootloader
In menu.lst, (hd0,0) had to be entered after root on two lines; it is unclear why this was not already there. A sketch of the finished entry follows below.
We are not installing on software RAID or LVM, so answer no if the installer asks.
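With the example layout above (raid_set1 as /boot on (hd0,0) and raid_set4 as /), the entry might look roughly like this; the kernel and initrd file names may differ on your ISO:

title  Arch Linux
root   (hd0,0)
kernel /vmlinuz26 root=/dev/mapper/raid_set4 ro
initrd /kernel26.img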
  • Right about here the GRUB installer will fail. However, it will still copy files to /boot.
  • Switch to tty1 and chroot into our installed system:
# mount -o bind  /dev /mnt/dev
# mount -t proc  none /mnt/proc
# mount -t sysfs none /mnt/sys
# chroot /mnt/ /bin/bash
  • Switch to tty3 and look up the geometry of our RAID Set.
The number of Cylinders, Heads and Sectors of the RAID Set is shown near the top of the cfdisk screen.
Example: 18079 255 63 for a RAID Stripe of two 74GB Raptor discs.
Example: 38914 255 63 for a RAID Stripe of two 160GB laptop disks.
GRUB does not detect this on its own and needs to be told with the geometry command.
  • Switch to tty1, the chrooted environment.
  • Install GRUB on /dev/mapper/raid_set
Exchange C H S below with the proper numbers and be aware that they are not entered in the same order as cfdisk prints them; a worked example follows the commands.
# grub --device-map=/dev/null
grub> device (hd0) /dev/mapper/raid_set
grub> geometry (hd0) C H S
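For instance, with the 74GB Raptor stripe from the example above (18079 cylinders, 255 heads, 63 sectors), the two commands would be:

grub> device (hd0) /dev/mapper/raid_set
grub> geometry (hd0) 18079 255 63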

If geometry is entered properly, GRUB will spit out a list of partitions found on this RAID Set.

grub> root (hd0,0)
grub> setup (hd0)
grub> quit
# reboot

And now we are done. =)

The Old Way

This section is kept intact for historical/informational reasons and for those who only have old ISOs.

Outline

  • Prepare
  • Boot the Installer CD
  • Install dmraid
  • Partition the RAID Set
  • Install and Configure Arch
  • Reboot

Procedure

Prepare Install Media

Setup RAID Sets

  • Enter BIOS Setup and enable the proper RAID Controllers and Channels.
  • Set CD-drive as First Boot Priority.
  • Save and Exit BIOS.
  • Enter RAID BIOS Setup utility and create preferred Stripe / Mirror Sets.

Boot the installer CD

If your screen can handle it, consider adding vga=795 as a boot option; there are some long lines involved here.

boot: arch vga=795

Install dmraid

Assuming the dmraid package has been added to the installer CD's root and the CD is /dev/sr0:

  • Mount CD, install package and find RAID Sets:
# mount /dev/sr0 /src
# pacman -U /src/dmraid-*
# dmraid.static -ay
# ls -l /dev/mapper/

Example:

/dev/mapper/control           <-- Created by device-mapper
/dev/mapper/nvidia_geceiece   <-- The whole RAID set.
/dev/mapper/nvidia_geceiece1  <-- First partition on this RAID set

If there is no /dev/mapper/ directory you will have to load the required modules before running "dmraid -ay".
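For instance, with the nForce chipset from the listing above (substitute your own controller driver):

# modprobe dm_mod sata_nv
# dmraid.static -ay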

If there is only one file, control, in /dev/mapper/, then dmraid does not support this controller or there are no RAID sets on the system. Current options are to use Software RAID, which means no dual-booted RAID system, or to get a supported controller.

Partition the RAID Set

  • Create the proper partitions
  • Deactivate all RAID devices
  • Reactivate the newly created RAID devices
Note: Now would be a good time to install the other OS, since that is most likely the plan. If installing Windows XP to C:, the boot partition should be changed to type hidden fat32 [1B] to hide it during the Windows installation and then changed back to type linux [83] for GRUB.
Of course, a reboot unfortunately requires some of the above steps to be repeated.
# cfdisk /dev/mapper/raid_set
# dmsetup remove_all
# dmraid.static -ay
# ls -la /dev/mapper

Example setup for dualboot:

/dev/mapper/raid_set   <-- This is the whole RAID set.
/dev/mapper/raid_set1  <-- /boot
/dev/mapper/raid_set2  <-- c:\ in windows
/dev/mapper/raid_set3  <-- /
/dev/mapper/raid_set5  <-- swap
/dev/mapper/raid_set6  <-- /tmp
/dev/mapper/raid_set7  <-- /home
/dev/mapper/raid_set8  <-- d:\ in windows

Install and Configure Arch

For instance, use three consoles: the quickinstall script to get all files onto the system, the setup GUI to configure the system, a chroot to install GRUB and finally a cfdisk reference, since RAID sets have weird names.

tty1: /arch/quickinstall, chroot and grub
tty2: /arch/setup
tty3: cfdisk for a reference on spelling, partition table and geometry of the RAID Set. Leave it running and switch to it when needed.

Create and Mount File Systems

  • Switch to tty3 and run cfdisk.
# cfdisk /dev/mapper/raid_set
  • Switch to tty1, create file systems and mount them.
# mkswap /dev/mapper/raid_set5
# mkfs.ext2 /dev/mapper/raid_set1
# mkfs.ext2 /dev/mapper/raid_set6
# mkfs.jfs /dev/mapper/raid_set3
# mkfs.jfs /dev/mapper/raid_set7

# mount /dev/mapper/raid_set3 /mnt/

# mkdir /mnt/boot /mnt/tmp /mnt/home

# mount /dev/mapper/raid_set1 /mnt/boot/
# mount /dev/mapper/raid_set6 /mnt/tmp/
# mount /dev/mapper/raid_set7 /mnt/home/

Install Packages

  • Run the quickinstall script
# /arch/quickinstall cd /mnt/ /src/core/pkg/
  • Prepare for dmraid and chrooted environment
# mount -o bind  /dev /mnt/dev
# mount -t proc  none /mnt/proc
# mount -t sysfs none /mnt/sys
  • Install dmraid
This should be done before chrooting into /mnt/ so as not to get problems with file system sizes.
# pacman --root /mnt/ -U /src/dmraid-*
# chroot /mnt/ /bin/bash
  • Update nodes for GRUB
GRUB will have trouble finding /dev/mapper/raid_set in the chrooted environment unless the device-mapper nodes are updated.
# dmsetup mknodes

Setup Arch

  • Switch to tty2 and start the regular installer.
# /arch/setup
  • Go directly to the System configuration menu and let the installer use its autodetect features.
It is not necessary to enable any software raid, encryption or lvm features for dmraid to work.
  • Edit /etc/mkinitcpio.conf and add dmraid to the HOOKS line (see the example after this list).
It is probably a good idea to put it right after sata. Remember to check that the correct disk controller driver was found by autoconfig and put inside MODULES.
  • Save your changes and repeat with /etc/mkinitcpio.d/kernel26-fallback.conf.
  • Edit any other settings and exit the Configuration menu.
As the initramfs images are regenerated, check for Running hook [dmraid] and see that it installs properly.
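As a sketch, with dmraid placed right after sata and a chipset module such as sata_nv picked up by autoconfig, the lines might look like:

MODULES="sata_nv"
HOOKS="base udev autodetect pata scsi sata dmraid filesystems"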

Note: Every time the Configuration menu is entered, the autoconfiguration process overwrites mkinitcpio.conf and the fallback config. They need to be re-edited every time.

Install GRUB

  • Choose Install Boot Loader from the Archlinux Setup menu and choose GRUB.
  • Find and choose /dev/mapper/raid_set.
  • Edit /boot/grub/menu.lst to point to your RAID set's root partition.
Change root=/dev/sda3 to root=/dev/mapper/raid_set3
We are not installing on software RAID, so answer no if the installer asks.
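The resulting kernel line would then look something like this (the kernel image name may differ):

kernel /vmlinuz26 root=/dev/mapper/raid_set3 ro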
  • The installation will fail, but not before it has copied all necessary files to /boot/.
  • Do not exit the gui installer at this point!
Exiting now would unmount the file systems we previously mounted for the dmraid install and chroot, which is somewhat annoying since they are still needed.
  • Switch to tty3 and look up the geometry of our RAID Set.
The number of Cylinders, Heads and Sectors of the RAID Set is shown near the top of the cfdisk screen.
Example: 18079 255 63 for a RAID Stripe of two 74GB Raptor discs.
GRUB does not detect this on its own and needs to be told with the geometry command.
  • Switch to tty1, the chrooted environment.
  • Install GRUB on /dev/mapper/raid_set
Exchange C H S below with the proper numbers and be aware that they are not entered in the same order as cfdisk prints them.
# grub --device-map=/dev/null
grub> device (hd0) /dev/mapper/raid_set
grub> geometry (hd0) C H S

If geometry is entered properly, GRUB will spit out a list of partitions found on this RAID Set.

grub> root (hd0,0)
grub> setup (hd0)
grub> quit
  • Add mountpoints to /etc/fstab
# echo"/dev/mapper/raid_set3  /	     jfs   defaults  0  1" >> /etc/fstab
# echo"/dev/mapper/raid_set5  swap   swap  defaults  0  0" >> /etc/fstab
# echo"/dev/mapper/raid_set1  /boot  ext2  defaults  0  0" >> /etc/fstab
# echo"/dev/mapper/raid_set6  /tmp   ext2  defaults  0  0" >> /etc/fstab
# echo"/dev/mapper/raid_set7  /home  jfs   defaults  0  2" >> /etc/fstab

Reboot

  • Exit the chrooted environment.
  • Exit installer
  • Quit cfdisk.
  • Unmount the file systems.
  • Exit all shells.
  • Reboot and remove installer CD.
# umount /mnt/*
# umount /mnt
# reboot

External Links

  • Bug report on adding dmraid to the install CD: http://bugs.archlinux.org/task/4762
  • Forum thread that led to this guide: http://bbs.archlinux.org/viewtopic.php?id=22038
  • Gentoo Wiki page that greatly contributed: http://gentoo-wiki.com/HOWTO_Gentoo_Install_on_Bios_(Onboard)_RAID