= Install Arch Linux from existing Linux =

[[Category:Getting and installing Arch (English)]][[Category:HOWTOs (English)]]

{{Article summary start}}
{{Article summary heading|Available in languages}}
{{i18n_entry|Українська|Install from Existing Linux (Українська)}}
{{Article summary end}}

This guide combines and updates the three previously existing, highly similar alternative install guides on this wiki. It is intended for anyone who wants to install Arch Linux from any other running Linux system, whether a LiveCD or a pre-existing install of a different distro.

==Overview==
Arch Linux's [[Pacman]] can be configured (with the -r option) to perform its operations in any directory you like, using that directory as its "root" while running.

The pacman package available from the mirrors includes a statically linked version of the binary (pacman.static) which should run in almost any modern Linux environment, without the need for dependencies.

This is useful for building up new Arch Linux systems from scratch from other LiveCDs or different systems running another Linux environment, creating new chroot environments on a "host" system, maintaining a "golden master" for development and distribution, or other fun topics like rootfs-over-NFS for diskless machines.

In the case of an x86_64 host, it is even possible to use i686 pacman to build a 32-bit chroot environment. See [[Arch64 Install bundled 32bit system]].

Throughout this guide, we will refer to partitions as /dev/hdxx or /dev/sdxx. This refers to whatever dev entry your system has for the partition in question. The convention is:
 Drive 1, Partition 1: /dev/hda1 or /dev/sda1
 Drive 1, Partition 2: /dev/hda2 or /dev/sda2
 Drive 2, Partition 1: /dev/hdb1 or /dev/sdb1
 etc...

We will refer to it as /dev/sdxx whenever possible, but realize that depending on your system it could be /dev/hdxx.

If you have a broadband connection available throughout the installation process and just want to do a basic install, [http://unetbootin.sourceforge.net/ UNetbootin] may also be an easy solution worth a look.

==Setup the host system==

You need to install the Arch Linux package manager, pacman, in your host Linux environment. In addition you will need a list of pacman mirror sites, which is used to download information about available packages as well as the packages themselves.

===Get the required packages===

You need to get the required packages for your host Linux environment. The examples given here assume you are using an i686 environment. '''If you are running on a 64-bit Linux instead, replace each occurrence of "i686" with "x86_64".'''

All version numbers given here may change. Please check the current version numbers of the packages first and note them down. The version numbers can be found [http://www.archlinux.org/packages/core/i686/pacman/ here for pacman] and [http://www.archlinux.org/packages/core/i686/pacman-mirrorlist/ here for pacman-mirrorlist]. Once you are sure of the version numbers, download the required packages:
 mkdir /tmp/archlinux
 cd /tmp/archlinux
 wget ftp://ftp.archlinux.org/core/os/i686/pacman-\*.pkg.tar.gz
 tar xzvf pacman-*.pkg.tar.gz

In addition to the dynamically linked pacman there is a statically linked version available, which integrates better with the many different host Linux systems possible. This static version is no longer part of the normal Arch Linux setup, but it can be found at http://repo.archlinux.fr/i686/. Use the following commands to get it:
 cd /tmp/archlinux
 wget http://repo.archlinux.fr/i686/pacman-static-3.2.2-1.pkg.tar.gz
 tar xzvf pacman-static-3.2.2-1.pkg.tar.gz

===Install the required files onto the host system===

Since we will use pacman.static for the initial setup, we only need a few files installed into the host. This can be done by running the following commands as root:
 cp /tmp/archlinux/etc/pacman.conf /etc
 mkdir /etc/pacman.d
 cp /tmp/archlinux/etc/pacman.d/* /etc/pacman.d
 cp /tmp/archlinux/usr/bin/pacman.static /usr/bin

If you do not mind littering your install host, you can also extract all the downloaded tarballs into your root directory by running as root:
 cd /
 for f in /tmp/archlinux/pacman-*pkg.tar.gz; do
   tar xzf "$f"
 done

You may also turn these tarballs into packages for your distribution with the [http://kitenet.net/~joey/code/alien/ alien] tool; see its man page for instructions. The packages created that way may be installed into your host distribution using the usual package management tools available there. This approach offers the best integration into the host Linux environment. For a Debian package based system this is done with the following commands:
 cd /tmp/archlinux
 alien -d pacman-3.3.0-3-i686.pkg.tar.gz
 alien -d pacman-mirrorlist-20090108-1-i686.pkg.tar.gz
 alien -d pacman-static-3.2.2-1-i686.pkg.tar.gz

RPM-based systems need to replace the parameter "-d" with "-r".
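
Applying that substitution, the RPM equivalents of the conversions above would be (same package files as before; a sketch, run as root like the Debian variant):
 cd /tmp/archlinux
 alien -r pacman-3.3.0-3-i686.pkg.tar.gz
 alien -r pacman-mirrorlist-20090108-1-i686.pkg.tar.gz
 alien -r pacman-static-3.2.2-1-i686.pkg.tar.gz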

These distribution packages can then be installed using the normal package management tools of the host Linux environment.

===Configure the host system===

Configure your /etc/pacman.conf to your liking, and remove unnecessary mirrors from /etc/pacman.d/mirrorlist. Make sure at least a few mirrors remain enabled, as you may experience errors during syncing if you have no mirror set. You may want to manually resolve DNS names in /etc/pacman.d/mirrorlist, because pacman.static for i686 may not be able to get address information on x86_64 systems.
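
For example, you could look up a mirror's address on the host with getent and substitute it by hand (the address shown here is purely illustrative):
 $ getent hosts ftp.archlinux.org
 66.211.214.131  ftp.archlinux.org
Then replace the hostname in the corresponding Server line of /etc/pacman.d/mirrorlist with that address, keeping the rest of the URL unchanged.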

If you're installing from a LiveCD and you have a system with a low amount of combined RAM and swap (< 1 GB), be sure to set the cache directory in /etc/pacman.conf to a location on the new Arch partition (e.g. /newarch/var/cache/pacman/pkg). Otherwise you could exhaust memory between the overhead of the existing distro and the packages downloaded for the install.
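
In /etc/pacman.conf this is a one-line change in the [options] section (assuming the new root will be mounted at /newarch, as in the rest of this guide):
 [options]
 CacheDir = /newarch/var/cache/pacman/pkg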

==Setup the target system==

===Prepare disk for Arch===

Prepare the new Arch system's filesystems and then mount them. If your host system has friendly tools for this, such as gparted, cfdisk, or Mandrake's diskdrake, feel free to use them.

To format a partition as ext3, run (where /dev/sdxx is the partition you want to set up):
 # mkfs.ext3 /dev/sdxx
as reiserfs:
 # mkreiserfs /dev/sdxx
swap:
 # mkswap /dev/sdxx
Most other filesystems can be set up with their own mkfs variant; take a look with tab completion. Available filesystems depend entirely on your host system.

Once you have set up the filesystems, mount them. Throughout this guide we will reference the new Arch / at /newarch, but you can put it wherever you like.
 # mkdir /newarch
 # mount /dev/sdxx /newarch

It is also possible to build the root filesystem in a normal directory on the install host, for transfer to a target system over the network, or to create a master tarball, etc.
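
If you created separate partitions (for example /boot or /home) or a swap partition, you can mount or enable those from the host too before installing. A sketch, with hypothetical device names:
 # mkdir -p /newarch/boot /newarch/home
 # mount /dev/sdx1 /newarch/boot
 # mount /dev/sdx3 /newarch/home
 # swapon /dev/sdx2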

===Install the core===

Create the /newarch/var/lib/pacman directory; pacman needs it to exist before it can sync its package databases there:
 # mkdir -p /newarch/var/lib/pacman

Install the 'base' group of packages:
 # pacman.static -Sy base -r /newarch

'''NOTE:''' The pacman cache directory is not affected by the -r parameter. If you don't want the cache to be created in the pre-existing distro, use --cachedir or modify pacman.conf as mentioned in [[Install_From_Existing_Linux#Setup_host_system|host system setup]]!
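
For example, to keep the cache on the new partition without editing pacman.conf (assuming /newarch as above):
 # pacman.static -Sy base -r /newarch --cachedir /newarch/var/cache/pacman/pkg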

===Prepare the system===

First, ensure the correct /dev nodes have been made for udev:
 ls -alF /newarch/dev

This results in a list containing lines similar to the following (the dates will differ for you):
 crw------- 1 root root 5, 1 2008-12-27 21:40 console
 crw-rw-rw- 1 root root 1, 3 2008-12-27 21:42 null
 crw-rw-rw- 1 root root 1, 5 2008-12-27 21:40 zero

Delete and recreate any device which has a different set of permissions (the crw-... string plus the two root entries) or major/minor numbers (the two numbers before the date):

 cd /newarch/dev
 rm console ; mknod -m 600 console c 5 1
 rm null ; mknod -m 666 null c 1 3
 rm zero ; mknod -m 666 zero c 1 5

Normally, all device nodes will already have been created with the right permissions, and you should not need to recreate any of them.

Mount various filesystems into the new Arch system:
 mount -o bind /dev /newarch/dev
 mount -t proc none /newarch/proc
 mount -o bind /sys /newarch/sys

In order for DNS to work properly, you need to edit /newarch/etc/resolv.conf or replace it with the resolv.conf from your running distribution:
 cp /etc/resolv.conf /newarch/etc/

Copy your pacman mirror list into the new system:
 cp /etc/pacman.d/mirrorlist /newarch/etc/pacman.d

Make sure that it is set up correctly.

Chroot into the new system:
 chroot /newarch /bin/bash

===Install the rest===

Install your preferred kernel, and any other packages you may wish to have.
For the default kernel (which is already installed!):
 pacman -S kernel26

If you wish to install extra packages now, you may do so with:
 pacman -S packagename

===Configure the target system===
Edit your /etc/fstab, remembering to add /, swap and any other partitions you may wish to use. Be sure to use /dev/sd* (sda1, sda2, sdb1, etc.) for the partitions instead of /dev/hd*, as Arch uses the sdxx convention for all drives.
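
A minimal example (device names, filesystem types and mount points are illustrative; match them to your actual layout):
 /dev/sda1   /       ext3   defaults   0 1
 /dev/sda2   swap    swap   defaults   0 0
 /dev/sda3   /home   ext3   defaults   0 2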

Edit your rc.conf to your liking.

Edit /etc/locale.gen, uncommenting any locales you wish to have available, and build the locales:
 locale-gen
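
For example, to make a US English UTF-8 locale available, uncomment (remove the leading #) this line in /etc/locale.gen before running locale-gen:
 en_US.UTF-8 UTF-8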

===Setup Grub===
Allow grub-install to run properly while chrooted:
 grep -v rootfs /proc/mounts > /etc/mtab

Also, if you want to keep the grub of your existing install, you may run grub-install from the Arch chroot, then redo a grub-install from your existing installation. If grub-install fails, you can install manually:
 grub
 grub> find /boot/grub/stage1   (You should see some results here if you have done everything right so far. If not, back up and retrace your steps.)
 grub> root (hd0,X)
 grub> setup (hd0)
 grub> quit

Double-check your /boot/grub/menu.lst when done if installing from a LiveCD. Depending on the host, the paths may need correcting from hda to sda, and may need a /boot prefix as well.
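
A corrected entry might look like this (a sketch assuming the stock Arch kernel26 image names and root on /dev/sda1; adjust the partition numbers, and drop the /boot prefix if /boot is a separate partition):
 title  Arch Linux
 root   (hd0,0)
 kernel /boot/vmlinuz26 root=/dev/sda1 ro
 initrd /boot/kernel26.img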

Instructions for [[GRUB]] and [[LILO]] are available on this wiki.

===Finishing touches===
See [[Beginners_Guide#Configure_the_System|Beginners Guide: Configure your System]]. You can ignore 2.11, but the rest of that guide should be of use to you in post-installation configuration of your system.

Reboot to your new system!

==Troubleshooting==

===Kernel Panic===
If, when you reboot into your new system, you get a kernel panic saying the console couldn't be opened:
 kinit: couldn't open console, no such file...

This means the basic device nodes are missing. Go back and follow the steps to create them at the beginning of ''Prepare the system'' above.

===Root device '/dev/sd??' doesn't exist===
If, when you reboot into your new system, you get an error message like this:
 Root device '/dev/sda1' doesn't exist, attempting to create it... etc.

This means the drives are showing up as "hda1" instead of "sda1". In that case, change your GRUB or LILO settings to use "hd??", or try the following.

Edit /etc/mkinitcpio.conf and change "ide" to "pata" in the "HOOKS=" line, then regenerate your initrd. (Make sure you have chrooted into the new system first.)
 mkinitcpio -p kernel26

If you are using LVM, make sure you add "lvm2" to the HOOKS line. Regenerate your initrd as above.

If you're installing to a device that needs the pata hook, make sure it is listed before the autodetect hook in mkinitcpio.conf.
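
Putting these points together, a HOOKS line for a PATA disk using LVM might look like this (a sketch based on the stock hooks of this era; keep whatever other hooks your setup needs, and drop lvm2 if you don't use LVM):
 HOOKS="base udev pata autodetect scsi sata lvm2 filesystems"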

= VirtualBox =

[[Category:Emulators (English)]]
[[Category:HOWTOs (English)]]

{{i18n_links_start}}
{{i18n_entry|English|:VirtualBox}}
{{i18n_entry|Italiano|:VirtualBox (Italiano)}}
{{i18n_entry|简体中文|:VirtualBox (简体中文)}}
{{i18n_entry|Русский|VirtualBox (Русский)}}
{{i18n_entry|Español|VirtualBox (Español)}}
{{i18n_entry|Português|VirtualBox (Português)}}
{{i18n_links_end}}

== What is VirtualBox ==
[http://www.virtualbox.org VirtualBox] is a virtual PC emulator similar to VMware. It has many of the features VMware has, as well as some of its own.

=== Editions ===
VirtualBox is available in two editions: VirtualBox (OSE) and VirtualBox PUEL (Personal Use and Evaluation License).

==== VirtualBox (OSE) ====
VirtualBox (OSE) is the open source version of VirtualBox, which can be found in the community repository. It lacks some features such as USB device support and the built-in RDP server.

==== VirtualBox (PUEL) ====
VirtualBox PUEL is a binary-only version (free for personal use) which is available from the [http://aur.archlinux.org/packages.php?ID=9753 AUR] or directly from the [http://www.virtualbox.org/wiki/Downloads VirtualBox] website. The PUEL edition offers the following advantages:

*'''Remote Display Protocol (RDP) Server''' - a complete RDP server on top of the virtual hardware, allowing users to connect to a virtual machine remotely using any RDP compatible client

*'''USB support''' - a virtual USB controller which allows USB 1.1 and USB 2.0 devices to be passed through to virtual machines

*'''USB over RDP''' - a combination of the RDP server and USB support, allowing users to make USB devices available to virtual machines running remotely

*'''iSCSI initiator''' - a built-in iSCSI initiator making it possible to use iSCSI targets as virtual disks without the guest requiring support for iSCSI

== Installation ==

=== Install VirtualBox (OSE) ===

VirtualBox (OSE) is available from the standard repositories:

 # pacman -S virtualbox-ose

'''Note:''' This package seems not to be in the x86_64 repositories.

By default this will select the <tt>virtualbox-ose</tt> and <tt>virtualbox-modules</tt> packages. Once installed, a desktop entry can be found in ''Applications > System Tools > VirtualBox OSE''.

Now, add the desired username to the '''vboxusers''' group:

 # gpasswd -a USERNAME vboxusers

''('''Note:''' You must log out and back in for this change to take effect.)''

Lastly, edit <tt>/etc/rc.conf</tt> as root and add '''vboxdrv''' to the MODULES array in order to load the VirtualBox drivers at startup. For example:

 MODULES=(loop '''vboxdrv''' fuse ...)

To load the module manually, run the following in a terminal as root:

 # modprobe vboxdrv

'''HowTo:'''<br>
[[VirtualBox-HowTo]]

=== Install VirtualBox PUEL (virtualbox_bin) ===
VirtualBox PUEL is available from the AUR: [http://aur.archlinux.org/packages.php?ID=9753 virtualbox_bin].

Download the tarball from the [http://aur.archlinux.org/packages.php?ID=9753 AUR: virtualbox_bin] page, unpack, run <tt>makepkg</tt>, and then as root:

 # pacman -U PACKAGE-NAME.pkg.tar.gz

'''However, there's an alternative way to install the virtualbox_bin package:'''

First, as root, add one of the following to /etc/pacman.conf, depending on your CPU's architecture:
 [archlinuxfr]
 Server = <nowiki>http://repo.archlinux.fr/i686</nowiki>
or
 [archlinuxfr]
 Server = <nowiki>http://repo.archlinux.fr/x86_64</nowiki>

Then you can install it via:
 # pacman -Sy virtualbox_bin

Now, add the desired username to the '''vboxusers''' group:
 # gpasswd -a USERNAME vboxusers

''('''Note:''' You must log out and back in for this change to take effect.)''

Lastly, edit <tt>/etc/rc.conf</tt> as root and add '''vboxdrv''' to the MODULES array in order to load the VirtualBox drivers at startup. For example:
 MODULES=(loop '''vboxdrv''' fuse ...)

To load the module manually, run the following in a terminal as root:
 # modprobe vboxdrv

==== Module Rebuilds ====
Note that any time your kernel version changes (due to an upgrade, recompile, etc.) you must also rebuild the VirtualBox kernel module by running '''vbox_build_module''' as root. This binary will be located in one of the following locations: <tt>/sbin</tt>, <tt>/bin</tt>, or <tt>/usr/bin</tt>. After rebuilding the module, don't forget to load it with '''<code>modprobe vboxdrv</code>'''.

=== Install required Qt libraries ===
Currently, VirtualBox relies on Qt 4 for its graphical interface. If you require a GUI, ensure you have it installed:
 # pacman -S qt

=== Start VirtualBox ===
To start VirtualBox, run the following command in a terminal:
 $ VirtualBox

=== Install VirtualBox 2.1 (another alternative) ===

A VirtualBox .run install can be done using the "All Distributions" package from the [http://www.virtualbox.org/wiki/Linux_Downloads Linux section] of the VirtualBox website.

Make sure the Qt 4.3.0 and SDL 1.2.7 or higher packages are installed:

 # pacman -Sy qt sdl

Download the file for the appropriate architecture (i386 or AMD64). In a terminal window, browse to the download folder and, as root, run:

 # sh VirtualBox-2.XXXX-Linux_ARCH.run

This will install the package to the /opt/VirtualBox-2.XXX folder.

After installation, a desktop entry can be found in ''Applications > System Tools > Sun xVM VirtualBox''.

Now, add the desired username to the '''vboxusers''' group:
 # gpasswd -a USERNAME vboxusers

Lastly, edit <tt>/etc/rc.conf</tt> as root and add '''vboxdrv''' to the MODULES array in order to load the VirtualBox drivers at startup.

Start the VirtualBox GUI either with the command:

 # VirtualBox

or using the ''Applications'' desktop entry. In version 2.1.x, an installation wizard should start and take you through the process of setting up a virtual machine. Otherwise, use the help menu to get started. '''Continue reading to see the more advanced options and setups...'''

== Configuration ==
After installing VirtualBox and adding ourselves to the vboxusers group, we can start configuring the system to make all of VirtualBox's features available. Create a new virtual machine using the wizard provided by the GUI, then click ''Settings'' to edit the virtual machine settings.

=== Keyboard and mouse between the host and the guest ===
To capture the keyboard and mouse, click the mouse inside the virtual machine display.<br>
To uncapture, press "Ctrl-Alt-Delete".

If [[Xorg]] freezes the mouse and keyboard, you will have to disable the [[Xorg#Input_hotplugging_with_xorg-server_1.5|new hotplugging feature of Xorg 1.5]] by adding the following to /etc/X11/xorg.conf:

 Section "ServerLayout"
   . . .
   Option "AutoAddDevices" "False"
   . . .
 EndSection

This is needed for Linux guests on a Mac OS X or Windows host. TODO: not sure about Linux hosts.

=== Getting network in the guest machine to work ===
First, let's get the network working in the guest machine. Click the network tab. The ''Not attached'' option means you'll get a "Network cable unplugged" or similar error in the guest.

==== Using NAT network ====
This is the simplest way to get network access. Select NAT and it should be ready to use; the guest operating system can then be configured automatically via DHCP.

The NAT network on the first card is 10.0.2.0, on the second 10.0.3.0, and so on.

==== Using host interface networking (the VirtualBox way) ====
Since version 2.1.0, VirtualBox has native support for host interface networking. Just add '''vboxnetflt''' to your MODULES section in [[rc.conf]] and choose ''Host Interface Networking'' in the virtual machine configuration.

==== Using host interface networking (the Arch way) ====
You are going to just edit these files and reboot:

* /etc/conf.d/bridges
* /etc/rc.conf
* /etc/vbox/interfaces

Ready? Let's go!

'''/etc/conf.d/bridges:'''
 bridge_br0="eth0 vbox0"    # Put any interfaces you need.
 BRIDGE_INTERFACES=(br0)

'''/etc/rc.conf:'''

First add the bridge module to your MODULES line:
 MODULES=( <your other modules> '''bridge''')

Then, in your NETWORKING section, make the following changes:
 br0="dhcp"    # Maybe you have some static configuration... I use DHCP.
 INTERFACES=(eth0 br0)

'''Note''' by gpan:

'''/etc/rc.conf:'''

First add the vboxdrv (and [[vboxnetflt]], in the case of version 2.1.0) module to your MODULES line:

 MODULES=( <your other modules> vboxdrv vboxnetflt )

Next, edit your '''/etc/udev/rules.d/60-vboxdrv.rules''' so it reads:

 KERNEL=="vboxdrv", NAME="vboxdrv", OWNER="root", GROUP="vboxusers", MODE="0660"

Save it and exit.

Then open a terminal and type:

 # pacman -S bridge-utils uml_utilities

Create a new bridge with this command:

 # brctl addbr br0

'''/etc/vbox/interfaces'''

(You can set up more interfaces if you want. The sky is the limit!):
 vbox0 your_user br0    # Be sure that your user is in the vboxusers group.

Reboot and enjoy!

'''''Note:''' Remember to set up your virtual machine with the proper network configuration.''

'''''Note:''' If you have any issues, make sure that you have the bridge-utils package installed and the vboxnet daemon loaded.''

==== Using host interface networking (generic) ====
This way is a bit harder, but it allows the virtual machine to appear as a "real" computer on your local network. You need to get bridge-utils:

 # pacman -S bridge-utils uml_utilities

'''Note''' by Sp1d3rmxn:
:You also need to have the TUN module loaded: in [[rc.conf]] add "tun" (without the quotes) to your MODULES section. For testing this out right now without rebooting, you can load the module from the command line with "modprobe tun".
:
:Then you MUST set these permissions, otherwise you'll never get VBox to init the interface. The command is "<code>chmod 666 /dev/net/tun</code>" (without the quotes).

:Now proceed with the rest as written below.

'''Note''' by Dharivs:
:As said by Sp1d3rmxn, we must set these permissions but, instead of using the command, we can set them in /etc/udev/rules.d/60-vboxdrv.rules, which will set them on boot:
 KERNEL=="vboxdrv", NAME="vboxdrv", OWNER="root", GROUP="vboxusers", MODE="0660"
 KERNEL=="tun", OWNER="root", GROUP="vboxusers", MODE="0660"

<b>1.</b> Create a new bridge with this command:
 # brctl addbr br0

<b>2.</b> If you are not using DHCP, run ifconfig and note down the network configuration of your existing network interface (e.g. eth0), which we will need to copy to the bridge in a minute.

''('''Note: You will need these settings, so make sure you don't lose them!''')''

<b>3.</b> Switch your physical network adapter to "promiscuous" mode so that it will accept Ethernet frames for MAC addresses other than its own (replace eth0 with your network interface):
 # ifconfig eth0 0.0.0.0 promisc

''('''Note:''' You will lose network connectivity on eth0 at this point.)''

<b>4.</b> Add your network adapter to the bridge:
 # brctl addif br0 eth0

<b>5.</b> Transfer the network configuration previously used with your physical Ethernet adapter to the new bridge. If you are using DHCP, this should work:
 # dhclient br0

'''Note''' by Sp1d3rmxn:
:Use "dhcpcd -t 30 -h yourhostname br0 &" instead of the above.

Otherwise, run <code>ifconfig br0 x.x.x.x netmask x.x.x.x</code> with the values that you noted down previously.

<b>6.</b> To create a permanent host interface called vbox0 (all host interfaces created in this way must be called vbox followed by a number) and add it to the network bridge created above, use the following command:
 VBoxAddIF vbox0 vboxuser br0

Replace vboxuser with the name of the user who is supposed to be able to use the new interface.

('''Note:''' VBoxAddIF is located in /opt/VirtualBox-VERSION OF VIRTUALBOX/VBoxAddIF)

Alternatively, you can [http://mychael.gotdns.com/blog/2007/05/31/virtualbox-bridging/ set up VirtualBox networking] through your /etc/rc.conf to enable a bridged connection.

==== Using host interface networking with a wireless device ====
Bridging as described above won't work with a wireless device. It can, however, be accomplished using [http://aur.archlinux.org/packages.php?ID=16356 parprouted].

# Install parprouted and iproute
# <code># ln -s /usr/sbin/ip /sbin/ip</code>
# Make sure IP forwarding is enabled: <code># sysctl net.ipv4.ip_forward=1</code>, and/or edit /etc/sysctl.conf
# <code># VBoxTunctl -b -u <user></code>, to create the tap device
# <code># ip link set tap0 up; ip addr add 192.168.0.X/24 dev tap0</code>; this needs to be a manually set IP on the same network as your wireless device.
# <code># parprouted wlan0 tap0</code>

=== Getting USB to work in the guest machine ===
(Only available in the PUEL edition)

First, in order to make USB available to the virtual machine, you must add this line to your /etc/fstab:
 none /proc/bus/usb usbfs devgid=108,devmode=0664 0 0

108 is the ID of the group which should be allowed to access USB devices. Change it to the ID of your vboxusers group. You can get the ID by running:
 $ grep vboxusers /etc/group
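
The output might look like the following (the group ID is the third field; 108 here is purely illustrative, and this is the number that goes into the devgid= value in the fstab line above):
 vboxusers:x:108:yourusername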

If you don't mind a security hole, change devmode from 664 to 666.

Remount /proc/bus/usb:
 # mount -o remount /proc/bus/usb/

'''Note''' by slipper:
:I had to run ''mount -a'' after the above command to get this to work for me.

Restart VirtualBox, click the USB tab in the settings of the virtual machine, and select which devices are available to the virtual PC on boot. If you wish your virtual machine to use a device that you have just plugged in (assuming the virtual machine has already booted), go to the virtual machine screen, open ''Devices -> USB Devices'', and select the device you wish to plug into the virtual PC.

=== Installing Guest Additions ===
For VirtualBox (OSE) version 1.6.2 and later, read:<br>
[[VirtualBox-HowTo]]

The Guest Additions make the shared folders feature available, as well as better video (not 3D) and mouse drivers. You will have mouse integration, so there is no need to release the mouse after using it in the guest, and you can also enable a bidirectional clipboard.

After you have booted the virtual machine, go to the menu ''Devices -> Install Guest Additions...'' Once you've clicked it, VirtualBox loads an ISO into the current CD-ROM, so you won't see anything happen yet ;)

Then do the following as root:
 # mount /media/cdrom
 # sh /media/cdrom/VBoxLinuxAdditions.run

It will build and install the kernel modules, install the Xorg drivers and create init scripts. It will most probably print out errors about init scripts and run levels and whatnot. Ignore them. You will find rc.vboxadd in /etc/rc.d, which will load them on demand. To have the Guest Additions loaded at boot time, just add it to the DAEMONS array in /etc/rc.conf, e.g.:

 DAEMONS=(syslog-ng network netfs crond alsa '''rc.vboxadd''')

Another option is to install one of these packages:

 # pacman -S virtualbox-additions
or
 # pacman -S virtualbox-ose-additions

You will then have an ISO to mount as a loop device. Remember to load the loop kernel module first:

 # modprobe loop
 # mount /usr/lib/virtualbox/additions/VBoxGuestAdditions.iso /media/cdrom -o loop

Then execute VBoxLinuxAdditions.run as before. Before adding rc.vboxadd to DAEMONS, check /etc/rc.local for commands added by the installation script to load the vboxadd daemons.

=== Sharing folders between the host and the guest ===
For VirtualBox (OSE) version 1.6.2 and later, read:<br>
[[VirtualBox-HowTo]]

In the settings of the virtual machine, go to the shared folders tab and add the folders you want to share.

*NOTE: You need to install Guest Additions in order to use this feature.
 In a Linux host, "Devices" --> "Install Guest Additions"
 Yes (when asked to download the CD image)
 Mount (when asked to register and mount)

In a Linux host, create a folder for sharing files.

In a Windows guest, starting with VirtualBox 1.5.0, shared folders are browseable and are therefore visible in Windows Explorer. Open Windows Explorer and look for them under:

 My Network Places --> Entire Network --> VirtualBox Shared Folders

Alternatively, on the Windows command line, you can also use the following:

 net use x: \\vboxsvr\sharename

While vboxsvr is a fixed name, replace "x:" with the drive letter that you want to use for the share, and sharename with the share name specified with VBoxManage.

In a Linux guest, use the following command:
 # mount -t vboxsf [-o OPTIONS] sharename mountpoint

Replace sharename with the share name specified with VBoxManage, and mountpoint with the path where you want the share to be mounted (e.g. /mnt/share). The usual mount rules apply, that is, create this directory first if it does not exist yet.

Beyond the standard options supplied by the mount command, the following are available:
 iocharset=CHARSET
to set the character set used for I/O operations (utf8 by default) and
 convertcp=CHARSET
to specify the character set used for the shared folder name (utf8 by default).
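
For example, to mount a share named ''myshare'' (a hypothetical share name) with UTF-8 I/O at /mnt/share in a Linux guest:
 # mkdir -p /mnt/share
 # mount -t vboxsf -o iocharset=utf8 myshare /mnt/share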

=== Getting audio to work in the guest machine ===

In the machine settings, go to the audio tab and select the correct driver for your sound system (ALSA, OSS or PulseAudio).

=== Setting up the RAM and video memory for the virtual PC ===

You can change the default values by going to ''Settings -> General''.

=== Setting up the CD-ROM for the virtual PC ===

You can change the default values by going to ''Settings -> CD/DVD-ROM''.

Check ''Mount CD/DVD drive'' and select one of the following options.

'''Note:''' If no CD-ROM drive is detected, make sure the HAL daemon is running. To start it, run the following command as root:

 # /etc/rc.d/hal start

=== Rebuilding the vboxdrv module ===
For VirtualBox >= 1.6.0, use [[ABS]] to rebuild the vboxdrv module, or wait for the maintainer to update it :)

=== Converting from VMware images ===
Do:
 # pacman -S qemu
 $ qemu-img convert image.vmdk image.bin
 $ VBoxManage convertdd image.bin image.vdi

This may not be needed anymore with recent VirtualBox versions (to be confirmed).

== External Resources ==
* [http://download.virtualbox.org/virtualbox/2.0.6/UserManual.pdf VirtualBox 2.0.6 User Manual]

= Bluetooth mouse =

[[Category:Input devices (English)]]
{{i18n_links_start}}
{{i18n_entry|English|Bluetooth Mouse}}
{{i18n_entry|Czech|Bluetooth myš}}
{{i18n_entry|Русский|Bluetooth-мышь}}
{{i18n_links_end}}

This article describes how to set up a Bluetooth mouse with Arch Linux. I used a Logitech V270 with a Trendnet TBW-101UB USB Bluetooth dongle, but the general process should be the same for any model.

== Required software ==

You need the '''bluez-utils''' and '''bluez-libs''' packages from the extra repository. It looks like you also need '''dbus''' for automating things; otherwise hcid reports errors such as "hcid[14851]: Unable to get on D-Bus". Enabling D-Bus also solved problems with local Bluetooth device recognition.

== Configuration ==
The pertinent option in /etc/conf.d/bluetooth is
 HIDD_ENABLE=true
After that, start the Bluetooth services with:
 /etc/rc.d/bluetooth start

== Finding out your mouse's bdaddr ==

The bdaddr is of the form ''12:34:56:78:9A:BC''. Either find it in the documentation of your mouse, on the mouse itself, or with the '''hcitool scan''' command.
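
For example, with the mouse in discoverable mode (usually after pressing its reset/connect button), a scan might look like this; the address and name shown are purely illustrative:
 $ hcitool scan
 Scanning ...
         00:07:61:12:34:56       Logitech V270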

== Kernel modules ==

The command
 # modprobe -v hci_usb bluetooth hidp l2cap
loads the kernel modules you need, if they weren't loaded automatically.

(See below for some tips if you're stuck at this point.)

== Connecting the mouse ==
 hidd --search
 hcitool inq
are good for device scanning.
 hidd --connect <bdaddr>
to actually connect.
 hidd --show
will show your currently connected devices. The mouse should show up in this list. If it doesn't, press the reset button to make it discoverable.

Note: If you have the ipw3945 module loaded (Wi-Fi on HP computers), Bluetooth won't work.

== Connecting the mouse at startup ==
Edit /etc/conf.d/bluetooth:
 # Arguments to hidd
 HIDD_OPTIONS="--connect <enter here your bluetooth mouse address>"
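
For example, with a purely illustrative address, the line would read:
 HIDD_OPTIONS="--connect 00:07:61:12:34:56"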

and test the new settings:
 /etc/rc.d/bluetooth stop
 hidd --killall    (drop the mouse connection)
 /etc/rc.d/bluetooth start

Note: The above instructions to start the mouse at startup don't work with the now outdated 3.11 bluetooth packages. New versions such as the current (3.32) packages are not affected. If you are using an older version, then to start the mouse at startup, add:
 hidd --connect <enter here your bluetooth mouse address (no capitals!)>
to your /etc/rc.local file.

Note #2: You can connect any Bluetooth mouse and/or keyboard without any further configuration and without knowing the device address, by adding the --master and/or --server option to HIDD_OPTIONS, depending on your device.
<br />
== Troubleshooting tips ==<br />
<br />
If you have trouble with your USB dongle, you may also want to try<br />
# modprobe -v rfcomm<br />
<br />
At this point, you should get an hci0 device with<br />
# hcitool dev<br />
<br />
Sometimes the device is not active right away - try starting the interface with<br />
# hciconfig hci0 up<br />
and searching for devices as shown above.</div>Tinyhttps://wiki.archlinux.org/index.php?title=Installing_with_Software_RAID_or_LVM&diff=53074Installing with Software RAID or LVM2008-11-10T14:01:42Z<p>Tiny: /* Partition the Hard Drives */</p>
<hr />
<div>[[Category:Getting and installing Arch (English)]]<br />
[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
== Disclaimer==<br />
<br />
Installing a system with RAID is a complex process. Anything could go wrong. You could make a mistake, I could make a mistake, there could be a bug in something. Back up all your data first. Make sure only the drives involved in the installation are attached while doing the install. You've been warned!<br />
<br />
Also note that this document is up-to-date with all "Archisms" as of 2008.06 'Overlord'. It may not be applicable to previous releases of Arch Linux.<br />
<br />
=== RAID ===<br />
<br />
RAID (Redundant Array of Independent Disks) is designed to prevent data loss in the event of a hard disk failure. There are different "levels" of RAID. RAID 0 (striping) isn't really RAID at all, because it provides no redundancy. It does, however, provide a speed benefit. RAID 0 can make sense for swap on a desktop, where the speed increase is worth the fact that your system goes down if any one drive fails; on a server, you'd almost certainly want RAID 1 or RAID 5. (This walkthrough nevertheless uses RAID 1 for swap; see below.) The size of a RAID 0 array block device is the size of the smallest component partition times the number of component partitions.<br />
<br />
RAID 1 is the most straightforward RAID level: straight mirroring. As with other RAID levels, it only makes sense if the partitions are on different physical disk drives. If one of those drives fails, the block device provided by the RAID array will continue to function as normal. We'll be using RAID 1 for the /boot and swap partitions. Note that RAID 1 is the only option for the boot partition, because bootloaders (which read the boot partition) don't understand RAID, but a RAID 1 component partition can be read as a normal partition. The size of a RAID 1 array block device is the size of the smallest component partition.<br />
<br />
RAID 5 is the only other RAID level you're likely to want. It requires 3 or more physical drives, and provides the redundancy of RAID 1 combined with the speed and size benefits of RAID 0. RAID 5 uses striping, like RAID 0, but also stores parity blocks distributed across each member disk. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 5 can withstand the loss of one member disk.<br />
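<br />
For example, with three 80 GB member partitions: RAID 0 yields about 240 GB, RAID 1 yields 80 GB, and RAID 5 yields about 160 GB, with one partition's worth of space devoted to parity.<br />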
<br />
'''ATTENTION: Having RAID does not mean you don't need backups - read the CAVEATS section below!'''<br />
<br />
=== LVM ===<br />
<br />
[http://sourceware.org/lvm2/ LVM] (Logical Volume Management) makes use of the [http://sources.redhat.com/dm/ device-mapper] feature of the Linux kernel. It provides a way of specifying partitions independently of the layout of the underlying disk. What this means for you is that you can extend and shrink partitions (subject to the filesystem you use allowing this) and add and remove partitions without worrying about whether you have enough contiguous space on a particular disk, without getting caught up in the problems of fdisking a disk that is in use (and wondering whether the kernel is using the old or new partition table), and without having to move other partitions out of the way.<br />
<br />
This is strictly an ease-of-management issue: it doesn't provide any additional security. However, it sits nicely alongside the RAID setup we're using.<br />
<br />
Note that we're not using LVM for the boot partition (because of the bootloader problem).<br />
<br />
==CAVEATS==<br />
<br />
=== Security (redundancy) ===<br />
<br />
Again, RAID does not provide a guarantee that your data is safe. If there is a fire, if your computer is stolen or if you have multiple hard drive failures, RAID won't protect you. So '''make backups'''. Whether you use tape drives, DVDs, CDROMs or another computer, keep a copy of your data out of your computer (and preferably offsite) and keep it up to date. Get into the habit of making regular backups. If you organize the data on your computer in a way that separates things you are currently working on from "archived" things that are unlikely to change, you can back up the "current" stuff frequently, and the "archived" stuff occasionally.<br />
<br />
== General Approach==<br />
<br />
For starters, note that this document seeks primarily to give you a good example walkthrough of how to install Arch with Software RAID or LVM support for a typical case. It won't try to explain all the possible things you can do -- it's more to give you an example of something that will work that you can then tweak to your own purposes.<br />
<br />
In this example, the machine I'm using will have three similar IDE hard drives, at least 80GB each in size, installed as primary master, primary slave, and secondary master, with my installation CD-ROM drive as the secondary slave. I will assume these can be reached as /dev/hda, /dev/hdb, and /dev/hdc, and that the cdrom drive is /dev/cdrom.<br />
<br />
We'll create a 100MB /boot partition, a 2048MB (2GB) swap partition and a ~78GB root partition using LVM. The boot and swap partitions will be RAID1, while the root partition will be RAID5. Why RAID1? For boot, it's so you can boot the kernel from grub (which has no RAID drivers!); for swap, it's so a single disk failure won't take down the running system.<br />
<br />
Each RAID1 redundant partition will have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the size of a single one of these physical partitions. A RAID1 redundant partition with 3 physical partitions can lose any two of its physical partitions and still function.<br />
<br />
Each RAID5 redundant partition will also have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the combined size of ''two'' of these physical partitions, with the third drive being consumed to provide parity information. A RAID5 redundant partition with 3 physical partitions can lose any one of its physical partitions and still function.<br />
<br />
== Get the Arch Installer CD ==<br />
<br />
Please note that in order to use LVM, you need the <code>lvm2</code> and <code>device-mapper</code> packages installed; otherwise you won't be able to see any LVM partitions on reboot until you install those packages. Note that the Arch 0.7.1 Base installer ''does not'' contain these packages, but the Arch 0.7.1 Full installer does. So if you're going to use LVM, you'll need to download the bigger ISO. My example assumes you are using the Full installer; the changes should be minimal if you wish to use the Base installer instead.<br />
<br />
== Outline ==<br />
<br />
Just to give you an idea of how all this will work, I'll outline the steps. The details for these will be filled in below.<br />
<br />
# Boot the Installer CD<br />
# Partition the Hard Drives<br />
# Create the RAID Redundant Partitions<br />
# Create and Mount the Main Filesystems<br />
# Setup LVM and Create the / LVM Volume<br />
# Install and Configure Arch<br />
# Install Grub on the Primary Hard Drive<br />
# Unmount Filesystems and Reboot<br />
# Install Grub on the Alternate Boot Drives<br />
# Archive your Filesystem Partition Scheme<br />
<br />
== Procedure==<br />
<br />
=== Boot the Installer CD===<br />
<br />
First, load all your drives in the machine. Then boot the Arch Linux 0.7 ''Full'' installation CD.<br />
<br />
At the syslinux boot prompt, hit enter: we want to use the SCSI kernel, which has support for RAID and LVM built in.<br />
<br />
So far, this is easy. Don't worry, it gets harder.<br />
<br />
=== Partition the Hard Drives===<br />
If your hard drives are already prepared and all you want to do is activate RAID and LVM, jump to [[Installing_with_Software_RAID_or_LVM#Activate_existing_RAID_devices_and_LVM_volumes|Activate existing RAID devices and LVM volumes]].<br />
<br />
We'll use <code>cfdisk</code> to do this partitioning. We want to create 3 partitions on each of the three drives:<br />
<br />
Partition 1 (/boot): 100MB, type FD, bootable<br><br />
Partition 2 (swap): 2048MB, type FD<br><br />
Partition 3 (LVM): <Rest of the drive>, type FD<br />
<br />
Note that in general, in <code>cfdisk</code>, you can use the first letter of each <code>[[Bracketed Option]]</code> to select it; however, this is not true for the <code>[[Write]]</code> command: you have to hold SHIFT as well to select it.<br />
<br />
First run:<br />
<pre><br />
# cfdisk /dev/hda<br />
</pre><br />
<br />
Create each partition in order:<br />
<br />
# Select <code>'''New'''</code>.<br />
# Hit Enter to make it a <code>'''Primary'''</code> partition.<br />
# Type the appropriate size (in MB), or for Partition 3, just hit enter to select the remainder of the drive.<br />
# Hit Enter to choose to place the partition at the <code>'''Beginning'''</code>.<br />
# Select <code>'''Type'''</code>, hit enter to see the second page of the list, and then type <code>fd</code> for the Linux RAID Autodetect type.<br />
# ''For Partition 1 on each drive'', select <code>'''Bootable'''</code>.<br />
# Hit the down arrow (selecting the remaining free space) to go on to the next partition to be created.<br />
<br />
When you're done, select <code>'''Write'''</code>, and confirm <code>y-e-s</code> that you want to write the partition information to disk.<br />
<br />
Then select <code>'''Quit'''</code>.<br />
<br />
Repeat this for the other two drives:<br />
<br />
<pre><br />
# cfdisk /dev/hdb<br />
# cfdisk /dev/hdc<br />
</pre><br />
<br />
Create exactly the same partitions on each disk. If a group of partitions of different sizes is assembled into a redundant RAID partition, it ''will'' work, but the redundant partition's capacity will be based on the size of the smallest member, leaving the rest of the allocated drive space wasted.<br />
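<br />
If the drives are identical in size, you can copy the first drive's partition table instead of repeating the cfdisk steps; a convenience sketch, assuming identical drives:<br />
<pre><br />
# sfdisk -d /dev/hda | sfdisk /dev/hdb<br />
# sfdisk -d /dev/hda | sfdisk /dev/hdc<br />
</pre><br />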
<br />
=== Load the RAID Modules ===<br />
<br />
Before using <code>mdadm</code>, you need to load the modules for the RAID levels you'll be using. In this example, we're using levels 1 and 5, so we'll load those. You can ignore any modprobe errors like <code>"cannot insert md-mod.ko: File exists"</code>. Busybox's modprobe can be a little slow sometimes.<br />
<br />
<pre><br />
# modprobe raid1<br />
# modprobe raid5<br />
</pre><br />
<br />
=== Create the RAID Redundant Partitions ===<br />
<br />
Now that you've created all the physical partitions, you're ready to set up RAID. The tool you use to create RAID arrays is <code>mdadm</code>.<br />
<br />
To create /dev/md0 (/):<br />
<pre><br />
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hda3 /dev/hdb3 /dev/hdc3<br />
</pre><br />
<br />
To create /dev/md1 (/boot):<br />
<pre><br />
# mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/hda1 /dev/hdb1 /dev/hdc1<br />
</pre><br />
<br />
To create /dev/md2 (swap):<br />
<pre><br />
# mdadm --create /dev/md2 --level=1 --raid-devices=3 /dev/hda2 /dev/hdb2 /dev/hdc2<br />
</pre><br />
<br />
At this point, you should have working RAID partitions. When you create the RAID partitions, they need to sync themselves so the contents of all three physical partitions are the same on all three drives. The hard drive lights will come on as the drives sync up. You can monitor the progress by typing:<br />
<pre><br />
# cat /proc/mdstat<br />
</pre><br />
<br />
You can also get particular information about, say, the root partition by typing:<br />
<pre><br />
# mdadm --misc --detail /dev/md0<br />
</pre><br />
<br />
You don't have to wait for synchronization to finish -- you may proceed with the installation while synchronization is still occurring. You can even reboot at the end of the installation with synchronization still going.<br />
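<br />
While the array is resyncing, /proc/mdstat looks something like this (illustrative output only; your device names, block counts, progress, and speeds will differ):<br />
<pre><br />
Personalities : [raid1] [raid5]<br />
md0 : active raid5 hdc3[2] hdb3[1] hda3[0]<br />
      156231680 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]<br />
      [==>..................]  resync = 12.3% (9617280/78115840) finish=35.4min speed=32210K/sec<br />
</pre><br />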
<br />
=== Setup LVM and Create the / LVM Volume===<br />
<br />
If you are using an Arch Linux install CD <= 0.7.1, you have to create and mount a sysfs partition on /sys to keep lvm from getting cranky. If you forget to do this, instead of giving you an intelligent error message, lvm will simply crash with a segmentation fault at various inconvenient times. ''Be warned!''<br />
<br />
To mount the sysfs partition, do:<br />
<pre><br />
# mkdir /sys<br />
# mount -t sysfs none /sys<br />
</pre><br />
<br />
Make sure that the device-mapper module is loaded:<br />
<pre><br />
# modprobe dm-mod<br />
</pre><br />
<br />
Now you need to tell LVM that you have a Physical Volume for it to use. It's really a virtual RAID volume (<code>/dev/md0</code>), but LVM doesn't know this, or really care. Do:<br />
<pre><br />
# lvm pvcreate /dev/md0<br />
</pre><br />
<br />
This might fail if the device was previously part of a RAID array or an existing Volume Group. If so, you might want to add the -ff option.<br />
<br />
LVM should report back that it has added the Physical Volume. You can confirm this with:<br />
<pre><br />
# lvm pvscan<br />
</pre><br />
<br />
Now it's time to create a Volume Group (which I'll call <code>array</code>) which has control over the LVM Physical Volume we created. Do:<br />
<pre><br />
# lvm vgcreate array /dev/md0<br />
</pre><br />
<br />
LVM should report that it has created the Volume Group <code>array</code>. You can confirm this with:<br />
<pre><br />
# lvm vgscan<br />
</pre><br />
<br />
Next, we create a Logical Volume called <code>root</code> in Volume Group <code>array</code> which is 50GB in size:<br />
<pre><br />
# lvm lvcreate --size 50G --name root array<br />
</pre><br />
<br />
LVM should report that it created the Logical Volume <code>root</code>. You can confirm this with:<br />
<pre><br />
# lvm lvscan<br />
</pre><br />
<br />
The LVM volume should now be available as <code>/dev/array/root</code>.<br />
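<br />
Further Logical Volumes can be created in the same Volume Group the same way; for example (hypothetical names and sizes):<br />
<pre><br />
# lvm lvcreate --size 20G --name home array<br />
# lvm lvcreate --size 5G --name var array<br />
</pre><br />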
<br />
=== Activate existing RAID devices and LVM volumes ===<br />
<br />
If you already have RAID partitions created on your system and you've also set up LVM, and all you want is to enable them,<br />
follow this simple procedure. ''This might come in handy if you're switching distros and don't want to lose data in /home,<br />
for example.'' <br />
<br />
First you need to enable RAID support. RAID1 in this case.<br />
<pre><br />
modprobe raid1<br />
</pre><br />
<br />
Activate the RAID devices. In this example, md0 is for /boot and md1 is for LVM, where two logical volumes will reside.<br />
<pre><br />
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1<br />
mdadm --assemble /dev/md1 /dev/sda3 /dev/sdb3<br />
</pre><br />
<br />
RAID devices should now be enabled. Check /proc/mdstat.<br />
<br />
If you haven't loaded kernel LVM support do so now.<br />
<pre><br />
modprobe dm-mod<br />
</pre><br />
Startup of LVM requires just the following two commands: <br />
<pre><br />
vgscan<br />
vgchange -ay<br />
</pre><br />
<br />
You can now jump to '''[3] Set Filesystem Mountpoints''' in your menu-based setup and mount the created<br />
partitions as needed.<br />
<br />
=== Create and Mount the Filesystems ===<br />
'''When you are using a setup that is newer than 2008.03, this step is optional!'''<br />
<br />
I like Reiser (3.x), so I use it for almost everything. GRUB supports it for booting, and it handles small files well. It's about as well tested as EXT3. You can choose other types if you wish.<br />
<br />
To create /boot:<br />
<pre><br />
# mkreiserfs /dev/md1<br />
</pre><br />
<br />
To create swap space:<br />
<pre><br />
# mkswap /dev/md2<br />
</pre><br />
<br />
To create /:<br />
<pre><br />
# mkreiserfs /dev/array/root<br />
</pre><br />
<br />
Now, mount the boot and root partitions where the installer expects them:<br />
<pre><br />
# mount /dev/array/root /mnt<br />
# mkdir /mnt/boot<br />
# mount /dev/md1 /mnt/boot<br />
</pre><br />
<br />
We've created all our filesystems! And we're ready to install the OS!<br />
<br />
=== Install and Configure Arch ===<br />
<br />
This section doesn't attempt to teach you all about the Arch Installer. It leaves out some details here and there for brevity, but still seeks to be basically follow-able. If you're having trouble with the installer, you may wish to seek help elsewhere in the Wiki or forums.<br />
<br />
Here's the walkthrough:<br />
<br />
* Type <code>/arch/setup</code> to launch the main installer.<br />
* Select <code> < OK ></code> at the opening screen.<br />
* Select <code>1 CD_ROM</code> to install from CD-ROM (or <code>2 FTP</code> if you have a local Arch mirror on FTP).<br />
* If you have skipped the optional step (''Create and Mount the Filesystems'') above and haven't created a filesystem yet, select <code>1 Prepare Hard Drive</code> > <code>3 Set Filesystem Mountpoints</code> and create your filesystems and mountpoints here.<br />
* Now at the main menu, Select <code>2 Select Packages</code> and select all the packages in the ''base'' category, as well as the <code>mdadm</code> and <code>lvm2</code> packages from the ''system'' category. Note: mdadm & lvm2 are included in ''base'' category since arch-base-0.7.2.<br />
* Select <code>3 Install Packages</code>. This will take a little while.<br />
* Select <code>4 Configure System</code>:<br />
<br />
Add the ''raid'' and ''lvm2'' hooks to the HOOKS list in /etc/mkinitcpio.conf (before 'filesystems', NOT after), as in the illustrative line below.<br />
See [[Configuring_mkinitcpio#Using_raid|Configuring mkinitcpio using RAID]] for more details. <br />
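<br />
An illustrative HOOKS line (the exact list depends on your hardware; what matters is that ''raid'' and ''lvm2'' come before ''filesystems''):<br />
<pre><br />
HOOKS="base udev autodetect ide raid lvm2 filesystems"<br />
</pre><br />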
<br />
Edit your <code>/etc/rc.conf</code>. It should contain a <code>USELVM</code> entry already, which you should change to:<br />
<pre><br />
USELVM="yes"<br />
</pre><br />
''Please Note'': The <code>rc.sysinit</code> script that parses the <code>USELVM</code> variable entry will accept either <code>yes</code> or <code>YES</code>, however it will not accept mixed case. Please be sure you've got your capitalization correct.<br />
<br />
Edit your <code>/etc/fstab</code> to contain the entries:<br />
<pre><br />
/dev/array/root / reiserfs defaults 0 1<br />
/dev/md2 swap swap defaults 0 0<br />
/dev/md1 /boot reiserfs defaults 0 0<br />
</pre><br />
<br />
At this point, make any other configuration changes you need to other files.<br />
<br />
Then exit the configuration menu.<br />
<br />
Since you will not be installing Grub from the installer, select <code>7 Exit Install</code> to leave the installer program.<br />
<br />
Then specify the raid array you're booting from in /mnt/boot/grub/menu.lst like:<br />
# Example with /dev/array/root for ''/'' & /dev/md1 for ''/boot'':<br />
kernel /kernel26 root=/dev/array/root ro md=1,/dev/hda1,/dev/hdb1,/dev/hdc1 md=0,/dev/hda3,/dev/hdb3,/dev/hdc3<br />
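<br />
A complete menu.lst entry might then look like this (illustrative; adjust the title and device names to your setup):<br />
title  Arch Linux<br />
root   (hd0,0)<br />
kernel /kernel26 root=/dev/array/root ro md=1,/dev/hda1,/dev/hdb1,/dev/hdc1 md=0,/dev/hda3,/dev/hdb3,/dev/hdc3<br />
initrd /kernel26.img<br />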
<br />
=== Install Grub on the Primary Hard Drive (and save the RAID config) ===<br />
<br />
This is the last and final step before you have a bootable system!<br />
<br />
As an overview, the basic concept is to copy over the grub bootloader files into /boot/grub, mount a procfs and a device tree inside of /mnt, then chroot to /mnt so you're effectively inside your new system. Once in your new system, you will run grub to install the bootloader in the boot area of your first hard drive. Then we save our new RAID configuration in <tt>/etc/mdadm.conf</tt> so it can be re-assembled automatically after we reboot.<br />
<br />
Copy the GRUB files into place and get into our chroot:<br />
<pre><br />
# cp -a /mnt/usr/lib/grub/i386-pc/* /mnt/boot/grub<br />
# sync<br />
# mount -o bind /dev /mnt/dev<br />
# mount -t proc none /mnt/proc<br />
# chroot /mnt /bin/bash<br />
</pre><br />
<br />
At this point, you may no longer be able to see keys you type at your console. I'm not sure of the reason for this (NOTE: try "chroot /mnt /bin/<shell>"), but you can fix it by typing <code>reset</code> at the prompt.<br />
<br />
Once you've got console echo back on, type:<br />
<pre><br />
# grub<br />
</pre><br />
<br />
After a short wait while grub does some looking around, it should come back with a grub prompt. Do:<br />
<br />
<pre><br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> quit<br />
</pre><br />
<br />
Now you need to save your RAID configuration so it can be re-assembled automatically each time you boot. Previously, this was an unnecessary step in Arch because the RAID drivers were built in to the kernel. But when they are loaded after the kernel boots (as modules), arrays are not autodetected. Hence this configuration file.<br />
<br />
The default <code>/etc/mdadm.conf</code> should be pretty much empty (except for a lot of explanatory comments). All you need to do is capture the output from an mdadm query command and append it to the end of <code>mdadm.conf</code>.<br />
<br />
<pre><br />
# mdadm -D --scan >>/etc/mdadm.conf<br />
</pre><br />
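<br />
The appended lines look something like this (illustrative; the UUIDs on your system will differ):<br />
<pre><br />
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=13ab24c5:a2b3c4d5:e6f70819:2a3b4c5d<br />
ARRAY /dev/md1 level=raid1 num-devices=3 UUID=4d5e6f70:8192a3b4:c5d6e7f8:19202a3b<br />
ARRAY /dev/md2 level=raid1 num-devices=3 UUID=56789abc:def01234:3c4d5e6f:70819203<br />
</pre><br />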
<br />
That's it. You can exit your chroot now by hitting <code>CTRL-D</code> or typing <code>exit</code>.<br />
<br />
=== Reboot ===<br />
<br />
The hard part is all over! Now remove the CD from your CD-ROM drive, and type:<br />
<pre><br />
# reboot<br />
</pre><br />
<br />
=== Install Grub on the Alternate Boot Drives===<br />
<br />
Once you've successfully booted your new system for the first time, you will want to install Grub onto the other two disks (or on the other disk if you have only 2 HDDs) so that, in the event of disk failure, the system can be booted from another drive. Log in to your new system as root and do:<br />
<br />
<pre><br />
# grub<br />
grub> device (hd0) /dev/hdb<br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> device (hd0) /dev/hdc<br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> quit<br />
</pre><br />
<br />
=== Archive your Filesystem Partition Scheme ===<br />
<br />
Now that you're done, it's worth taking a second to archive off the partition state of each of your drives. This guarantees that it will be trivially easy to replace/rebuild a disk in the event that one fails. You do this with the <code>sfdisk</code> tool and the following steps:<br />
<br />
<pre><br />
# mkdir /etc/partitions<br />
# sfdisk --dump /dev/hda >/etc/partitions/disc0.partitions<br />
# sfdisk --dump /dev/hdb >/etc/partitions/disc1.partitions<br />
# sfdisk --dump /dev/hdc >/etc/partitions/disc2.partitions<br />
</pre><br />
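<br />
If a disk fails, you can restore its layout onto the replacement from the corresponding dump; a sketch, assuming the replacement takes the failed disk's place as /dev/hda:<br />
<pre><br />
# sfdisk /dev/hda < /etc/partitions/disc0.partitions<br />
</pre><br />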
<br />
== Management ==<br />
<br />
For LVM management, please have a look at [[Lvm]]<br />
<br />
== Mounting from a Live CD ==<br />
<br />
If you want to mount your RAID partition from a Live CD, use<br />
<br />
<pre><br />
mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3<br />
</pre><br />
<br />
(or whatever mdX and drives apply to you)<br />
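<br />
If the assembled array holds LVM volumes, as in the walkthrough above, activate them before mounting (assuming the Volume Group is named <code>array</code>):<br />
<pre><br />
modprobe dm-mod<br />
vgscan<br />
vgchange -ay<br />
mount /dev/array/root /mnt<br />
</pre><br />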
<br />
== Conclusion==<br />
<br />
You're done! I hope you've succeeded in setting up Arch Linux on your server with RAID and LVM!<br />
<br />
== Credits==<br />
<br />
This document was written by Paul Mattal with significant help from others. Comments and suggestions are welcome at paul at archlinux dot org.<br />
<br />
Thanks to all who have contributed information and suggestions! This includes:<br />
<br />
* Carl Chave<br />
* Guillaume Darbonne</div>Tinyhttps://wiki.archlinux.org/index.php?title=Installing_with_Software_RAID_or_LVM&diff=53071Installing with Software RAID or LVM2008-11-10T13:50:34Z<p>Tiny: /* Activate existing RAID devices and LVM volumes */</p>
<hr />
<div>[[Category:Getting and installing Arch (English)]]<br />
[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
== Disclaimer==<br />
<br />
Installing a system with RAID is a complex process. Anything could go wrong. You could make a mistake, I could make a mistake, there could be a bug in something. Backup all your data first. Make sure only the drives involved in the installation are attached while doing the install. You've been warned!<br />
<br />
Also note that this document is up-to-date with all "Archisms" as of 2008.06 'Overlord'. It may not be applicable to previous releases of Arch Linux.<br />
<br />
=== RAID ===<br />
<br />
RAID (Redundant Array of Independent Disks) is designed to prevent data loss in the event of a hard disk failure. There are different "levels" of RAID. RAID 0 (striping) isn't really RAID at all, because it provides no redundancy. It does, however, provide speed benefit. We'll use RAID 0 for swap, on the assumption that you're using a desktop, where the speed increase is worth the possibility of having your system crash if one of your drives fails. On a server, you'd almost certainly want RAID 1 or RAID 5. The size of a RAID 0 array block device is the size of the smallest component partition times the number of component partitions.<br />
<br />
RAID 1 is the most straightforward RAID level: straight mirroring. As with other RAID levels, it only makes sense if the partitions are on different physical disk drives. If one of those drives fails, the block device provided by the RAID array will continue to function as normal. We'll be using RAID 1 for everything except swap. Note that RAID 1 is the only option for the boot partition, because bootloaders (which read the boot partition) don't understand RAID, but a RAID 1 component partition can be read as a normal partition. The size of a RAID 1 array block device is the size of the smallest component partition.<br />
<br />
RAID 5 is the only other RAID level you're likely to want. It requires 3 or more physical drives, and provides the redundancy of RAID 1 combined with the speed and size benefits of RAID 0. RAID 5 uses striping, like RAID 0, but also stores parity blocks distributed across each member disk. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 5 can withstand the loss of one member disk.<br />
<br />
'''ATTENTION: Having RAID does not mean you don't need backups - read the CAVEATS section below!'''<br />
<br />
=== LVM ===<br />
<br />
[http://sourceware.org/lvm2/ LVM] (Logical Volume Management) makes use of the [http://sources.redhat.com/dm/ device-mapper] feature of the Linux kernel. It provides a system of specifying partitions independently of the layout of the underlying disk. What this means for you is that you can extend and shrink partitions (subject to the filesystem you use allowing this) and add and remove partitions without worrying about whether you have enough contiguous space on a particular disk, without getting caught up in the problems of fdisking a disk that is in use (and wondering whether the kernel is using the old or new partition table) and without having to move other partition out of the way.<br />
<br />
This is strictly an ease-of-management issue: it doesn't provide any addition security. However, it sits nicely with the other two technologies we're using.<br />
<br />
Note that we're not using LVM for the boot partition (because of the bootloader problem).<br />
<br />
==CAVEATS==<br />
<br />
=== Security (redundancy) ===<br />
<br />
Again, RAID does not provide a guarantee that your data is safe. If there is a fire, if your computer is stolen or if you have multiple hard drive failures, RAID won't protect you. So '''make backups'''. Whether you use tape drives, DVDs, CDROMs or another computer, keep a copy of your data out of your computer (and preferably offsite) and keep it up to date. Get into the habit of making regular backups. If you organize the data on your computer in a way that separates things you are currently working on from "archived" things that are unlikely to change, you can back up the "current" stuff frequently, and the "archived" stuff occasionally.<br />
<br />
== General Approach==<br />
<br />
For starters, note that this document seeks primarily to give you a good example walkthrough of how to install Arch with Software RAID or LVM support for a typical case. It won't try to explain all the possible things you can do -- it's more to give you an example of something that will work that you can then tweak to your own purposes.<br />
<br />
In this example, the machine I'm using will have three similar IDE hard drives, at least 80GB each in size, installed as primary master, primary slave, and secondary master, with my installation CD-ROM drive as the secondary slave. I will assume these can be reached as /dev/hda, /dev/hdb, and /dev/hdc, and that the cdrom drive is /dev/cdrom.<br />
<br />
We'll create a 100MB /boot partition, a 2048MB (2GB) swap partition and a ~ 78GB root partition using LVM. The boot and swap partitions will be RAID1, while the root partition will be RAID5. Why RAID1? For boot, it's so you can boot the kernel from grub (which has no RAID drivers!), and for swap, it's for speed.<br />
<br />
Each RAID1 redundant partition will have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the size of a single one of these physical partitions. A RAID1 redundant partition with 3 physical partitions can lose any two of its physical partitions and still function.<br />
<br />
Each RAID5 redundant partition will also have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the combined size of ''two'' of these physical partitions, with the third drive being consumed to provide parity information. A RAID5 redundant partition with 3 physical partitions can lose any one of its physical partitions and still function.<br />
<br />
== Get the Arch Installer CD ==<br />
<br />
Please note that in order to use LVM, you need the <code>lvm2</code> and <code>dev-mapper</code> packages installed, otherwise you won't be able to see any LVM partitions on reboot, until you install those packages. Note that the Arch 0.7.1 Base installer ''does not'' contain these packages, but the Arch 0.7.1 Full installer does. So if you're going to use LVM, you'll need to download the bigger ISO. My example will describe you using the Full installer; the changes should be minimal if you wish to use the Base installer instead.<br />
<br />
== Outline ==<br />
<br />
Just to give you an idea of how all this will work, I'll outline the steps. The details for these will be filled in below.<br />
<br />
# Boot the Installer CD<br />
# Partition the Hard Drives<br />
# Create the RAID Redundant Partitions<br />
# Create and Mount the Main Filesystems<br />
# Setup LVM and Create the / LVM Volume<br />
# Install and Configure Arch<br />
# Install Grub on the Primary Hard Drive<br />
# Unmount Filesystems and Reboot<br />
# Install Grub on the Alternate Boot Drives<br />
# Archive your Filesystem Partition Scheme<br />
<br />
== Procedure==<br />
<br />
=== Boot the Installer CD===<br />
<br />
First, load all your drives in the machine. Then boot the Arch Linux 0.7 ''Full'' installation CD.<br />
<br />
At the syslinux boot prompt, hit enter: we want to use the SCSI kernel, which has support for RAID and LVM built in.<br />
<br />
So far, this is easy. Don't worry, it gets harder.<br />
<br />
=== Partition the Hard Drives===<br />
<br />
We'll use <code>cfdisk</code> to do this partitioning. We want to create 4 partitions on each of the three drive:<br />
<br />
Partition 1 (/boot): 100MB, type FD, bootable<br><br />
Partition 2 (swap): 2048MB, type FD<br><br />
Partition 3 (LVM): <Rest of the drive>, type FD<br />
<br />
Note that in general, in <code>cfdisk</code>, you can use the first letter of each <code>[[Bracketed Option]]</code> to select it; however, this is not true for the <code>[[Write]]</code> command, you have to hold SHIFT as well to select it.<br />
<br />
First run:<br />
<pre><br />
# cfdisk /dev/hda<br />
</pre><br />
<br />
Create each partition in order:<br />
<br />
# Select <code>'''New'''</code>.<br />
# Hit Enter to make it a <code>'''Primary'''</code> partition.<br />
# Type the appropriate size (in MB), or for Partition 3, just hit enter to select the remainder of the drive.<br />
# Hit Enter to choose to place the partition at the <code>'''Beginning'''</code>.<br />
# Select <code>'''Type'''</code>, hit enter to see the second page of the list, and then type <code>fd</code> for the Linux RAID Autodetect type.<br />
# ''For Partition 1 on each drive'', select <code>'''Bootable'''</code>.<br />
# Hit down arrow (selecting the remaining freespace) to go on to the next partition to be created.<br />
<br />
When you're done, select <code>'''Write'''</code>, and confirm <code>y-e-s</code> that you want to write the partition information to disk.<br />
<br />
Then select <code>'''Quit'''</code>.<br />
<br />
Repeat this for the other two drives:<br />
<br />
<pre><br />
# cfdisk /dev/hdb<br />
# cfdisk /dev/hdc<br />
</pre><br />
<br />
Create the same exact partitions on each disk. If a group of partitions of different sizes are assembled to create a redundant RAID partition, it ''will'' work, but the redundant partition will be in multiples of the size of the smallest one, leaving the rest of the allocated drive space to waste.<br />
<br />
=== Load the RAID Modules ===<br />
<br />
Before using <code>mdadm</code>, you need load the modules for the RAID levels you'll be using. In this example, we're using levels 1 and 5, so we'll load those. You can ignore any modprobe errors like <code>"cannot insert md-mod.ko: File exists"</code>. Busybox's modprobe can be a little slow sometimes.<br />
<br />
<pre><br />
# modprobe raid1<br />
# modprobe raid5<br />
</pre><br />
<br />
=== Create the RAID Redundant Partitions ===<br />
<br />
Now that you've created all the physical partitions, you're ready to set up RAID. The tool you use to create RAID arrays is <code>mdadm</code>.<br />
<br />
To create /dev/md0 (/):<br />
<pre><br />
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hda3 /dev/hdb3 /dev/hdc3<br />
</pre><br />
<br />
To create /dev/md1 (/boot):<br />
<pre><br />
# mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/hda1 /dev/hdb1 /dev/hdc1<br />
</pre><br />
<br />
To create /dev/md2 (swap):<br />
<pre><br />
# mdadm --create /dev/md2 --level=1 --raid-devices=3 /dev/hda2 /dev/hdb2 /dev/hdc2<br />
</pre><br />
<br />
At this point, you should have working RAID partitions. When you create the RAID partitions, they need to sync themselves so the contents of all three physical partitions are the same on all three drives. The hard drives lights will come on as they try to sync up. You can monitor the progress by typing:<br />
<pre><br />
# cat /proc/mdstat<br />
</pre><br />
<br />
You can also get particular information about, say, the root partition by typing:<br />
<pre><br />
# mdadm --misc --detail /dev/md0<br />
</pre><br />
<br />
You don't have to wait for synchronization to finish -- you may proceed with the installation while syncronization is still occurring. You can even reboot at the end of the installation with synchronization still going.<br />
<br />
=== Setup LVM and Create the / LVM Volume===<br />
<br />
If you are using an Arch Linux install CD <= 0.7.1, you have to create and mount a sysfs partition on /sys, to keep lvm from getting cranky. If you forget to do this, instead of giving you an intelligent error message, lvm will simply Segmentation fault at various inconvenient times. ''Be warned!''<br />
<br />
To mount the sysfs partition, do:<br />
<pre><br />
# mkdir /sys<br />
# mount -t sysfs none /sys<br />
</pre><br />
<br />
Make sure that the device-mapper module is loaded:<br />
<pre><br />
# modprobe dm-mod<br />
</pre><br />
<br />
Now you need to do is tell LVM you have a Physical Volume for it to use. It's really a virtual RAID volume (<code>/dev/md3</code>), but LVM doesn't know this, or really care. Do:<br />
<pre><br />
# lvm pvcreate /dev/md0<br />
</pre><br />
<br />
This might fail if you're using raid or creating PV on an existing Volume Group. If so you might want to add -ff option.<br />
<br />
LVM should report back that it has added the Physical Volume. You can confirm this with:<br />
<pre><br />
# lvm pvscan<br />
</pre><br />
<br />
Now it's time to create a Volume Group (which I'll call <code>array</code>) which has control over the LVM Physical Volume we created. Do:<br />
<pre><br />
# lvm vgcreate array /dev/md0<br />
</pre><br />
<br />
LVM should report that it has created the Volume Group <code>array</code>. You can confirm this with:<br />
<pre><br />
# lvm vgscan<br />
</pre><br />
<br />
Next, we create a Logical Volume called <code>root</code> in Volume Group <code>array</code> which is 50GB in size:<br />
<pre><br />
# lvm lvcreate --size 50G --name root array<br />
</pre><br />
<br />
LVM should report that it created the Logical Volume <code>root</code>. You can confirm this with:<br />
<pre><br />
# lvm lvscan<br />
</pre><br />
<br />
The lvm volume should now be available as <code>/dev/array/root</code>.<br />
<br />
=== Activate existing RAID devices and LVM volumes ===<br />
<br />
If you already have RAID partitions created on your system and you've also set up LVM and all you want is enabling them<br />
follow this simple procedure. ''This might come in handy if you're switching distros and don't want to lose data in /home<br />
for example.'' <br />
<br />
First you need to enable RAID support. RAID1 in this case.<br />
<pre><br />
modprobe raid1<br />
</pre><br />
<br />
Activate RAID devices. I have md0 for /boot and md1 for LVM where two logical volumes will reside.<br />
<pre><br />
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1<br />
mdadm --assemble /dev/md1 /dev/sda3 /dev/sdb3<br />
</pre><br />
<br />
RAID devices should now be enabled. Check /proc/mdstat.<br />
<br />
If you haven't loaded kernel LVM support do so now.<br />
<pre><br />
modprobe dm-mod<br />
</pre><br />
Startup of LVM requires just the following two commands: <br />
<pre><br />
vgscan<br />
vgchange -ay<br />
</pre><br />
<br />
You can now jump to '''[3] Set Filesystem Mountpoints''' in your menu based setup and mount created<br />
partitions as needed.<br />
<br />
=== Create and Mount the Filesystems ===<br />
'''When you are using a setup that is newer then 2008.03; this step is optional!'''<br />
<br />
I like Reiser (3.x), so I use it for almost everything. GRUB supports it for booting, and it handles small files well. It's about as well tested as EXT3. You can choose other types if you wish.<br />
<br />
To create /boot:<br />
<pre><br />
# mkreiserfs /dev/md1<br />
</pre><br />
<br />
To create swap space:<br />
<pre><br />
# mkswap /dev/md2<br />
</pre><br />
<br />
To create /:<br />
<pre><br />
# mkreiserfs /dev/array/root<br />
</pre><br />
<br />
Now, mount the boot and root partitions where the installer expects them:<br />
<pre><br />
# mount /dev/array/root /mnt<br />
# mkdir /mnt/boot<br />
# mount /dev/md1 /mnt/boot<br />
</pre><br />
<br />
We've created all our filesystems! And we're ready to install the OS!<br />
<br />
=== Install and Configure Arch ===<br />
<br />
This section doesn't attempt to teach you all about the Arch Installer. It leaves out some details here and there for brevity, but still seeks to be basically follow-able. If you're having trouble with the installer, you may wish to seek help elsewhere in the Wiki or forums.<br />
<br />
Here's the walkthrough:<br />
<br />
* Type <code>/arch/setup</code> to launch the main installer.<br />
* Select <code> < OK ></code> at the opening screen.<br />
* Select <code>1 CD_ROM</code> to install from CD-ROM (or <code>2 FTP</code> if you have a local Arch mirror on FTP).<br />
* If you have skipped the optional step (''Create and Mount the Filesystems'') above, and haven't created a fileystem yet, select <code>1 Prepare Hard Drive</code> > <code>3 Set Filesystem Mountpoints</code> and create your filesystems and mountpoints here'''<br />
* Now at the main menu, Select <code>2 Select Packages</code> and select all the packages in the ''base'' category, as well as the <code>mdadm</code> and <code>lvm2</code> packages from the ''system'' category. Note: mdadm & lvm2 are included in ''base'' category since arch-base-0.7.2.<br />
* Select <code>3 Install Packages</code>. This will take a little while.<br />
* Select <code>4 Configure System</code>:<br />
<br />
Add the ''raid'' and ''lvm2'' hook to the HOOKS list in /etc/mkinitcpio.conf (before 'filesystems', NOT after).<br />
See [[Configuring_mkinitcpio#Using_raid|Configuring mkinitpcio using RAID]] for more details. <br />
<br />
Edit your <code>/etc/rc.conf</code>. It should contain a <code>USELVM</code> entry already, which you should change to:<br />
<pre><br />
USELVM="yes"<br />
</pre><br />
''Please Note'': The <code>rc.sysinit</code> script that parses the <code>USELVM</code> variable entry will accept either <code>yes</code> or <code>YES</code>, however it will not accept mixed case. Please be sure you've got your capitalization correct.<br />
<br />
Edit your <code>/etc/fstab</code> to contain the entries:<br />
<pre><br />
/dev/array/root / reiserfs defaults 0 1<br />
/dev/md2 swap swap defaults 0 0<br />
/dev/md1 /boot reiserfs defaults 0 0<br />
</pre><br />
<br />
At this point, make any other configuration changes you need to other files.<br />
<br />
Then exit the configuration menu.<br />
<br />
Since you will not be installing Grub from the installer, select <code>7 Exit Install</code> to leave the installer program.<br />
<br />
Then specify the raid array you're booting from in /mnt/boot/grub/menu.lst like:<br />
# Example with /dev/array/root for ''/'' & /dev/md1 for ''/boot'':<br />
kernel /kernel26 root=/dev/array/root ro md=1,/dev/hda1,/dev/hdb1 md=0,/dev/hda3,/dev/hdb3<br />
<br />
=== Install Grub on the Primary Hard Drive (and save the RAID config) ===<br />
<br />
This is the last and final step before you have a bootable system!<br />
<br />
As an overview, the basic concept is to copy over the grub bootloader files into /boot/grub, mount a procfs and a device tree inside of /mnt, then chroot to /mnt so you're effectively inside your new system. Once in your new system, you will run grub to install the bootloader in the boot area of your first hard drive. Then we save our new RAID configuration in <tt>/etc/mdadm.conf</tt> so it can be re-assembled automatically after we reboot.<br />
<br />
Copy the GRUB files into place and get into our chroot:<br />
<pre><br />
# cp -a /mnt/usr/lib/grub/i386-pc/* /mnt/boot/grub<br />
# sync<br />
# mount -o bind /dev /mnt/dev<br />
# mount -t proc none /mnt/proc<br />
# chroot /mnt /bin/bash<br />
</pre><br />
<br />
At this point, you may no longer be able to see keys you type at your console. I'm not sure of the reason for this (NOTE: try "chroot /mnt /bin/<shell>"), but it you can fix it by typing <code>reset</code> at the prompt.<br />
<br />
Once you've got console echo back on, type:<br />
<pre><br />
# grub<br />
</pre><br />
<br />
After a short wait while grub does some looking around, it should come back with a grub prompt. Do:<br />
<br />
<pre><br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> quit<br />
</pre><br />
<br />
Now you need to save our RAID configuration so it can be re-assembled automatically each time we boot. Previously, this was an unnecessary step in Arch because the RAID drivers were built in to the kernel. But when loaded after the kernel boots (as modules), arrays are not autodetected. Hence this configuration file.<br />
<br />
The default <code>/etc/mdadm.conf</code> should be pretty much empty (except for a lot of explanatory comments). All you need to do is capture the output from an mdadm query command and append it to the end of <code>mdadm.conf</code>.<br />
<br />
<pre><br />
# mdadm -D --scan >>/etc/mdadm.conf<br />
</pre><br />
<br />
That's it. You can exit your chroot now by hitting <code>CTRL-D</code> or typing <code>exit</code>.<br />
<br />
=== Reboot ===<br />
<br />
The hard part is all over! Now remove the CD from your CD-ROM drive, and type:<br />
<pre><br />
# reboot<br />
</pre><br />
<br />
=== Install Grub on the Alternate Boot Drives===<br />
<br />
Once you've successfully booted your new system for the first time, you will want to install Grub onto the other two disks (or on the other disk if you have only 2 HDDs) so that, in the event of disk failure, the system can be booted from another drive. Log in to your new system as root and do:<br />
<br />
<pre><br />
# grub<br />
grub> device (hd0) /dev/hdb<br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> device (hd0) /dev/hdc<br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> quit<br />
</pre><br />
<br />
=== Archive your Filesystem Partition Scheme ===<br />
<br />
Now that you're done, it's worth taking a second to archive off the partition state of each of your drives. This guarantees that it will be trivially easy to replace/rebuild a disk in the event that one fails. You do this with the <code>sfdisk</code> tool and the following steps:<br />
<br />
<pre><br />
# mkdir /etc/partitions<br />
# sfdisk --dump /dev/hda >/etc/partitions/disc0.partitions<br />
# sfdisk --dump /dev/hdb >/etc/partitions/disc1.partitions<br />
# sfdisk --dump /dev/hdc >/etc/partitions/disc2.partitions<br />
</pre><br />
<br />
== Management ==<br />
<br />
For LVM management, please have a look at [[Lvm]]<br />
<br />
== Mounting from a Live CD ==<br />
<br />
If you want to mount your RAID partition from a Live CD, use<br />
<br />
<pre><br />
mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3<br />
</pre><br />
<br />
(or whatever mdX and drives apply to you)<br />
<br />
== Conclusion==<br />
<br />
You're done! I hope you've succeeded in setting up Arch Linux on your server with RAID and LVM!<br />
<br />
== Credits==<br />
<br />
This document was written by Paul Mattal with with significant help from others. Comments and suggestions are welcome at paul at archlinux dot org.<br />
<br />
Thanks to all who have contributed information and suggestions! This includes:<br />
<br />
* Carl Chave<br />
* Guillaume Darbonne</div>Tinyhttps://wiki.archlinux.org/index.php?title=Installing_with_Software_RAID_or_LVM&diff=53068Installing with Software RAID or LVM2008-11-10T13:48:40Z<p>Tiny: /* Procedure */</p>
<hr />
<div>[[Category:Getting and installing Arch (English)]]<br />
[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
== Disclaimer==<br />
<br />
Installing a system with RAID is a complex process. Anything could go wrong. You could make a mistake, I could make a mistake, there could be a bug in something. Backup all your data first. Make sure only the drives involved in the installation are attached while doing the install. You've been warned!<br />
<br />
Also note that this document is up-to-date with all "Archisms" as of 2008.06 'Overlord'. It may not be applicable to previous releases of Arch Linux.<br />
<br />
=== RAID ===<br />
<br />
RAID (Redundant Array of Independent Disks) is designed to prevent data loss in the event of a hard disk failure. There are different "levels" of RAID. RAID 0 (striping) isn't really RAID at all, because it provides no redundancy. It does, however, provide speed benefit. We'll use RAID 0 for swap, on the assumption that you're using a desktop, where the speed increase is worth the possibility of having your system crash if one of your drives fails. On a server, you'd almost certainly want RAID 1 or RAID 5. The size of a RAID 0 array block device is the size of the smallest component partition times the number of component partitions.<br />
<br />
RAID 1 is the most straightforward RAID level: straight mirroring. As with other RAID levels, it only makes sense if the partitions are on different physical disk drives. If one of those drives fails, the block device provided by the RAID array will continue to function as normal. We'll be using RAID 1 for everything except swap. Note that RAID 1 is the only option for the boot partition, because bootloaders (which read the boot partition) don't understand RAID, but a RAID 1 component partition can be read as a normal partition. The size of a RAID 1 array block device is the size of the smallest component partition.<br />
<br />
RAID 5 is the only other RAID level you're likely to want. It requires 3 or more physical drives, and provides the redundancy of RAID 1 combined with the speed and size benefits of RAID 0. RAID 5 uses striping, like RAID 0, but also stores parity blocks distributed across each member disk. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 5 can withstand the loss of one member disk.<br />
<br />
'''ATTENTION: Having RAID does not mean you don't need backups - read the CAVEATS section below!'''<br />
<br />
=== LVM ===<br />
<br />
[http://sourceware.org/lvm2/ LVM] (Logical Volume Management) makes use of the [http://sources.redhat.com/dm/ device-mapper] feature of the Linux kernel. It provides a system of specifying partitions independently of the layout of the underlying disk. What this means for you is that you can extend and shrink partitions (subject to the filesystem you use allowing this) and add and remove partitions without worrying about whether you have enough contiguous space on a particular disk, without getting caught up in the problems of fdisking a disk that is in use (and wondering whether the kernel is using the old or new partition table) and without having to move other partition out of the way.<br />
<br />
This is strictly an ease-of-management issue: it doesn't provide any addition security. However, it sits nicely with the other two technologies we're using.<br />
<br />
Note that we're not using LVM for the boot partition (because of the bootloader problem).<br />
<br />
==CAVEATS==<br />
<br />
=== Security (redundancy) ===<br />
<br />
Again, RAID does not provide a guarantee that your data is safe. If there is a fire, if your computer is stolen or if you have multiple hard drive failures, RAID won't protect you. So '''make backups'''. Whether you use tape drives, DVDs, CDROMs or another computer, keep a copy of your data out of your computer (and preferably offsite) and keep it up to date. Get into the habit of making regular backups. If you organize the data on your computer in a way that separates things you are currently working on from "archived" things that are unlikely to change, you can back up the "current" stuff frequently, and the "archived" stuff occasionally.<br />
<br />
== General Approach==<br />
<br />
For starters, note that this document seeks primarily to give you a good example walkthrough of how to install Arch with Software RAID or LVM support for a typical case. It won't try to explain all the possible things you can do -- it's more to give you an example of something that will work that you can then tweak to your own purposes.<br />
<br />
In this example, the machine I'm using will have three similar IDE hard drives, at least 80GB each in size, installed as primary master, primary slave, and secondary master, with my installation CD-ROM drive as the secondary slave. I will assume these can be reached as /dev/hda, /dev/hdb, and /dev/hdc, and that the cdrom drive is /dev/cdrom.<br />
<br />
We'll create a 100MB /boot partition, a 2048MB (2GB) swap partition and a ~ 78GB root partition using LVM. The boot and swap partitions will be RAID1, while the root partition will be RAID5. Why RAID1? For boot, it's so you can boot the kernel from grub (which has no RAID drivers!), and for swap, it's for speed.<br />
<br />
Each RAID1 redundant partition will have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the size of a single one of these physical partitions. A RAID1 redundant partition with 3 physical partitions can lose any two of its physical partitions and still function.<br />
<br />
Each RAID5 redundant partition will also have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the combined size of ''two'' of these physical partitions, with the third drive being consumed to provide parity information. A RAID5 redundant partition with 3 physical partitions can lose any one of its physical partitions and still function.<br />
<br />
== Get the Arch Installer CD ==<br />
<br />
Please note that in order to use LVM, you need the <code>lvm2</code> and <code>dev-mapper</code> packages installed, otherwise you won't be able to see any LVM partitions on reboot, until you install those packages. Note that the Arch 0.7.1 Base installer ''does not'' contain these packages, but the Arch 0.7.1 Full installer does. So if you're going to use LVM, you'll need to download the bigger ISO. My example will describe you using the Full installer; the changes should be minimal if you wish to use the Base installer instead.<br />
<br />
== Outline ==<br />
<br />
Just to give you an idea of how all this will work, I'll outline the steps. The details for these will be filled in below.<br />
<br />
# Boot the Installer CD<br />
# Partition the Hard Drives<br />
# Create the RAID Redundant Partitions<br />
# Setup LVM and Create the / LVM Volume<br />
# Create and Mount the Filesystems<br />
# Install and Configure Arch<br />
# Install Grub on the Primary Hard Drive<br />
# Unmount Filesystems and Reboot<br />
# Install Grub on the Alternate Boot Drives<br />
# Archive your Filesystem Partition Scheme<br />
<br />
== Procedure==<br />
<br />
=== Boot the Installer CD===<br />
<br />
First, install all your drives in the machine. Then boot the Arch Linux 0.7 ''Full'' installation CD.<br />
<br />
At the syslinux boot prompt, hit enter: we want to use the SCSI kernel, which has support for RAID and LVM built in.<br />
<br />
So far, this is easy. Don't worry, it gets harder.<br />
<br />
=== Partition the Hard Drives===<br />
<br />
We'll use <code>cfdisk</code> to do this partitioning. We want to create three partitions on each of the three drives:<br />
<br />
Partition 1 (/boot): 100MB, type FD, bootable<br><br />
Partition 2 (swap): 2048MB, type FD<br><br />
Partition 3 (LVM): <Rest of the drive>, type FD<br />
<br />
Note that in general, in <code>cfdisk</code>, you can use the first letter of each <code>[[Bracketed Option]]</code> to select it; however, this is not true for the <code>[[Write]]</code> command: for that one you must hold SHIFT as well.<br />
<br />
First run:<br />
<pre><br />
# cfdisk /dev/hda<br />
</pre><br />
<br />
Create each partition in order:<br />
<br />
# Select <code>'''New'''</code>.<br />
# Hit Enter to make it a <code>'''Primary'''</code> partition.<br />
# Type the appropriate size (in MB), or for Partition 3, just hit enter to select the remainder of the drive.<br />
# Hit Enter to choose to place the partition at the <code>'''Beginning'''</code>.<br />
# Select <code>'''Type'''</code>, hit enter to see the second page of the list, and then type <code>fd</code> for the Linux RAID Autodetect type.<br />
# ''For Partition 1 on each drive'', select <code>'''Bootable'''</code>.<br />
# Hit the down arrow (selecting the remaining free space) to go on to the next partition to be created.<br />
<br />
When you're done, select <code>'''Write'''</code>, and confirm <code>y-e-s</code> that you want to write the partition information to disk.<br />
<br />
Then select <code>'''Quit'''</code>.<br />
<br />
Repeat this for the other two drives:<br />
<br />
<pre><br />
# cfdisk /dev/hdb<br />
# cfdisk /dev/hdc<br />
</pre><br />
<br />
Create exactly the same partitions on each disk. If partitions of different sizes are assembled into a redundant RAID partition, it ''will'' work, but the array's capacity will be determined by the smallest component partition, leaving the rest of the allocated drive space to waste.<br />
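<br />
If all three drives are identical, you can avoid repeating the cfdisk steps by copying the finished partition table from the first drive with <code>sfdisk</code>. This is just a sketch, assuming identical drive geometry; double-check the result in cfdisk afterwards:<br />
<pre><br />
# sfdisk -d /dev/hda | sfdisk /dev/hdb<br />
# sfdisk -d /dev/hda | sfdisk /dev/hdc<br />
</pre><br />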
<br />
=== Load the RAID Modules ===<br />
<br />
Before using <code>mdadm</code>, you need to load the modules for the RAID levels you'll be using. In this example, we're using levels 1 and 5, so we'll load those. You can ignore any modprobe errors like <code>"cannot insert md-mod.ko: File exists"</code>. Busybox's modprobe can be a little slow sometimes.<br />
<br />
<pre><br />
# modprobe raid1<br />
# modprobe raid5<br />
</pre><br />
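<br />
If you want to confirm that the modules actually loaded, listing the loaded modules should show them:<br />
<pre><br />
# lsmod | grep raid<br />
</pre><br />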
<br />
=== Create the RAID Redundant Partitions ===<br />
<br />
Now that you've created all the physical partitions, you're ready to set up RAID. The tool you use to create RAID arrays is <code>mdadm</code>.<br />
<br />
To create /dev/md0 (/):<br />
<pre><br />
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hda3 /dev/hdb3 /dev/hdc3<br />
</pre><br />
<br />
To create /dev/md1 (/boot):<br />
<pre><br />
# mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/hda1 /dev/hdb1 /dev/hdc1<br />
</pre><br />
<br />
To create /dev/md2 (swap):<br />
<pre><br />
# mdadm --create /dev/md2 --level=1 --raid-devices=3 /dev/hda2 /dev/hdb2 /dev/hdc2<br />
</pre><br />
<br />
At this point, you should have working RAID partitions. When you create the RAID partitions, they need to sync themselves so that the contents of all three physical partitions are the same on all three drives. The hard drive lights will come on as they sync up. You can monitor the progress by typing:<br />
<pre><br />
# cat /proc/mdstat<br />
</pre><br />
<br />
You can also get particular information about, say, the root partition by typing:<br />
<pre><br />
# mdadm --misc --detail /dev/md0<br />
</pre><br />
<br />
You don't have to wait for synchronization to finish -- you may proceed with the installation while synchronization is still occurring. You can even reboot at the end of the installation with synchronization still going.<br />
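<br />
If you'd rather have the status refresh itself than re-run the command by hand, and the <code>watch</code> utility is available in your install environment, you can use:<br />
<pre><br />
# watch -n 1 cat /proc/mdstat<br />
</pre><br />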
<br />
=== Setup LVM and Create the / LVM Volume===<br />
<br />
If you are using an Arch Linux install CD <= 0.7.1, you have to create and mount a sysfs partition on /sys to keep lvm from getting cranky. If you forget to do this, instead of giving you an intelligent error message, lvm will simply crash with a segmentation fault at various inconvenient times. ''Be warned!''<br />
<br />
To mount the sysfs partition, do:<br />
<pre><br />
# mkdir /sys<br />
# mount -t sysfs none /sys<br />
</pre><br />
<br />
Make sure that the device-mapper module is loaded:<br />
<pre><br />
# modprobe dm-mod<br />
</pre><br />
<br />
Now you need to tell LVM that you have a Physical Volume for it to use. It's really a virtual RAID volume (<code>/dev/md0</code>), but LVM doesn't know this, or really care. Do:<br />
<pre><br />
# lvm pvcreate /dev/md0<br />
</pre><br />
<br />
This might fail if the device already contains RAID metadata or belongs to an existing Volume Group; if so, you may need to add the <code>-ff</code> option.<br />
<br />
LVM should report back that it has added the Physical Volume. You can confirm this with:<br />
<pre><br />
# lvm pvscan<br />
</pre><br />
<br />
Now it's time to create a Volume Group (which I'll call <code>array</code>) that has control over the LVM Physical Volume we created. Do:<br />
<pre><br />
# lvm vgcreate array /dev/md0<br />
</pre><br />
<br />
LVM should report that it has created the Volume Group <code>array</code>. You can confirm this with:<br />
<pre><br />
# lvm vgscan<br />
</pre><br />
<br />
Next, we create a 50GB Logical Volume called <code>root</code> in Volume Group <code>array</code>, leaving the rest of the group free for later growth:<br />
<pre><br />
# lvm lvcreate --size 50G --name root array<br />
</pre><br />
<br />
LVM should report that it created the Logical Volume <code>root</code>. You can confirm this with:<br />
<pre><br />
# lvm lvscan<br />
</pre><br />
<br />
The LVM volume should now be available as <code>/dev/array/root</code>.<br />
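<br />
One payoff of this layout is that the root volume can be grown later without repartitioning. As a sketch (assuming the Volume Group still has free extents and the ReiserFS filesystem created in a later section), you could eventually run:<br />
<pre><br />
# lvm lvextend -L +10G /dev/array/root<br />
# resize_reiserfs /dev/array/root<br />
</pre><br />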
<br />
=== Activate existing RAID devices and LVM volumes ===<br />
<br />
If you already have RAID partitions and LVM volumes set up on your system and all you want to do is enable them, follow this simple procedure. ''This might come in handy if, for example, you're switching distros and don't want to lose the data in /home.''<br />
<br />
First you need to enable RAID support (RAID1 in this case).<br />
<pre><br />
modprobe raid1<br />
</pre><br />
<br />
Activate the RAID devices. In this example, md0 holds /boot and md1 holds LVM, where two logical volumes will reside.<br />
<pre><br />
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1<br />
mdadm --assemble /dev/md1 /dev/sda3 /dev/sdb3<br />
</pre><br />
<br />
RAID devices should now be enabled. Check /proc/mdstat.<br />
<br />
Startup of LVM requires just the following two commands: <br />
<pre><br />
vgscan<br />
vgchange -ay<br />
</pre><br />
<br />
You can now jump to '''[3] Set Filesystem Mountpoints''' in the menu-based setup and mount the created partitions as needed.<br />
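<br />
If you don't remember which physical partitions belong to which array, <code>mdadm</code> can read the RAID superblocks and report the arrays it finds:<br />
<pre><br />
mdadm --examine --scan<br />
</pre><br />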
<br />
=== Create and Mount the Filesystems ===<br />
'''If you are using an installer newer than 2008.03, this step is optional!'''<br />
<br />
I like Reiser (3.x), so I use it for almost everything. GRUB supports it for booting, and it handles small files well. It's about as well tested as EXT3. You can choose other types if you wish.<br />
<br />
To create /boot:<br />
<pre><br />
# mkreiserfs /dev/md1<br />
</pre><br />
<br />
To create swap space:<br />
<pre><br />
# mkswap /dev/md2<br />
</pre><br />
<br />
To create /:<br />
<pre><br />
# mkreiserfs /dev/array/root<br />
</pre><br />
<br />
Now, mount the boot and root partitions where the installer expects them:<br />
<pre><br />
# mount /dev/array/root /mnt<br />
# mkdir /mnt/boot<br />
# mount /dev/md1 /mnt/boot<br />
</pre><br />
<br />
We've created all our filesystems! And we're ready to install the OS!<br />
<br />
=== Install and Configure Arch ===<br />
<br />
This section doesn't attempt to teach you all about the Arch Installer. It leaves out some details here and there for brevity, but should still be easy to follow. If you're having trouble with the installer, you may wish to seek help elsewhere in the Wiki or forums.<br />
<br />
Here's the walkthrough:<br />
<br />
* Type <code>/arch/setup</code> to launch the main installer.<br />
* Select <code> < OK ></code> at the opening screen.<br />
* Select <code>1 CD_ROM</code> to install from CD-ROM (or <code>2 FTP</code> if you have a local Arch mirror on FTP).<br />
* If you have skipped the optional step (''Create and Mount the Filesystems'') above and haven't created a filesystem yet, select <code>1 Prepare Hard Drive</code> > <code>3 Set Filesystem Mountpoints</code> and create your filesystems and mountpoints here.<br />
* Now at the main menu, select <code>2 Select Packages</code> and select all the packages in the ''base'' category, as well as the <code>mdadm</code> and <code>lvm2</code> packages from the ''system'' category. Note: mdadm and lvm2 have been included in the ''base'' category since arch-base-0.7.2.<br />
* Select <code>3 Install Packages</code>. This will take a little while.<br />
* Select <code>4 Configure System</code>:<br />
<br />
Add the ''raid'' and ''lvm2'' hooks to the HOOKS list in /etc/mkinitcpio.conf (before 'filesystems', NOT after).<br />
See [[Configuring_mkinitcpio#Using_raid|Configuring mkinitcpio with RAID]] for more details. <br />
<br />
Edit your <code>/etc/rc.conf</code>. It should contain a <code>USELVM</code> entry already, which you should change to:<br />
<pre><br />
USELVM="yes"<br />
</pre><br />
''Please Note'': The <code>rc.sysinit</code> script that parses the <code>USELVM</code> variable will accept either <code>yes</code> or <code>YES</code>; however, it will not accept mixed case. Please be sure you've got your capitalization correct.<br />
<br />
Edit your <code>/etc/fstab</code> to contain the entries:<br />
<pre><br />
/dev/array/root / reiserfs defaults 0 1<br />
/dev/md2 swap swap defaults 0 0<br />
/dev/md1 /boot reiserfs defaults 0 0<br />
</pre><br />
<br />
At this point, make any other configuration changes you need to other files.<br />
<br />
Then exit the configuration menu.<br />
<br />
Since you will not be installing Grub from the installer, select <code>7 Exit Install</code> to leave the installer program.<br />
<br />
Then specify the RAID arrays you're booting from in /mnt/boot/grub/menu.lst. For example, with /dev/array/root for ''/'' and /dev/md1 for ''/boot'':<br />
<pre><br />
kernel /kernel26 root=/dev/array/root ro md=1,/dev/hda1,/dev/hdb1 md=0,/dev/hda3,/dev/hdb3<br />
</pre><br />
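<br />
For reference, a complete <code>menu.lst</code> entry might look like the following. This is only a sketch: it assumes the stock <code>/boot/kernel26</code> and <code>/boot/kernel26.img</code> file names from the Arch kernel26 package of this era, so adjust it to match the contents of your /boot:<br />
<pre><br />
title  Arch Linux<br />
root   (hd0,0)<br />
kernel /kernel26 root=/dev/array/root ro md=1,/dev/hda1,/dev/hdb1 md=0,/dev/hda3,/dev/hdb3<br />
initrd /kernel26.img<br />
</pre><br />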
<br />
=== Install Grub on the Primary Hard Drive (and save the RAID config) ===<br />
<br />
This is the final step before you have a bootable system!<br />
<br />
As an overview, the basic concept is to copy over the grub bootloader files into /boot/grub, mount a procfs and a device tree inside of /mnt, then chroot to /mnt so you're effectively inside your new system. Once in your new system, you will run grub to install the bootloader in the boot area of your first hard drive. Then we save our new RAID configuration in <tt>/etc/mdadm.conf</tt> so it can be re-assembled automatically after we reboot.<br />
<br />
Copy the GRUB files into place and get into our chroot:<br />
<pre><br />
# cp -a /mnt/usr/lib/grub/i386-pc/* /mnt/boot/grub<br />
# sync<br />
# mount -o bind /dev /mnt/dev<br />
# mount -t proc none /mnt/proc<br />
# chroot /mnt /bin/bash<br />
</pre><br />
<br />
At this point, you may no longer be able to see keys you type at your console. I'm not sure of the reason for this (NOTE: try "chroot /mnt /bin/<shell>"), but you can fix it by typing <code>reset</code> at the prompt.<br />
<br />
Once you've got console echo back on, type:<br />
<pre><br />
# grub<br />
</pre><br />
<br />
After a short wait while grub does some looking around, it should come back with a grub prompt. Do:<br />
<br />
<pre><br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> quit<br />
</pre><br />
<br />
Now you need to save the RAID configuration so it can be re-assembled automatically each time the system boots. Previously, this was an unnecessary step in Arch because the RAID drivers were built into the kernel. But when they are loaded after the kernel boots (as modules), arrays are not autodetected; hence this configuration file.<br />
<br />
The default <code>/etc/mdadm.conf</code> should be pretty much empty (except for a lot of explanatory comments). All you need to do is capture the output from an mdadm query command and append it to the end of <code>mdadm.conf</code>.<br />
<br />
<pre><br />
# mdadm -D --scan >>/etc/mdadm.conf<br />
</pre><br />
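<br />
The appended lines should look roughly like the following (the UUIDs shown here are illustrative placeholders; yours will differ):<br />
<pre><br />
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx<br />
ARRAY /dev/md1 level=raid1 num-devices=3 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx<br />
ARRAY /dev/md2 level=raid1 num-devices=3 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx<br />
</pre><br />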
<br />
That's it. You can exit your chroot now by hitting <code>CTRL-D</code> or typing <code>exit</code>.<br />
<br />
=== Reboot ===<br />
<br />
The hard part is all over! Now remove the CD from your CD-ROM drive, and type:<br />
<pre><br />
# reboot<br />
</pre><br />
<br />
=== Install Grub on the Alternate Boot Drives===<br />
<br />
Once you've successfully booted your new system for the first time, you will want to install Grub onto the other two disks (or onto the other disk if you have only 2 HDDs) so that, in the event of disk failure, the system can be booted from another drive. The <code>device</code> command below temporarily maps (hd0) to each alternate drive, so that Grub writes a boot sector referring to whichever drive the BIOS actually boots from. Log in to your new system as root and do:<br />
<br />
<pre><br />
# grub<br />
grub> device (hd0) /dev/hdb<br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> device (hd0) /dev/hdc<br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> quit<br />
</pre><br />
<br />
=== Archive your Filesystem Partition Scheme ===<br />
<br />
Now that you're done, it's worth taking a second to archive the partition state of each of your drives. This makes it trivially easy to replace or rebuild a disk in the event that one fails. You do this with the <code>sfdisk</code> tool and the following steps:<br />
<br />
<pre><br />
# mkdir /etc/partitions<br />
# sfdisk --dump /dev/hda >/etc/partitions/disc0.partitions<br />
# sfdisk --dump /dev/hdb >/etc/partitions/disc1.partitions<br />
# sfdisk --dump /dev/hdc >/etc/partitions/disc2.partitions<br />
</pre><br />
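<br />
Should a drive fail later, you can restore its partition layout onto the replacement disk from the saved dump (a sketch, assuming the replacement shows up under the same device node):<br />
<pre><br />
# sfdisk /dev/hda < /etc/partitions/disc0.partitions<br />
</pre><br />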
<br />
== Management ==<br />
<br />
For LVM management, please have a look at [[Lvm]].<br />
<br />
== Mounting from a Live CD ==<br />
<br />
If you want to mount your RAID partition from a Live CD, use<br />
<br />
<pre><br />
mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3<br />
</pre><br />
<br />
(or whatever mdX and drives apply to you)<br />
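<br />
If your root filesystem is on LVM, as in this walkthrough, you'll also need to activate the Volume Group before you can mount it; something like:<br />
<pre><br />
vgscan<br />
vgchange -ay<br />
mount /dev/array/root /mnt<br />
</pre><br />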
<br />
== Conclusion==<br />
<br />
You're done! I hope you've succeeded in setting up Arch Linux on your server with RAID and LVM!<br />
<br />
== Credits==<br />
<br />
This document was written by Paul Mattal with significant help from others. Comments and suggestions are welcome at paul at archlinux dot org.<br />
<br />
Thanks to all who have contributed information and suggestions! This includes:<br />
<br />
* Carl Chave<br />
* Guillaume Darbonne
<hr />
<div>[[Category:Getting and installing Arch (English)]]<br />
[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
== Disclaimer==<br />
<br />
Installing a system with RAID is a complex process. Anything could go wrong. You could make a mistake, I could make a mistake, there could be a bug in something. Backup all your data first. Make sure only the drives involved in the installation are attached while doing the install. You've been warned!<br />
<br />
Also note that this document is up-to-date with all "Archisms" as of 2008.06 'Overlord'. It may not be applicable to previous releases of Arch Linux.<br />
<br />
=== RAID ===<br />
<br />
RAID (Redundant Array of Independent Disks) is designed to prevent data loss in the event of a hard disk failure. There are different "levels" of RAID. RAID 0 (striping) isn't really RAID at all, because it provides no redundancy. It does, however, provide speed benefit. We'll use RAID 0 for swap, on the assumption that you're using a desktop, where the speed increase is worth the possibility of having your system crash if one of your drives fails. On a server, you'd almost certainly want RAID 1 or RAID 5. The size of a RAID 0 array block device is the size of the smallest component partition times the number of component partitions.<br />
<br />
RAID 1 is the most straightforward RAID level: straight mirroring. As with other RAID levels, it only makes sense if the partitions are on different physical disk drives. If one of those drives fails, the block device provided by the RAID array will continue to function as normal. We'll be using RAID 1 for everything except swap. Note that RAID 1 is the only option for the boot partition, because bootloaders (which read the boot partition) don't understand RAID, but a RAID 1 component partition can be read as a normal partition. The size of a RAID 1 array block device is the size of the smallest component partition.<br />
<br />
RAID 5 is the only other RAID level you're likely to want. It requires 3 or more physical drives, and provides the redundancy of RAID 1 combined with the speed and size benefits of RAID 0. RAID 5 uses striping, like RAID 0, but also stores parity blocks distributed across each member disk. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 5 can withstand the loss of one member disk.<br />
<br />
'''ATTENTION: Having RAID does not mean you don't need backups - read the CAVEATS section below!'''<br />
<br />
=== LVM ===<br />
<br />
[http://sourceware.org/lvm2/ LVM] (Logical Volume Management) makes use of the [http://sources.redhat.com/dm/ device-mapper] feature of the Linux kernel. It provides a system of specifying partitions independently of the layout of the underlying disk. What this means for you is that you can extend and shrink partitions (subject to the filesystem you use allowing this) and add and remove partitions without worrying about whether you have enough contiguous space on a particular disk, without getting caught up in the problems of fdisking a disk that is in use (and wondering whether the kernel is using the old or new partition table) and without having to move other partition out of the way.<br />
<br />
This is strictly an ease-of-management issue: it doesn't provide any addition security. However, it sits nicely with the other two technologies we're using.<br />
<br />
Note that we're not using LVM for the boot partition (because of the bootloader problem).<br />
<br />
==CAVEATS==<br />
<br />
=== Security (redundancy) ===<br />
<br />
Again, RAID does not provide a guarantee that your data is safe. If there is a fire, if your computer is stolen or if you have multiple hard drive failures, RAID won't protect you. So '''make backups'''. Whether you use tape drives, DVDs, CDROMs or another computer, keep a copy of your data out of your computer (and preferably offsite) and keep it up to date. Get into the habit of making regular backups. If you organize the data on your computer in a way that separates things you are currently working on from "archived" things that are unlikely to change, you can back up the "current" stuff frequently, and the "archived" stuff occasionally.<br />
<br />
== General Approach==<br />
<br />
For starters, note that this document seeks primarily to give you a good example walkthrough of how to install Arch with Software RAID or LVM support for a typical case. It won't try to explain all the possible things you can do -- it's more to give you an example of something that will work that you can then tweak to your own purposes.<br />
<br />
In this example, the machine I'm using will have three similar IDE hard drives, at least 80GB each in size, installed as primary master, primary slave, and secondary master, with my installation CD-ROM drive as the secondary slave. I will assume these can be reached as /dev/hda, /dev/hdb, and /dev/hdc, and that the cdrom drive is /dev/cdrom.<br />
<br />
We'll create a 100MB /boot partition, a 2048MB (2GB) swap partition and a ~ 78GB root partition using LVM. The boot and swap partitions will be RAID1, while the root partition will be RAID5. Why RAID1? For boot, it's so you can boot the kernel from grub (which has no RAID drivers!), and for swap, it's for speed.<br />
<br />
Each RAID1 redundant partition will have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the size of a single one of these physical partitions. A RAID1 redundant partition with 3 physical partitions can lose any two of its physical partitions and still function.<br />
<br />
Each RAID5 redundant partition will also have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the combined size of ''two'' of these physical partitions, with the third drive being consumed to provide parity information. A RAID5 redundant partition with 3 physical partitions can lose any one of its physical partitions and still function.<br />
<br />
== Get the Arch Installer CD ==<br />
<br />
Please note that in order to use LVM, you need the <code>lvm2</code> and <code>dev-mapper</code> packages installed, otherwise you won't be able to see any LVM partitions on reboot, until you install those packages. Note that the Arch 0.7.1 Base installer ''does not'' contain these packages, but the Arch 0.7.1 Full installer does. So if you're going to use LVM, you'll need to download the bigger ISO. My example will describe you using the Full installer; the changes should be minimal if you wish to use the Base installer instead.<br />
<br />
== Outline ==<br />
<br />
Just to give you an idea of how all this will work, I'll outline the steps. The details for these will be filled in below.<br />
<br />
# Boot the Installer CD<br />
# Partition the Hard Drives<br />
# Create the RAID Redundant Partitions<br />
# Create and Mount the Main Filesystems<br />
# Setup LVM and Create the / LVM Volume<br />
# Install and Configure Arch<br />
# Install Grub on the Primary Hard Drive<br />
# Unmount Filesystems and Reboot<br />
# Install Grub on the Alternate Boot Drives<br />
# Archive your Filesystem Partition Scheme<br />
<br />
== Procedure==<br />
<br />
=== Boot the Installer CD===<br />
<br />
First, load all your drives in the machine. Then boot the Arch Linux 0.7 ''Full'' installation CD.<br />
<br />
At the syslinux boot prompt, hit enter: we want to use the SCSI kernel, which has support for RAID and LVM built in.<br />
<br />
So far, this is easy. Don't worry, it gets harder.<br />
<br />
=== Partition the Hard Drives===<br />
<br />
We'll use <code>cfdisk</code> to do this partitioning. We want to create 4 partitions on each of the three drive:<br />
<br />
Partition 1 (/boot): 100MB, type FD, bootable<br><br />
Partition 2 (swap): 2048MB, type FD<br><br />
Partition 3 (LVM): <Rest of the drive>, type FD<br />
<br />
Note that in general, in <code>cfdisk</code>, you can use the first letter of each <code>[[Bracketed Option]]</code> to select it; however, this is not true for the <code>[[Write]]</code> command, you have to hold SHIFT as well to select it.<br />
<br />
First run:<br />
<pre><br />
# cfdisk /dev/hda<br />
</pre><br />
<br />
Create each partition in order:<br />
<br />
# Select <code>'''New'''</code>.<br />
# Hit Enter to make it a <code>'''Primary'''</code> partition.<br />
# Type the appropriate size (in MB), or for Partition 3, just hit enter to select the remainder of the drive.<br />
# Hit Enter to choose to place the partition at the <code>'''Beginning'''</code>.<br />
# Select <code>'''Type'''</code>, hit enter to see the second page of the list, and then type <code>fd</code> for the Linux RAID Autodetect type.<br />
# ''For Partition 1 on each drive'', select <code>'''Bootable'''</code>.<br />
# Hit down arrow (selecting the remaining freespace) to go on to the next partition to be created.<br />
<br />
When you're done, select <code>'''Write'''</code>, and confirm <code>y-e-s</code> that you want to write the partition information to disk.<br />
<br />
Then select <code>'''Quit'''</code>.<br />
<br />
Repeat this for the other two drives:<br />
<br />
<pre><br />
# cfdisk /dev/hdb<br />
# cfdisk /dev/hdc<br />
</pre><br />
<br />
Create the same exact partitions on each disk. If a group of partitions of different sizes are assembled to create a redundant RAID partition, it ''will'' work, but the redundant partition will be in multiples of the size of the smallest one, leaving the rest of the allocated drive space to waste.<br />
<br />
=== Load the RAID Modules ===<br />
<br />
Before using <code>mdadm</code>, you need load the modules for the RAID levels you'll be using. In this example, we're using levels 1 and 5, so we'll load those. You can ignore any modprobe errors like <code>"cannot insert md-mod.ko: File exists"</code>. Busybox's modprobe can be a little slow sometimes.<br />
<br />
<pre><br />
# modprobe raid1<br />
# modprobe raid5<br />
</pre><br />
<br />
=== Create the RAID Redundant Partitions ===<br />
<br />
Now that you've created all the physical partitions, you're ready to set up RAID. The tool you use to create RAID arrays is <code>mdadm</code>.<br />
<br />
To create /dev/md0 (/):<br />
<pre><br />
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hda3 /dev/hdb3 /dev/hdc3<br />
</pre><br />
<br />
To create /dev/md1 (/boot):<br />
<pre><br />
# mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/hda1 /dev/hdb1 /dev/hdc1<br />
</pre><br />
<br />
To create /dev/md2 (swap):<br />
<pre><br />
# mdadm --create /dev/md2 --level=1 --raid-devices=3 /dev/hda2 /dev/hdb2 /dev/hdc2<br />
</pre><br />
<br />
At this point, you should have working RAID partitions. When you create the RAID partitions, they need to sync themselves so the contents of all three physical partitions are the same on all three drives. The hard drives lights will come on as they try to sync up. You can monitor the progress by typing:<br />
<pre><br />
# cat /proc/mdstat<br />
</pre><br />
<br />
You can also get particular information about, say, the root partition by typing:<br />
<pre><br />
# mdadm --misc --detail /dev/md0<br />
</pre><br />
<br />
You don't have to wait for synchronization to finish -- you may proceed with the installation while syncronization is still occurring. You can even reboot at the end of the installation with synchronization still going.<br />
<br />
=== Setup LVM and Create the / LVM Volume===<br />
<br />
If you are using an Arch Linux install CD <= 0.7.1, you have to create and mount a sysfs partition on /sys, to keep lvm from getting cranky. If you forget to do this, instead of giving you an intelligent error message, lvm will simply Segmentation fault at various inconvenient times. ''Be warned!''<br />
<br />
To mount the sysfs partition, do:<br />
<pre><br />
# mkdir /sys<br />
# mount -t sysfs none /sys<br />
</pre><br />
<br />
Make sure that the device-mapper module is loaded:<br />
<pre><br />
# modprobe dm-mod<br />
</pre><br />
<br />
Now you need to do is tell LVM you have a Physical Volume for it to use. It's really a virtual RAID volume (<code>/dev/md3</code>), but LVM doesn't know this, or really care. Do:<br />
<pre><br />
# lvm pvcreate /dev/md0<br />
</pre><br />
<br />
This might fail if you're using raid or creating PV on an existing Volume Group. If so you might want to add -ff option.<br />
<br />
LVM should report back that it has added the Physical Volume. You can confirm this with:<br />
<pre><br />
# lvm pvscan<br />
</pre><br />
<br />
Now it's time to create a Volume Group (which I'll call <code>array</code>) which has control over the LVM Physical Volume we created. Do:<br />
<pre><br />
# lvm vgcreate array /dev/md0<br />
</pre><br />
<br />
LVM should report that it has created the Volume Group <code>array</code>. You can confirm this with:<br />
<pre><br />
# lvm vgscan<br />
</pre><br />
<br />
Next, we create a Logical Volume called <code>root</code> in Volume Group <code>array</code> which is 50GB in size:<br />
<pre><br />
# lvm lvcreate --size 50G --name root array<br />
</pre><br />
<br />
LVM should report that it created the Logical Volume <code>root</code>. You can confirm this with:<br />
<pre><br />
# lvm lvscan<br />
</pre><br />
<br />
The lvm volume should now be available as <code>/dev/array/root</code>.<br />
<br />
=== Create and Mount the Filesystems ===<br />
'''When you are using a setup that is newer then 2008.03; this step is optional!'''<br />
<br />
I like Reiser (3.x), so I use it for almost everything. GRUB supports it for booting, and it handles small files well. It's about as well tested as EXT3. You can choose other types if you wish.<br />
<br />
To create /boot:<br />
<pre><br />
# mkreiserfs /dev/md1<br />
</pre><br />
<br />
To create swap space:<br />
<pre><br />
# mkswap /dev/md2<br />
</pre><br />
<br />
To create /:<br />
<pre><br />
# mkreiserfs /dev/array/root<br />
</pre><br />
<br />
Now, mount the boot and root partitions where the installer expects them:<br />
<pre><br />
# mount /dev/array/root /mnt<br />
# mkdir /mnt/boot<br />
# mount /dev/md1 /mnt/boot<br />
</pre><br />
<br />
We've created all our filesystems! And we're ready to install the OS!<br />
<br />
=== Install and Configure Arch ===<br />
<br />
This section doesn't attempt to teach you all about the Arch Installer. It leaves out some details here and there for brevity, but still seeks to be basically follow-able. If you're having trouble with the installer, you may wish to seek help elsewhere in the Wiki or forums.<br />
<br />
Here's the walkthrough:<br />
<br />
* Type <code>/arch/setup</code> to launch the main installer.<br />
* Select <code> < OK ></code> at the opening screen.<br />
* Select <code>1 CD_ROM</code> to install from CD-ROM (or <code>2 FTP</code> if you have a local Arch mirror on FTP).<br />
* If you have skipped the optional step (''Create and Mount the Filesystems'') above, and haven't created a fileystem yet, select <code>1 Prepare Hard Drive</code> > <code>3 Set Filesystem Mountpoints</code> and create your filesystems and mountpoints here'''<br />
* Now at the main menu, Select <code>2 Select Packages</code> and select all the packages in the ''base'' category, as well as the <code>mdadm</code> and <code>lvm2</code> packages from the ''system'' category. Note: mdadm & lvm2 are included in ''base'' category since arch-base-0.7.2.<br />
* Select <code>3 Install Packages</code>. This will take a little while.<br />
* Select <code>4 Configure System</code>:<br />
<br />
Add the ''raid'' and ''lvm2'' hook to the HOOKS list in /etc/mkinitcpio.conf (before 'filesystems', NOT after).<br />
See [[Configuring_mkinitcpio#Using_raid|Configuring mkinitpcio using RAID]] for more details. <br />
<br />
Edit your <code>/etc/rc.conf</code>. It should contain a <code>USELVM</code> entry already, which you should change to:<br />
<pre><br />
USELVM="yes"<br />
</pre><br />
''Please Note'': The <code>rc.sysinit</code> script that parses the <code>USELVM</code> variable entry will accept either <code>yes</code> or <code>YES</code>, however it will not accept mixed case. Please be sure you've got your capitalization correct.<br />
<br />
Edit your <code>/etc/fstab</code> to contain the entries:<br />
<pre><br />
/dev/array/root / reiserfs defaults 0 1<br />
/dev/md2 swap swap defaults 0 0<br />
/dev/md1 /boot reiserfs defaults 0 0<br />
</pre><br />
<br />
At this point, make any other configuration changes you need to other files.<br />
<br />
Then exit the configuration menu.<br />
<br />
Since you will not be installing Grub from the installer, select <code>7 Exit Install</code> to leave the installer program.<br />
<br />
Then specify the raid array you're booting from in /mnt/boot/grub/menu.lst like:<br />
# Example with /dev/array/root for ''/'' & /dev/md1 for ''/boot'':<br />
kernel /kernel26 root=/dev/array/root ro md=1,/dev/hda1,/dev/hdb1 md=0,/dev/hda3,/dev/hdb3<br />
<br />
=== Install Grub on the Primary Hard Drive (and save the RAID config) ===<br />
<br />
This is the last and final step before you have a bootable system!<br />
<br />
As an overview, the basic concept is to copy over the grub bootloader files into /boot/grub, mount a procfs and a device tree inside of /mnt, then chroot to /mnt so you're effectively inside your new system. Once in your new system, you will run grub to install the bootloader in the boot area of your first hard drive. Then we save our new RAID configuration in <tt>/etc/mdadm.conf</tt> so it can be re-assembled automatically after we reboot.<br />
<br />
Copy the GRUB files into place and get into our chroot:<br />
<pre><br />
# cp -a /mnt/usr/lib/grub/i386-pc/* /mnt/boot/grub<br />
# sync<br />
# mount -o bind /dev /mnt/dev<br />
# mount -t proc none /mnt/proc<br />
# chroot /mnt /bin/bash<br />
</pre><br />
<br />
At this point, you may no longer be able to see keys you type at your console. I'm not sure of the reason for this (NOTE: try "chroot /mnt /bin/<shell>"), but it you can fix it by typing <code>reset</code> at the prompt.<br />
<br />
Once you've got console echo back on, type:<br />
<pre><br />
# grub<br />
</pre><br />
<br />
After a short wait while grub does some looking around, it should come back with a grub prompt. Do:<br />
<br />
<pre><br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> quit<br />
</pre><br />
<br />
Now you need to save our RAID configuration so it can be re-assembled automatically each time we boot. Previously, this was an unnecessary step in Arch because the RAID drivers were built in to the kernel. But when loaded after the kernel boots (as modules), arrays are not autodetected. Hence this configuration file.<br />
<br />
The default <code>/etc/mdadm.conf</code> should be pretty much empty (except for a lot of explanatory comments). All you need to do is capture the output from an mdadm query command and append it to the end of <code>mdadm.conf</code>.<br />
<br />
<pre><br />
# mdadm -D --scan >>/etc/mdadm.conf<br />
</pre><br />
<br />
That's it. You can exit your chroot now by hitting <code>CTRL-D</code> or typing <code>exit</code>.<br />
<br />
=== Reboot ===<br />
<br />
The hard part is all over! Now remove the CD from your CD-ROM drive, and type:<br />
<pre><br />
# reboot<br />
</pre><br />
<br />
=== Install Grub on the Alternate Boot Drives===<br />
<br />
Once you've successfully booted your new system for the first time, you will want to install Grub onto the other two disks (or on the other disk if you have only 2 HDDs) so that, in the event of disk failure, the system can be booted from another drive. Log in to your new system as root and do:<br />
<br />
<pre><br />
# grub<br />
grub> device (hd0) /dev/hdb<br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> device (hd0) /dev/hdc<br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> quit<br />
</pre><br />
<br />
=== Archive your Filesystem Partition Scheme ===<br />
<br />
Now that you're done, it's worth taking a second to archive off the partition state of each of your drives. This guarantees that it will be trivially easy to replace/rebuild a disk in the event that one fails. You do this with the <code>sfdisk</code> tool and the following steps:<br />
<br />
<pre><br />
# mkdir /etc/partitions<br />
# sfdisk --dump /dev/hda >/etc/partitions/disc0.partitions<br />
# sfdisk --dump /dev/hdb >/etc/partitions/disc1.partitions<br />
# sfdisk --dump /dev/hdc >/etc/partitions/disc2.partitions<br />
</pre><br />
<br />
== Management ==<br />
<br />
For LVM management, please have a look at [[Lvm]]<br />
<br />
== Mounting from a Live CD ==<br />
<br />
If you want to mount your RAID partition from a Live CD, use<br />
<br />
<pre><br />
mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3<br />
</pre><br />
<br />
(or whatever mdX and drives apply to you)<br />
<br />
== Conclusion==<br />
<br />
You're done! I hope you've succeeded in setting up Arch Linux on your server with RAID and LVM!<br />
<br />
== Credits==<br />
<br />
This document was written by Paul Mattal with with significant help from others. Comments and suggestions are welcome at paul at archlinux dot org.<br />
<br />
Thanks to all who have contributed information and suggestions! This includes:<br />
<br />
* Carl Chave<br />
* Guillaume Darbonne</div>Tinyhttps://wiki.archlinux.org/index.php?title=Installing_with_Software_RAID_or_LVM&diff=53058Installing with Software RAID or LVM2008-11-10T11:52:19Z<p>Tiny: /* Setup LVM and Create the / LVM Volume */</p>
<hr />
<div>[[Category:Getting and installing Arch (English)]]<br />
[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
== Disclaimer==<br />
<br />
Installing a system with RAID is a complex process. Anything could go wrong. You could make a mistake, I could make a mistake, there could be a bug in something. Backup all your data first. Make sure only the drives involved in the installation are attached while doing the install. You've been warned!<br />
<br />
Also note that this document is up-to-date with all "Archisms" as of 2008.06 'Overlord'. It may not be applicable to previous releases of Arch Linux.<br />
<br />
=== RAID ===<br />
<br />
RAID (Redundant Array of Independent Disks) is designed to prevent data loss in the event of a hard disk failure. There are different "levels" of RAID. RAID 0 (striping) isn't really RAID at all, because it provides no redundancy. It does, however, provide speed benefit. We'll use RAID 0 for swap, on the assumption that you're using a desktop, where the speed increase is worth the possibility of having your system crash if one of your drives fails. On a server, you'd almost certainly want RAID 1 or RAID 5. The size of a RAID 0 array block device is the size of the smallest component partition times the number of component partitions.<br />
<br />
RAID 1 is the most straightforward RAID level: straight mirroring. As with other RAID levels, it only makes sense if the partitions are on different physical disk drives. If one of those drives fails, the block device provided by the RAID array will continue to function as normal. We'll be using RAID 1 for everything except swap. Note that RAID 1 is the only option for the boot partition, because bootloaders (which read the boot partition) don't understand RAID, but a RAID 1 component partition can be read as a normal partition. The size of a RAID 1 array block device is the size of the smallest component partition.<br />
<br />
RAID 5 is the only other RAID level you're likely to want. It requires 3 or more physical drives, and provides the redundancy of RAID 1 combined with the speed and size benefits of RAID 0. RAID 5 uses striping, like RAID 0, but also stores parity blocks distributed across each member disk. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 5 can withstand the loss of one member disk.<br />
<br />
'''ATTENTION: Having RAID does not mean you don't need backups - read the CAVEATS section below!'''<br />
<br />
=== LVM ===<br />
<br />
[http://sourceware.org/lvm2/ LVM] (Logical Volume Management) makes use of the [http://sources.redhat.com/dm/ device-mapper] feature of the Linux kernel. It provides a system of specifying partitions independently of the layout of the underlying disk. What this means for you is that you can extend and shrink partitions (subject to the filesystem you use allowing this) and add and remove partitions without worrying about whether you have enough contiguous space on a particular disk, without getting caught up in the problems of fdisking a disk that is in use (and wondering whether the kernel is using the old or new partition table) and without having to move other partition out of the way.<br />
<br />
This is strictly an ease-of-management issue: it doesn't provide any addition security. However, it sits nicely with the other two technologies we're using.<br />
<br />
Note that we're not using LVM for the boot partition (because of the bootloader problem).<br />
<br />
==CAVEATS==<br />
<br />
=== Security (redundancy) ===<br />
<br />
Again, RAID does not provide a guarantee that your data is safe. If there is a fire, if your computer is stolen or if you have multiple hard drive failures, RAID won't protect you. So '''make backups'''. Whether you use tape drives, DVDs, CDROMs or another computer, keep a copy of your data out of your computer (and preferably offsite) and keep it up to date. Get into the habit of making regular backups. If you organize the data on your computer in a way that separates things you are currently working on from "archived" things that are unlikely to change, you can back up the "current" stuff frequently, and the "archived" stuff occasionally.<br />
<br />
== General Approach==<br />
<br />
For starters, note that this document seeks primarily to give you a good example walkthrough of how to install Arch with Software RAID or LVM support for a typical case. It won't try to explain all the possible things you can do -- it's more to give you an example of something that will work that you can then tweak to your own purposes.<br />
<br />
In this example, the machine I'm using will have three similar IDE hard drives, at least 80GB each in size, installed as primary master, primary slave, and secondary master, with my installation CD-ROM drive as the secondary slave. I will assume these can be reached as /dev/hda, /dev/hdb, and /dev/hdc, and that the cdrom drive is /dev/cdrom.<br />
<br />
We'll create a 100MB /boot partition, a 2048MB (2GB) swap partition and a ~ 78GB root partition using LVM. The boot and swap partitions will be RAID1, while the root partition will be RAID5. Why RAID1? For boot, it's so you can boot the kernel from grub (which has no RAID drivers!), and for swap, it's for speed.<br />
<br />
Each RAID1 redundant partition will have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the size of a single one of these physical partitions. A RAID1 redundant partition with 3 physical partitions can lose any two of its physical partitions and still function.<br />
<br />
Each RAID5 redundant partition will also have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the combined size of ''two'' of these physical partitions, with the third drive being consumed to provide parity information. A RAID5 redundant partition with 3 physical partitions can lose any one of its physical partitions and still function.<br />
<br />
== Get the Arch Installer CD ==<br />
<br />
Please note that in order to use LVM, you need the <code>lvm2</code> and <code>dev-mapper</code> packages installed, otherwise you won't be able to see any LVM partitions on reboot, until you install those packages. Note that the Arch 0.7.1 Base installer ''does not'' contain these packages, but the Arch 0.7.1 Full installer does. So if you're going to use LVM, you'll need to download the bigger ISO. My example will describe you using the Full installer; the changes should be minimal if you wish to use the Base installer instead.<br />
<br />
== Outline ==<br />
<br />
Just to give you an idea of how all this will work, I'll outline the steps. The details for these will be filled in below.<br />
<br />
# Boot the Installer CD<br />
# Partition the Hard Drives<br />
# Create the RAID Redundant Partitions<br />
# Create and Mount the Main Filesystems<br />
# Setup LVM and Create the / LVM Volume<br />
# Install and Configure Arch<br />
# Install Grub on the Primary Hard Drive<br />
# Unmount Filesystems and Reboot<br />
# Install Grub on the Alternate Boot Drives<br />
# Archive your Filesystem Partition Scheme<br />
<br />
== Procedure==<br />
<br />
=== Boot the Installer CD===<br />
<br />
First, load all your drives in the machine. Then boot the Arch Linux 0.7 ''Full'' installation CD.<br />
<br />
At the syslinux boot prompt, hit enter: we want to use the SCSI kernel, which has support for RAID and LVM built in.<br />
<br />
So far, this is easy. Don't worry, it gets harder.<br />
<br />
=== Partition the Hard Drives===<br />
<br />
We'll use <code>cfdisk</code> to do this partitioning. We want to create 4 partitions on each of the three drive:<br />
<br />
Partition 1 (/boot): 100MB, type FD, bootable<br><br />
Partition 2 (swap): 2048MB, type FD<br><br />
Partition 3 (LVM): <Rest of the drive>, type FD<br />
<br />
Note that in general, in <code>cfdisk</code>, you can use the first letter of each <code>[[Bracketed Option]]</code> to select it; however, this is not true for the <code>[[Write]]</code> command, you have to hold SHIFT as well to select it.<br />
<br />
First run:<br />
<pre><br />
# cfdisk /dev/hda<br />
</pre><br />
<br />
Create each partition in order:<br />
<br />
# Select <code>'''New'''</code>.<br />
# Hit Enter to make it a <code>'''Primary'''</code> partition.<br />
# Type the appropriate size (in MB), or for Partition 3, just hit enter to select the remainder of the drive.<br />
# Hit Enter to choose to place the partition at the <code>'''Beginning'''</code>.<br />
# Select <code>'''Type'''</code>, hit enter to see the second page of the list, and then type <code>fd</code> for the Linux RAID Autodetect type.<br />
# ''For Partition 1 on each drive'', select <code>'''Bootable'''</code>.<br />
# Hit down arrow (selecting the remaining freespace) to go on to the next partition to be created.<br />
<br />
When you're done, select <code>'''Write'''</code>, and confirm <code>y-e-s</code> that you want to write the partition information to disk.<br />
<br />
Then select <code>'''Quit'''</code>.<br />
<br />
Repeat this for the other two drives:<br />
<br />
<pre><br />
# cfdisk /dev/hdb<br />
# cfdisk /dev/hdc<br />
</pre><br />
<br />
Create the same exact partitions on each disk. If a group of partitions of different sizes are assembled to create a redundant RAID partition, it ''will'' work, but the redundant partition will be in multiples of the size of the smallest one, leaving the rest of the allocated drive space to waste.<br />
<br />
=== Load the RAID Modules ===<br />
<br />
Before using <code>mdadm</code>, you need load the modules for the RAID levels you'll be using. In this example, we're using levels 1 and 5, so we'll load those. You can ignore any modprobe errors like <code>"cannot insert md-mod.ko: File exists"</code>. Busybox's modprobe can be a little slow sometimes.<br />
<br />
<pre><br />
# modprobe raid1<br />
# modprobe raid5<br />
</pre><br />
<br />
=== Create the RAID Redundant Partitions ===<br />
<br />
Now that you've created all the physical partitions, you're ready to set up RAID. The tool you use to create RAID arrays is <code>mdadm</code>.<br />
<br />
To create /dev/md0 (/):<br />
<pre><br />
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hda3 /dev/hdb3 /dev/hdc3<br />
</pre><br />
<br />
To create /dev/md1 (/boot):<br />
<pre><br />
# mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/hda1 /dev/hdb1 /dev/hdc1<br />
</pre><br />
<br />
To create /dev/md2 (swap):<br />
<pre><br />
# mdadm --create /dev/md2 --level=1 --raid-devices=3 /dev/hda2 /dev/hdb2 /dev/hdc2<br />
</pre><br />
<br />
At this point, you should have working RAID partitions. When you create the RAID partitions, they need to sync themselves so the contents of all three physical partitions are the same on all three drives. The hard drives lights will come on as they try to sync up. You can monitor the progress by typing:<br />
<pre><br />
# cat /proc/mdstat<br />
</pre><br />
<br />
You can also get particular information about, say, the root partition by typing:<br />
<pre><br />
# mdadm --misc --detail /dev/md0<br />
</pre><br />
<br />
You don't have to wait for synchronization to finish -- you may proceed with the installation while syncronization is still occurring. You can even reboot at the end of the installation with synchronization still going.<br />
<br />
=== Setup LVM and Create the / LVM Volume===<br />
<br />
If you are using an Arch Linux install CD <= 0.7.1, you have to create and mount a sysfs partition on /sys, to keep lvm from getting cranky. If you forget to do this, instead of giving you an intelligent error message, lvm will simply Segmentation fault at various inconvenient times. ''Be warned!''<br />
<br />
To mount the sysfs partition, do:<br />
<pre><br />
# mkdir /sys<br />
# mount -t sysfs none /sys<br />
</pre><br />
<br />
Make sure that the device-mapper module is loaded:<br />
<pre><br />
# modprobe dm-mod<br />
</pre><br />
<br />
Now you need to do is tell LVM you have a Physical Volume for it to use. It's really a virtual RAID volume (<code>/dev/md3</code>), but LVM doesn't know this, or really care. Do:<br />
<pre><br />
# lvm pvcreate /dev/md0<br />
</pre><br />
<br />
This might fail if you're using raid or creating PV on an existing Volume Group. If so you might want to add -ff option.<br />
<br />
LVM should report back that it has added the Physical Volume. You can confirm this with:<br />
<pre><br />
# lvm pvscan<br />
</pre><br />
<br />
<br />
Now it's time to create a Volume Group (which I'll call <code>array</code>) which has control over the LVM Physical Volume we created. Do:<br />
<pre><br />
# lvm vgcreate array /dev/md0<br />
</pre><br />
<br />
LVM should report that it has created the Volume Group <code>array</code>. You can confirm this with:<br />
<pre><br />
# lvm vgscan<br />
</pre><br />
<br />
Next, we create a Logical Volume called <code>root</code> in Volume Group <code>array</code> which is 50GB in size:<br />
<pre><br />
# lvm lvcreate --size 50G --name root array<br />
</pre><br />
<br />
LVM should report that it created the Logical Volume <code>root</code>. You can confirm this with:<br />
<pre><br />
# lvm lvscan<br />
</pre><br />
<br />
The lvm volume should now be available as <code>/dev/array/root</code>.<br />
<br />
=== Create and Mount the Filesystems ===<br />
'''When you are using a setup that is newer then 2008.03; this step is optional!'''<br />
<br />
I like Reiser (3.x), so I use it for almost everything. GRUB supports it for booting, and it handles small files well. It's about as well tested as EXT3. You can choose other types if you wish.<br />
<br />
To create /boot:<br />
<pre><br />
# mkreiserfs /dev/md1<br />
</pre><br />
<br />
To create swap space:<br />
<pre><br />
# mkswap /dev/md2<br />
</pre><br />
<br />
To create /:<br />
<pre><br />
# mkreiserfs /dev/array/root<br />
</pre><br />
<br />
Now, mount the boot and root partitions where the installer expects them:<br />
<pre><br />
# mount /dev/array/root /mnt<br />
# mkdir /mnt/boot<br />
# mount /dev/md1 /mnt/boot<br />
</pre><br />
<br />
We've created all our filesystems! And we're ready to install the OS!<br />
<br />
=== Install and Configure Arch ===<br />
<br />
This section doesn't attempt to teach you all about the Arch Installer. It leaves out some details here and there for brevity, but still seeks to be basically follow-able. If you're having trouble with the installer, you may wish to seek help elsewhere in the Wiki or forums.<br />
<br />
Here's the walkthrough:<br />
<br />
* Type <code>/arch/setup</code> to launch the main installer.<br />
* Select <code> < OK ></code> at the opening screen.<br />
* Select <code>1 CD_ROM</code> to install from CD-ROM (or <code>2 FTP</code> if you have a local Arch mirror on FTP).<br />
* If you have skipped the optional step (''Create and Mount the Filesystems'') above, and haven't created a fileystem yet, select <code>1 Prepare Hard Drive</code> > <code>3 Set Filesystem Mountpoints</code> and create your filesystems and mountpoints here'''<br />
* Now at the main menu, Select <code>2 Select Packages</code> and select all the packages in the ''base'' category, as well as the <code>mdadm</code> and <code>lvm2</code> packages from the ''system'' category. Note: mdadm & lvm2 are included in ''base'' category since arch-base-0.7.2.<br />
* Select <code>3 Install Packages</code>. This will take a little while.<br />
* Select <code>4 Configure System</code>:<br />
<br />
Add the ''raid'' and ''lvm2'' hook to the HOOKS list in /etc/mkinitcpio.conf (before 'filesystems', NOT after).<br />
See [[Configuring_mkinitcpio#Using_raid|Configuring mkinitpcio using RAID]] for more details. <br />
<br />
Edit your <code>/etc/rc.conf</code>. It should contain a <code>USELVM</code> entry already, which you should change to:<br />
<pre><br />
USELVM="yes"<br />
</pre><br />
''Please Note'': The <code>rc.sysinit</code> script that parses the <code>USELVM</code> variable entry will accept either <code>yes</code> or <code>YES</code>, however it will not accept mixed case. Please be sure you've got your capitalization correct.<br />
<br />
Edit your <code>/etc/fstab</code> to contain the entries:<br />
<pre><br />
/dev/array/root / reiserfs defaults 0 1<br />
/dev/md2 swap swap defaults 0 0<br />
/dev/md1 /boot reiserfs defaults 0 0<br />
</pre><br />
<br />
At this point, make any other configuration changes you need to other files.<br />
<br />
Then exit the configuration menu.<br />
<br />
Since you will not be installing Grub from the installer, select <code>7 Exit Install</code> to leave the installer program.<br />
<br />
Then specify the raid array you're booting from in /mnt/boot/grub/menu.lst like:<br />
# Example with /dev/array/root for ''/'' & /dev/md1 for ''/boot'':<br />
kernel /kernel26 root=/dev/array/root ro md=1,/dev/hda1,/dev/hdb1 md=0,/dev/hda3,/dev/hdb3<br />
<br />
=== Install Grub on the Primary Hard Drive (and save the RAID config) ===<br />
<br />
This is the last and final step before you have a bootable system!<br />
<br />
As an overview, the basic concept is to copy over the grub bootloader files into /boot/grub, mount a procfs and a device tree inside of /mnt, then chroot to /mnt so you're effectively inside your new system. Once in your new system, you will run grub to install the bootloader in the boot area of your first hard drive. Then we save our new RAID configuration in <tt>/etc/mdadm.conf</tt> so it can be re-assembled automatically after we reboot.<br />
<br />
Copy the GRUB files into place and get into our chroot:<br />
<pre><br />
# cp -a /mnt/usr/lib/grub/i386-pc/* /mnt/boot/grub<br />
# sync<br />
# mount -o bind /dev /mnt/dev<br />
# mount -t proc none /mnt/proc<br />
# chroot /mnt /bin/bash<br />
</pre><br />
<br />
At this point, you may no longer be able to see keys you type at your console. I'm not sure of the reason for this (NOTE: try "chroot /mnt /bin/<shell>"), but it you can fix it by typing <code>reset</code> at the prompt.<br />
<br />
Once you've got console echo back on, type:<br />
<pre><br />
# grub<br />
</pre><br />
<br />
After a short wait while grub does some looking around, it should come back with a grub prompt. Do:<br />
<br />
<pre><br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> quit<br />
</pre><br />
<br />
Now you need to save our RAID configuration so it can be re-assembled automatically each time we boot. Previously, this was an unnecessary step in Arch because the RAID drivers were built in to the kernel. But when loaded after the kernel boots (as modules), arrays are not autodetected. Hence this configuration file.<br />
<br />
The default <code>/etc/mdadm.conf</code> should be pretty much empty (except for a lot of explanatory comments). All you need to do is capture the output from an mdadm query command and append it to the end of <code>mdadm.conf</code>.<br />
<br />
<pre><br />
# mdadm -D --scan >>/etc/mdadm.conf<br />
</pre><br />
<br />
That's it. You can exit your chroot now by hitting <code>CTRL-D</code> or typing <code>exit</code>.<br />
<br />
=== Reboot ===<br />
<br />
The hard part is all over! Now remove the CD from your CD-ROM drive, and type:<br />
<pre><br />
# reboot<br />
</pre><br />
<br />
=== Install Grub on the Alternate Boot Drives===<br />
<br />
Once you've successfully booted your new system for the first time, you will want to install Grub onto the other two disks (or on the other disk if you have only 2 HDDs) so that, in the event of disk failure, the system can be booted from another drive. Log in to your new system as root and do:<br />
<br />
<pre><br />
# grub<br />
grub> device (hd0) /dev/hdb<br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> device (hd0) /dev/hdc<br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> quit<br />
</pre><br />
<br />
=== Archive your Filesystem Partition Scheme ===<br />
<br />
Now that you're done, it's worth taking a second to archive the partition state of each of your drives. This makes it trivially easy to replace/rebuild a disk in the event that one fails. You do this with the <code>sfdisk</code> tool and the following steps:<br />
<br />
<pre><br />
# mkdir /etc/partitions<br />
# sfdisk --dump /dev/hda >/etc/partitions/disc0.partitions<br />
# sfdisk --dump /dev/hdb >/etc/partitions/disc1.partitions<br />
# sfdisk --dump /dev/hdc >/etc/partitions/disc2.partitions<br />
</pre><br />
<br />
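If a disk ever fails, you can restore the saved layout onto its replacement with sfdisk as well. For example (assuming the replacement disk shows up as /dev/hda):<br />
<pre><br />
# sfdisk /dev/hda < /etc/partitions/disc0.partitions<br />
</pre><br />
<br />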
== Management ==<br />
<br />
For LVM management, please have a look at [[Lvm]].<br />
<br />
== Mounting from a Live CD ==<br />
<br />
If you want to mount your RAID partition from a Live CD, use<br />
<br />
<pre><br />
mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3<br />
</pre><br />
<br />
(or whatever mdX and drives apply to you)<br />
<br />
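If you don't remember which partitions belong to which array, mdadm can read the RAID superblocks and report them (the exact output format varies with the mdadm version):<br />
<pre><br />
mdadm --examine --scan<br />
</pre><br />
<br />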
== Conclusion==<br />
<br />
You're done! I hope you've succeeded in setting up Arch Linux on your server with RAID and LVM!<br />
<br />
== Credits==<br />
<br />
This document was written by Paul Mattal with significant help from others. Comments and suggestions are welcome at paul at archlinux dot org.<br />
<br />
Thanks to all who have contributed information and suggestions! This includes:<br />
<br />
* Carl Chave<br />
* Guillaume Darbonne</div>Tinyhttps://wiki.archlinux.org/index.php?title=Installing_with_Software_RAID_or_LVM&diff=53057Installing with Software RAID or LVM2008-11-10T11:50:19Z<p>Tiny: /* Setup LVM and Create the / LVM Volume */</p>
<hr />
<div>[[Category:Getting and installing Arch (English)]]<br />
[[Category:Storage (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
== Disclaimer==<br />
<br />
Installing a system with RAID is a complex process. Anything could go wrong: you could make a mistake, I could make a mistake, there could be a bug in something. Back up all your data first, and make sure only the drives involved in the installation are attached while doing the install. You've been warned!<br />
<br />
Also note that this document is up-to-date with all "Archisms" as of 2008.06 'Overlord'. It may not be applicable to previous releases of Arch Linux.<br />
<br />
=== RAID ===<br />
<br />
RAID (Redundant Array of Independent Disks) is designed to prevent data loss in the event of a hard disk failure. There are different "levels" of RAID. RAID 0 (striping) isn't really RAID at all, because it provides no redundancy; it does, however, provide a speed benefit. On a desktop you might consider RAID 0 for swap, where the speed increase can be worth the possibility of having your system crash if one of your drives fails; this walkthrough, however, keeps swap on RAID 1 as well (and on a server, you'd almost certainly want RAID 1 or RAID 5). The size of a RAID 0 array block device is the size of the smallest component partition times the number of component partitions.<br />
<br />
RAID 1 is the most straightforward RAID level: straight mirroring. As with other RAID levels, it only makes sense if the partitions are on different physical disk drives. If one of those drives fails, the block device provided by the RAID array will continue to function as normal. We'll be using RAID 1 for everything except the root partition, which will be RAID 5. Note that RAID 1 is the only option for the boot partition, because bootloaders (which read the boot partition) don't understand RAID, but a RAID 1 component partition can be read as a normal partition. The size of a RAID 1 array block device is the size of the smallest component partition.<br />
<br />
RAID 5 is the only other RAID level you're likely to want. It requires 3 or more physical drives, and provides the redundancy of RAID 1 combined with the speed and size benefits of RAID 0. RAID 5 uses striping, like RAID 0, but also stores parity blocks distributed across each member disk. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 5 can withstand the loss of one member disk.<br />
<br />
'''ATTENTION: Having RAID does not mean you don't need backups - read the CAVEATS section below!'''<br />
<br />
=== LVM ===<br />
<br />
[http://sourceware.org/lvm2/ LVM] (Logical Volume Management) makes use of the [http://sources.redhat.com/dm/ device-mapper] feature of the Linux kernel. It provides a way of specifying partitions independently of the layout of the underlying disks. What this means for you is that you can extend and shrink partitions (subject to the filesystem you use allowing this) and add and remove partitions without worrying about whether you have enough contiguous space on a particular disk, without getting caught up in the problems of fdisking a disk that is in use (and wondering whether the kernel is using the old or new partition table), and without having to move other partitions out of the way.<br />
<br />
This is strictly an ease-of-management feature: it doesn't provide any additional security (redundancy). However, it sits nicely alongside the RAID setup we're using.<br />
<br />
Note that we're not using LVM for the boot partition (because of the bootloader problem).<br />
<br />
==CAVEATS==<br />
<br />
=== Security (redundancy) ===<br />
<br />
Again, RAID does not provide a guarantee that your data is safe. If there is a fire, if your computer is stolen or if you have multiple hard drive failures, RAID won't protect you. So '''make backups'''. Whether you use tape drives, DVDs, CDROMs or another computer, keep a copy of your data out of your computer (and preferably offsite) and keep it up to date. Get into the habit of making regular backups. If you organize the data on your computer in a way that separates things you are currently working on from "archived" things that are unlikely to change, you can back up the "current" stuff frequently, and the "archived" stuff occasionally.<br />
<br />
== General Approach==<br />
<br />
For starters, note that this document seeks primarily to give you a good example walkthrough of how to install Arch with Software RAID or LVM support for a typical case. It won't try to explain all the possible things you can do -- it's more to give you an example of something that will work that you can then tweak to your own purposes.<br />
<br />
In this example, the machine I'm using will have three similar IDE hard drives, at least 80GB each in size, installed as primary master, primary slave, and secondary master, with my installation CD-ROM drive as the secondary slave. I will assume these can be reached as /dev/hda, /dev/hdb, and /dev/hdc, and that the cdrom drive is /dev/cdrom.<br />
<br />
We'll create a 100MB /boot partition, a 2048MB (2GB) swap partition and a ~78GB root partition using LVM. The boot and swap partitions will be RAID1, while the root partition will be RAID5. Why RAID1? For boot, it's so you can boot the kernel from grub (which has no RAID drivers!), and for swap, it's so that a single drive failure won't take down the running system.<br />
<br />
Each RAID1 redundant partition will have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the size of a single one of these physical partitions. A RAID1 redundant partition with 3 physical partitions can lose any two of its physical partitions and still function.<br />
<br />
Each RAID5 redundant partition will also have three physical partitions, all the same size, one on each of the drives. The total storage capacity will be the combined size of ''two'' of these physical partitions, with the third drive being consumed to provide parity information. A RAID5 redundant partition with 3 physical partitions can lose any one of its physical partitions and still function.<br />
<br />
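As a concrete example: with three 80GB component partitions, a RAID1 array yields 80GB of usable space, while a RAID5 array yields roughly 160GB, since two partitions' worth of space hold data and one partition's worth holds parity.<br />
<br />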
== Get the Arch Installer CD ==<br />
<br />
Please note that in order to use LVM, you need the <code>lvm2</code> and <code>device-mapper</code> packages installed; otherwise you won't be able to see any LVM partitions on reboot until you install those packages. Note that the Arch 0.7.1 Base installer ''does not'' contain these packages, but the Arch 0.7.1 Full installer does. So if you're going to use LVM, you'll need to download the bigger ISO. This example assumes you're using the Full installer; the changes should be minimal if you wish to use the Base installer instead.<br />
<br />
== Outline ==<br />
<br />
Just to give you an idea of how all this will work, I'll outline the steps. The details for these will be filled in below.<br />
<br />
# Boot the Installer CD<br />
# Partition the Hard Drives<br />
# Create the RAID Redundant Partitions<br />
# Create and Mount the Main Filesystems<br />
# Setup LVM and Create the / LVM Volume<br />
# Install and Configure Arch<br />
# Install Grub on the Primary Hard Drive<br />
# Unmount Filesystems and Reboot<br />
# Install Grub on the Alternate Boot Drives<br />
# Archive your Filesystem Partition Scheme<br />
<br />
== Procedure==<br />
<br />
=== Boot the Installer CD===<br />
<br />
First, load all your drives in the machine. Then boot the Arch Linux 0.7 ''Full'' installation CD.<br />
<br />
At the syslinux boot prompt, hit enter: we want to use the SCSI kernel, which has support for RAID and LVM built in.<br />
<br />
So far, this is easy. Don't worry, it gets harder.<br />
<br />
=== Partition the Hard Drives===<br />
<br />
We'll use <code>cfdisk</code> to do this partitioning. We want to create three partitions on each of the three drives:<br />
<br />
Partition 1 (/boot): 100MB, type FD, bootable<br><br />
Partition 2 (swap): 2048MB, type FD<br><br />
Partition 3 (LVM): <Rest of the drive>, type FD<br />
<br />
Note that in general, in <code>cfdisk</code>, you can use the first letter of each <code>[[Bracketed Option]]</code> to select it; however, this is not true for the <code>[[Write]]</code> command, for which you have to hold SHIFT as well (i.e. type an uppercase W).<br />
<br />
First run:<br />
<pre><br />
# cfdisk /dev/hda<br />
</pre><br />
<br />
Create each partition in order:<br />
<br />
# Select <code>'''New'''</code>.<br />
# Hit Enter to make it a <code>'''Primary'''</code> partition.<br />
# Type the appropriate size (in MB), or for Partition 3, just hit enter to select the remainder of the drive.<br />
# Hit Enter to choose to place the partition at the <code>'''Beginning'''</code>.<br />
# Select <code>'''Type'''</code>, hit enter to see the second page of the list, and then type <code>fd</code> for the Linux RAID Autodetect type.<br />
# ''For Partition 1 on each drive'', select <code>'''Bootable'''</code>.<br />
# Hit down arrow (selecting the remaining freespace) to go on to the next partition to be created.<br />
<br />
When you're done, select <code>'''Write'''</code>, and confirm <code>y-e-s</code> that you want to write the partition information to disk.<br />
<br />
Then select <code>'''Quit'''</code>.<br />
<br />
Repeat this for the other two drives:<br />
<br />
<pre><br />
# cfdisk /dev/hdb<br />
# cfdisk /dev/hdc<br />
</pre><br />
<br />
Create the same exact partitions on each disk. If a group of partitions of different sizes are assembled to create a redundant RAID partition, it ''will'' work, but the redundant partition will be in multiples of the size of the smallest one, leaving the rest of the allocated drive space to waste.<br />
<br />
=== Load the RAID Modules ===<br />
<br />
Before using <code>mdadm</code>, you need to load the modules for the RAID levels you'll be using. In this example, we're using levels 1 and 5, so we'll load those. You can ignore any modprobe errors like <code>"cannot insert md-mod.ko: File exists"</code>. Busybox's modprobe can be a little slow sometimes.<br />
<br />
<pre><br />
# modprobe raid1<br />
# modprobe raid5<br />
</pre><br />
<br />
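You can confirm that the modules loaded (assuming <code>lsmod</code> is available in your install environment):<br />
<pre><br />
# lsmod | grep raid<br />
</pre><br />
<br />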
=== Create the RAID Redundant Partitions ===<br />
<br />
Now that you've created all the physical partitions, you're ready to set up RAID. The tool you use to create RAID arrays is <code>mdadm</code>.<br />
<br />
To create /dev/md0 (/):<br />
<pre><br />
# mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hda3 /dev/hdb3 /dev/hdc3<br />
</pre><br />
<br />
To create /dev/md1 (/boot):<br />
<pre><br />
# mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/hda1 /dev/hdb1 /dev/hdc1<br />
</pre><br />
<br />
To create /dev/md2 (swap):<br />
<pre><br />
# mdadm --create /dev/md2 --level=1 --raid-devices=3 /dev/hda2 /dev/hdb2 /dev/hdc2<br />
</pre><br />
<br />
At this point, you should have working RAID partitions. When you create the RAID partitions, they need to sync themselves so the contents of all three physical partitions are the same on all three drives. The hard drives lights will come on as they try to sync up. You can monitor the progress by typing:<br />
<pre><br />
# cat /proc/mdstat<br />
</pre><br />
<br />
You can also get particular information about, say, the root partition by typing:<br />
<pre><br />
# mdadm --misc --detail /dev/md0<br />
</pre><br />
<br />
You don't have to wait for synchronization to finish -- you may proceed with the installation while synchronization is still occurring. You can even reboot at the end of the installation with synchronization still going.<br />
<br />
=== Setup LVM and Create the / LVM Volume===<br />
<br />
If you are using an Arch Linux install CD <= 0.7.1, you have to create and mount a sysfs partition on /sys to keep lvm from getting cranky. If you forget to do this, instead of giving you an intelligent error message, lvm will simply die with a segmentation fault at various inconvenient times. ''Be warned!''<br />
<br />
To mount the sysfs partition, do:<br />
<pre><br />
# mkdir /sys<br />
# mount -t sysfs none /sys<br />
</pre><br />
<br />
Make sure that the device-mapper module is loaded:<br />
<pre><br />
# modprobe dm-mod<br />
</pre><br />
<br />
Now you need to tell LVM you have a Physical Volume for it to use. It's really a virtual RAID volume (<code>/dev/md0</code>), but LVM doesn't know this, or really care. Do:<br />
<pre><br />
# lvm pvcreate /dev/md0<br />
</pre><br />
<br />
LVM should report back that it has added the Physical Volume. You can confirm this with:<br />
<pre><br />
# lvm pvscan<br />
</pre><br />
<br />
If you're creating a PV on an existing volume group, you might want to add the <code>-ff</code> option.<br />
<br />
Now it's time to create a Volume Group (which I'll call <code>array</code>) which has control over the LVM Physical Volume we created. Do:<br />
<pre><br />
# lvm vgcreate array /dev/md0<br />
</pre><br />
<br />
LVM should report that it has created the Volume Group <code>array</code>. You can confirm this with:<br />
<pre><br />
# lvm vgscan<br />
</pre><br />
<br />
Next, we create a Logical Volume called <code>root</code> in Volume Group <code>array</code> which is 50GB in size:<br />
<pre><br />
# lvm lvcreate --size 50G --name root array<br />
</pre><br />
<br />
LVM should report that it created the Logical Volume <code>root</code>. You can confirm this with:<br />
<pre><br />
# lvm lvscan<br />
</pre><br />
<br />
The lvm volume should now be available as <code>/dev/array/root</code>.<br />
<br />
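As an aside, this is where LVM pays off later: if root ever needs more space, the volume can be grown after installation. A sketch, assuming free space remains in the volume group and the reiserfs filesystem created in the next step:<br />
<pre><br />
# lvm lvextend -L +10G /dev/array/root<br />
# resize_reiserfs /dev/array/root<br />
</pre><br />
<br />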
=== Create and Mount the Filesystems ===<br />
'''When you are using a setup that is newer than 2008.03, this step is optional!'''<br />
<br />
I like Reiser (3.x), so I use it for almost everything. GRUB supports it for booting, and it handles small files well. It's about as well tested as EXT3. You can choose other types if you wish.<br />
<br />
To create /boot:<br />
<pre><br />
# mkreiserfs /dev/md1<br />
</pre><br />
<br />
To create swap space:<br />
<pre><br />
# mkswap /dev/md2<br />
</pre><br />
<br />
To create /:<br />
<pre><br />
# mkreiserfs /dev/array/root<br />
</pre><br />
<br />
Now, mount the boot and root partitions where the installer expects them:<br />
<pre><br />
# mount /dev/array/root /mnt<br />
# mkdir /mnt/boot<br />
# mount /dev/md1 /mnt/boot<br />
</pre><br />
<br />
We've created all our filesystems! And we're ready to install the OS!<br />
<br />
=== Install and Configure Arch ===<br />
<br />
This section doesn't attempt to teach you all about the Arch Installer. It leaves out some details here and there for brevity, but should still be easy to follow. If you're having trouble with the installer, you may wish to seek help elsewhere in the Wiki or forums.<br />
<br />
Here's the walkthrough:<br />
<br />
* Type <code>/arch/setup</code> to launch the main installer.<br />
* Select <code> < OK ></code> at the opening screen.<br />
* Select <code>1 CD_ROM</code> to install from CD-ROM (or <code>2 FTP</code> if you have a local Arch mirror on FTP).<br />
* If you have skipped the optional step (''Create and Mount the Filesystems'') above and haven't created filesystems yet, select <code>1 Prepare Hard Drive</code> > <code>3 Set Filesystem Mountpoints</code> and create your filesystems and mountpoints there.<br />
* Now at the main menu, select <code>2 Select Packages</code> and select all the packages in the ''base'' category, as well as the <code>mdadm</code> and <code>lvm2</code> packages from the ''system'' category. Note: mdadm & lvm2 have been included in the ''base'' category since arch-base-0.7.2.<br />
* Select <code>3 Install Packages</code>. This will take a little while.<br />
* Select <code>4 Configure System</code>:<br />
<br />
Add the ''raid'' and ''lvm2'' hooks to the HOOKS list in /etc/mkinitcpio.conf (before 'filesystems', NOT after).<br />
See [[Configuring_mkinitcpio#Using_raid|Configuring mkinitcpio using RAID]] for more details. <br />
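<br />
For reference, the resulting HOOKS line might look something like this (a sketch only; the hooks around ''raid'' and ''lvm2'' are illustrative and depend on your hardware):<br />
<pre><br />
HOOKS="base udev autodetect pata scsi sata raid lvm2 filesystems"<br />
</pre><br />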
<br />
Edit your <code>/etc/rc.conf</code>. It should contain a <code>USELVM</code> entry already, which you should change to:<br />
<pre><br />
USELVM="yes"<br />
</pre><br />
''Please Note'': The <code>rc.sysinit</code> script that parses the <code>USELVM</code> variable will accept either <code>yes</code> or <code>YES</code>, but not mixed case. Be sure your capitalization is correct.<br />
<br />
Edit your <code>/etc/fstab</code> to contain the entries:<br />
<pre><br />
/dev/array/root / reiserfs defaults 0 1<br />
/dev/md2 swap swap defaults 0 0<br />
/dev/md1 /boot reiserfs defaults 0 0<br />
</pre><br />
<br />
At this point, make any other configuration changes you need to other files.<br />
<br />
Then exit the configuration menu.<br />
<br />
Since you will not be installing Grub from the installer, select <code>7 Exit Install</code> to leave the installer program.<br />
<br />
Then specify the RAID arrays you're booting from in /mnt/boot/grub/menu.lst. For example, with /dev/array/root for ''/'' and /dev/md1 for ''/boot'':<br />
<pre><br />
kernel /kernel26 root=/dev/array/root ro md=1,/dev/hda1,/dev/hdb1,/dev/hdc1 md=0,/dev/hda3,/dev/hdb3,/dev/hdc3<br />
</pre><br />
<br />
=== Install Grub on the Primary Hard Drive (and save the RAID config) ===<br />
<br />
This is the final step before you have a bootable system!<br />
<br />
As an overview, the basic concept is to copy over the grub bootloader files into /boot/grub, mount a procfs and a device tree inside of /mnt, then chroot to /mnt so you're effectively inside your new system. Once in your new system, you will run grub to install the bootloader in the boot area of your first hard drive. Then we save our new RAID configuration in <tt>/etc/mdadm.conf</tt> so it can be re-assembled automatically after we reboot.<br />
<br />
Copy the GRUB files into place and get into our chroot:<br />
<pre><br />
# cp -a /mnt/usr/lib/grub/i386-pc/* /mnt/boot/grub<br />
# sync<br />
# mount -o bind /dev /mnt/dev<br />
# mount -t proc none /mnt/proc<br />
# chroot /mnt /bin/bash<br />
</pre><br />
<br />
At this point, you may no longer see the keys you type echoed at your console. I'm not sure of the reason for this (NOTE: try "chroot /mnt /bin/<shell>"), but you can fix it by typing <code>reset</code> at the prompt.<br />
<br />
Once you've got console echo back on, type:<br />
<pre><br />
# grub<br />
</pre><br />
<br />
After a short wait while grub does some looking around, it should come back with a grub prompt. Do:<br />
<br />
<pre><br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> quit<br />
</pre><br />
<br />
Now you need to save your RAID configuration so it can be re-assembled automatically each time you boot. Previously, this step was unnecessary in Arch because the RAID drivers were built into the kernel. But when they are loaded after the kernel boots (as modules), arrays are not autodetected. Hence this configuration file.<br />
<br />
The default <code>/etc/mdadm.conf</code> should be pretty much empty (except for a lot of explanatory comments). All you need to do is capture the output from an mdadm query command and append it to the end of <code>mdadm.conf</code>.<br />
<br />
<pre><br />
# mdadm -D --scan >>/etc/mdadm.conf<br />
</pre><br />
<br />
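The appended lines should look roughly like the following (the UUIDs are placeholders; yours will differ):<br />
<pre><br />
ARRAY /dev/md0 level=raid5 num-devices=3 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx<br />
ARRAY /dev/md1 level=raid1 num-devices=3 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx<br />
ARRAY /dev/md2 level=raid1 num-devices=3 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx<br />
</pre><br />
<br />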
That's it. You can exit your chroot now by hitting <code>CTRL-D</code> or typing <code>exit</code>.<br />
<br />
=== Reboot ===<br />
<br />
The hard part is all over! Now remove the CD from your CD-ROM drive, and type:<br />
<pre><br />
# reboot<br />
</pre><br />
<br />
=== Install Grub on the Alternate Boot Drives===<br />
<br />
Once you've successfully booted your new system for the first time, you will want to install Grub onto the other two disks (or on the other disk if you have only 2 HDDs) so that, in the event of disk failure, the system can be booted from another drive. Log in to your new system as root and do:<br />
<br />
<pre><br />
# grub<br />
grub> device (hd0) /dev/hdb<br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> device (hd0) /dev/hdc<br />
grub> root (hd0,0)<br />
grub> setup (hd0)<br />
grub> quit<br />
</pre><br />
<br />
=== Archive your Filesystem Partition Scheme ===<br />
<br />
Now that you're done, it's worth taking a second to archive the partition state of each of your drives. This makes it trivially easy to replace/rebuild a disk in the event that one fails. You do this with the <code>sfdisk</code> tool and the following steps:<br />
<br />
<pre><br />
# mkdir /etc/partitions<br />
# sfdisk --dump /dev/hda >/etc/partitions/disc0.partitions<br />
# sfdisk --dump /dev/hdb >/etc/partitions/disc1.partitions<br />
# sfdisk --dump /dev/hdc >/etc/partitions/disc2.partitions<br />
</pre><br />
<br />
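If a disk ever fails, you can restore the saved layout onto its replacement with sfdisk as well. For example (assuming the replacement disk shows up as /dev/hda):<br />
<pre><br />
# sfdisk /dev/hda < /etc/partitions/disc0.partitions<br />
</pre><br />
<br />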
== Management ==<br />
<br />
For LVM management, please have a look at [[Lvm]].<br />
<br />
== Mounting from a Live CD ==<br />
<br />
If you want to mount your RAID partition from a Live CD, use<br />
<br />
<pre><br />
mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3<br />
</pre><br />
<br />
(or whatever mdX and drives apply to you)<br />
<br />
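If you don't remember which partitions belong to which array, mdadm can read the RAID superblocks and report them (the exact output format varies with the mdadm version):<br />
<pre><br />
mdadm --examine --scan<br />
</pre><br />
<br />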
== Conclusion==<br />
<br />
You're done! I hope you've succeeded in setting up Arch Linux on your server with RAID and LVM!<br />
<br />
== Credits==<br />
<br />
This document was written by Paul Mattal with significant help from others. Comments and suggestions are welcome at paul at archlinux dot org.<br />
<br />
Thanks to all who have contributed information and suggestions! This includes:<br />
<br />
* Carl Chave<br />
* Guillaume Darbonne</div>Tiny