Note: This article or section is out of date (reason: kernel26).

This document explains how to set up Xen within Arch Linux.

What is Xen?

According to the Xen development team:

"The Xen hypervisor, the powerful open source industry standard for virtualization, offers a powerful, efficient, and secure feature set for virtualization of x86, x86_64, IA64, PowerPC, and other CPU architectures. It supports a wide range of guest operating systems including Windows®, Linux®, Solaris®, and various versions of the BSD operating systems."

The Xen hypervisor is a thin layer of software which emulates a computer architecture. It is started by the boot loader and allows several operating systems to run simultaneously on top of it. Once the Xen hypervisor is loaded, it starts the "dom0" (for "domain 0"), or privileged domain, which in our case runs a modified Linux kernel (other possible dom0 operating systems are NetBSD and OpenSolaris). The dom0 kernel in the AUR is currently based on a recent version of Linux kernel 2.6 and there is a more unstable -dev version as well; hardware must, of course, be supported by this kernel to run Xen. Once the dom0 has started, one or more "domU" (unprivileged) domains can be started and controlled from dom0.
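
Once such a setup is running, the split between the privileged and unprivileged domains is easy to see from dom0 with the xm toolstack; a minimal check (run in dom0, which always appears as Domain-0):

xm list    ## Domain-0 is the privileged domain, every other entry is a domU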

Setting up Xen

Installing the necessary packages

Before building xen, be sure you have gcc, make, patch, and python2 installed:

pacman -S gcc make patch python2

The new xen package contains Xen 4 and resolves almost all necessary dependencies automatically. However, due to the change of the default Python version in Arch Linux, some of the old scripts throw errors when executed. To solve this, install python2.5 from the AUR.
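
A minimal sketch of pulling both packages from the AUR with yaourt (the AUR helper used later on this page; any other helper or a manual makepkg build works just as well):

yaourt -S python2.5
yaourt -S xen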

When asked to edit the PKGBUILD (preferably with nano), do not forget to replace this:

make PYTHON=python2 DESTDIR=$pkgdir  install-xen
make PYTHON=python2 DESTDIR=$pkgdir  install-tools
make PYTHON=python2 DESTDIR=$pkgdir  install-docs

with this:

make PYTHON=python2.5 DESTDIR=$pkgdir  install-xen
make PYTHON=python2.5 DESTDIR=$pkgdir  install-tools
make PYTHON=python2.5 DESTDIR=$pkgdir  install-docs
sed -i -e "s|#![ ]*/usr/bin/python$|#!/usr/bin/python2.5|" \
-e "s|#![ ]*/usr/bin/env python$|#!/usr/bin/env python2.5|" \
$(find $pkgdir -name '*.py')

Xen-tools is a collection of simple Perl scripts which allow you to easily create new guest Xen domains; xen-tools is also available in the AUR.

The next step is to build and install the dom0 kernel. To do so, build the kernel26-xen-dom0 package from the AUR.

Please note: kernel26-xen-dom0 is currently marked as out of date in the AUR and has not yet been updated to 2.6.36, so the build will prompt you for the new 2.6.36 configuration options. Accepting the defaults (or whatever else you may want) is fine; just press Enter each time you are asked.
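
If you build the package manually with makepkg, the new configuration prompts can also be answered with their defaults automatically; a hedged sketch (piping empty lines makes every prompt take its default answer):

cd kernel26-xen-dom0     ## the directory containing the PKGBUILD from the AUR
yes "" | makepkg -s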

Once the build is finished, you can configure GRUB and boot into the newly built kernel.

Configuring GRUB

GRUB must be configured so that the Xen hypervisor is booted, followed by the dom0 kernel. Add the following entry to /boot/grub/menu.lst:

title Xen with Arch Linux
root (hd0,X)
kernel /xen.gz dom0_mem=524288
module /vmlinuz26-xen-dom0 root=/dev/sdaY ro console=tty0
module /kernel26-xen-dom0.img

where X and Y are the appropriate numbers for your disk configuration, and dom0_mem, console, and vga are optional, customizable parameters. A nice detail: you can use LVM volumes too, so instead of a plain partition such as /dev/sdaY you can also specify an LVM logical volume as the root device, as in the example below.
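
For example, with the root filesystem on an LVM logical volume the module line might look like this (volume group and logical volume names are hypothetical):

module /vmlinuz26-xen-dom0 root=/dev/mapper/vg0-root ro console=tty0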

The standard Arch kernel can be used to boot the domUs. In order for this to work, 'xen-blkfront' must be added to the MODULES array in /etc/mkinitcpio.conf:

MODULES="... xen-blkfront ..."
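
After adding the module, rebuild the initramfs so that the change actually ends up in the image (a sketch assuming the standard kernel26 preset):

mkinitcpio -p kernel26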

Next, reboot into the Xen kernel and start xend:

# /etc/rc.d/xend start
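
To have xend start on every boot, it can also be added to the DAEMONS array in /etc/rc.conf; a quick sanity check is to list the running domains, which at this point should show only Domain-0:

DAEMONS=( ... xend ... )    ## in /etc/rc.conf
xm list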

Allocating a fixed amount of memory is recommended when using Xen. Also, if you are running I/O intensive guests, it might be a good idea to dedicate (pin) a CPU core for dom0's exclusive use. See the section "Can I dedicate a cpu core (or cores) only for dom0?" on the external XenCommonProblems wiki page for more information.
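
A hedged sketch of pinning dom0 to the first physical core with the xm toolstack (vCPU and core numbers are illustrative; the linked page also describes the boot-time parameters for this):

xm vcpu-pin Domain-0 0 0    ## pin dom0's vCPU 0 to physical CPU 0
xm vcpu-list Domain-0       ## verify the pinning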

Configuring GRUB2

This works just like with GRUB legacy, except that the 'multiboot' command is used instead of 'kernel'. So the entry becomes:

# (2) Arch Linux(XEN)
menuentry "Arch Linux(XEN)" {
    set root=(hd0,X)
    multiboot /boot/xen.gz dom0_mem=2048M
    module /boot/vmlinuz26-xen-dom0 root=/dev/sdaY ro
    module /boot/kernel26-xen-dom0.img
}

If booting into the dom0 kernel succeeded, we can continue with adding domU instances.

Add domU instances

The basic idea behind adding a domU is as follows. We must get the domU kernels, allocate space for the virtual hard disk, create a configuration file for the domU, and finally start the domU with xm.

$ mkfs.ext4 /dev/sdb1    ## format partition
$ mkdir /tmp/install
$ mount /dev/sdb1 /tmp/install
$ mkdir -p /tmp/install/{dev,proc,sys} /tmp/install/var/lib/pacman /tmp/install/var/cache/pacman/pkg
$ mount -o bind /dev /tmp/install/dev
$ mount -t proc none /tmp/install/proc
$ mount -o bind /sys /tmp/install/sys
$ pacman -Sy -r /tmp/install --cachedir /tmp/install/var/cache/pacman/pkg -b /tmp/install/var/lib/pacman base
$ cp -r /etc/pacman* /tmp/install/etc
$ chroot /tmp/install /bin/bash
$ vi /etc/resolv.conf
$ vi /etc/fstab
    /dev/xvda               /           ext4    defaults                0       1

$ vi /etc/inittab
    c1:2345:respawn:/sbin/agetty -8 38400 hvc0 linux
    #c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux
    #c2:2345:respawn:/sbin/agetty -8 38400 tty2 linux
    #c3:2345:respawn:/sbin/agetty -8 38400 tty3 linux
    #c4:2345:respawn:/sbin/agetty -8 38400 tty4 linux
    #c5:2345:respawn:/sbin/agetty -8 38400 tty5 linux
    #c6:2345:respawn:/sbin/agetty -8 38400 tty6 linux

$ exit  ## exit chroot
$ umount /tmp/install/dev
$ umount /tmp/install/proc
$ umount /tmp/install/sys
$ umount /tmp/install

If you are not starting from a fresh install and want to rsync from an existing system instead:

$ mkfs.ext4 /dev/sdb1    ## format lv partition
$ mkdir /tmp/install
$ mount /dev/sdb1 /tmp/install
$ mkdir /tmp/install/{proc,sys}
$ chmod 555 /tmp/install/proc
$ rsync -avzH --delete --exclude=proc/ --exclude=sys/ old_ooga:/ /tmp/install/

$ vi /etc/xen/dom01     ## create config file
    #  -*- mode: python; -*-
    kernel = "/boot/vmlinuz26"
    ramdisk = "/boot/kernel26.img"
    memory = 1024
    name = "dom01"
    vif = [ 'mac=00:16:3e:00:01:01' ]
    disk = [ 'phy:/dev/sdb1,xvda,w' ]
    hostname = "ooga"
    root = "/dev/xvda ro"

$ xm create -c dom01
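
Once the domain is running, it can be managed from dom0 with the usual xm subcommands, for example:

xm list                  ## show all running domains
xm console dom01         ## attach to the guest console (detach with Ctrl-])
xm shutdown dom01        ## cleanly shut the guest down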

Hardware Virtualization

If we want to get hardware virtualization in our domUs, the host system hardware must include either Intel-VT or AMD-V virtualization support. In order to verify this, run the following commands on the host system:

For Intel CPUs:

grep vmx /proc/cpuinfo

For AMD CPUs:

grep svm /proc/cpuinfo

If neither of the above commands produce output then it is likely these features are unavailable and that your hardware is unable to run Xen HVM guests. It is also possible that the host CPU supports one of these features, but that the functionality is disabled by default in the system BIOS. To verify this, access the host system’s BIOS configuration menu during the boot process and look for an option related to virtualization support. If such an option exists and is disabled, enable it, boot the system and repeat the above commands.
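
Once booted under the hypervisor, you can also ask Xen itself whether HVM guests are supported (a sketch; the exact capability strings depend on the CPU):

xm info | grep xen_caps    ## hvm-3.0-x86_32 / hvm-3.0-x86_64 entries indicate HVM support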

Arch as Xen guest (PV mode)

To get paravirtualization you need to install the kernel26-xen package (available in the AUR) inside the guest.

Switch the guest to PV mode with the following commands (run on the dom0):

 xe vm-param-set uuid=<vm uuid> HVM-boot-policy=""
 xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub
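
The UUID used above can be looked up on the dom0 as well (a sketch; the name-label is whatever the guest is called):

 xe vm-list name-label=<vm name> params=uuid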

Edit /boot/grub/menu.lst inside the guest and add an entry for kernel26-xen:

 # (1) Arch Linux (domU)
 title  Arch Linux (domU)
 root   (hd0,0)
 kernel /boot/vmlinuz26-xen root=/dev/xvda1 ro console=hvc0
 initrd /boot/kernel26-xen.img

Add the needed Xen frontend modules to your initcpio by appending them (at minimum xen-blkfront) to the MODULES array in /etc/mkinitcpio.conf, then rebuild your initramfs:

 mkinitcpio -p kernel26-xen

Uncomment the following line in /etc/inittab to enable console login:

 h0:2345:respawn:/sbin/agetty -8 38400 hvc0 linux


To use xe-guest-utilities, add a xenfs mount point to /etc/fstab:

 xenfs                  /proc/xen     xenfs     defaults            0      0

and add xenfs to the MODULES array in /etc/rc.conf.


Xen Management Tools

The "Virtual Machine Manager" application is a desktop user interface for managing virtual machines. It presents a summary view of running domains, their live performance & resource utilization statistics. The detailed view graphs performance & utilization over time. Wizards enable the creation of new domains, and configuration & adjustment of a domain's resource allocation & virtual hardware. An embedded VNC client viewer presents a full graphical console to the guest domain.

yaourt -S virt-manager-light

Useful Packages

As there are quite a few Xen-related packages available in the AUR and it can be hard to figure out what is needed, here is a small collection of the most interesting ones (last updated: 23.5.2010):

  • Open source multiplatform clone of XenCenter frontend: openxencenter
  • Open source multiplatform clone of XenCenter frontend (svn version): openxencenter-svn
  • Xen Cloud Platform frontend: xvp