Xen

From ArchWiki

Revision as of 07:17, 28 February 2012

This document explains how to set up Xen within Arch Linux.

What is Xen?

According to the Xen development team:

"The Xen hypervisor, the powerful open source industry standard for virtualization, offers a powerful, efficient, and secure feature set for virtualization of x86, x86_64, IA64, PowerPC, and other CPU architectures. It supports a wide range of guest operating systems including Windows®, Linux®, Solaris®, and various versions of the BSD operating systems."

The Xen hypervisor is a thin layer of software which emulates a computer architecture. It is started by the boot loader and allows several operating systems to run simultaneously on top of it. Once the Xen hypervisor is loaded, it starts the "Dom0" (for "domain 0"), or privileged domain, which in our case runs a modified Linux kernel (other possible Dom0 operating systems are NetBSD and OpenSolaris). The hardware must, of course, be supported by this kernel to run Xen. Once the Dom0 has started, one or more "DomU" domains can be started and controlled from Dom0.

Setting up Xen host (Dom0)

Installing required packages

A xenAUR package is available in the AUR.

Xen-tools is a collection of simple Perl scripts which allow you to easily create new guest Xen domains. xen-toolsAUR is also available in the AUR.

Configuring GRUB

Grub must be configured so that the Xen hypervisor is booted followed by the dom0 kernel. Add the following entry to /boot/grub/menu.lst:

title Xen with Arch Linux
root (hd0,0)
kernel /xen-version.gz dom0_mem=524288
module /vmlinuz26-xen-dom0 root=/dev/sda2 ro console=tty0
module /kernel26-xen-dom0.img

where dom0_mem and console are optional, customizable parameters.

The standard Arch kernel can be used to boot the domUs. In order for this to work, one must add xen-blkfront to the MODULES array in /etc/mkinitcpio.conf:

MODULES="... xen-blkfront ..."
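
After editing /etc/mkinitcpio.conf, regenerate the initramfs so the module is actually included. The preset name below assumes the stock kernel26 kernel of this era; substitute your own kernel's preset if it differs:

 # mkinitcpio -p kernel26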

Next, reboot into the Xen kernel. Then start xend:

# rc.d start xend

Allocating a fixed amount of memory is recommended when using Xen. Also, if you are running IO-intensive guests, it might be a good idea to dedicate (pin) a CPU core for dom0's exclusive use. See the external XenCommonProblems wiki page, section "Can I dedicate a cpu core (or cores) only for dom0?", for more information.
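
The Xen hypervisor itself accepts boot options for this on the kernel (or multiboot) line of the GRUB entry. As a sketch, reusing the hypervisor image name from the entry above, dom0 can be restricted and pinned to a single vCPU with the dom0_max_vcpus and dom0_vcpus_pin hypervisor options:

 kernel /xen-version.gz dom0_mem=524288 dom0_max_vcpus=1 dom0_vcpus_pin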

Configuring GRUB2

This works just like with GRUB, except that the command 'multiboot' is used instead of 'kernel'. So it becomes:

# (2) Arch Linux(XEN)
menuentry "Arch Linux(XEN)" {
    set root=(hd0,X)
    multiboot /boot/xen.gz dom0_mem=2048M
    module /boot/vmlinuz26-xen-dom0 root=/dev/sdaY ro
    module /boot/kernel26-xen-dom0.gz
}

Once you have successfully booted into the Dom0 kernel, you can continue.

Adding DomU instances

The basic idea behind adding a DomU is as follows: obtain the DomU kernel, allocate space for the virtual hard disk, create a configuration file for the DomU, and finally start it with xm.

## /dev/sdb1 is an example of a block device
$ mkfs.ext4 /dev/sdb1    ## format partition
$ mkdir /tmp/install
$ mount /dev/sdb1 /tmp/install
$ mkdir -p /tmp/install/{dev,proc,sys} /tmp/install/var/lib/pacman /tmp/install/var/cache/pacman/pkg
$ mount -o bind /dev /tmp/install/dev
$ mount -t proc none /tmp/install/proc
$ mount -o bind /sys /tmp/install/sys
$ pacman -Sy -r /tmp/install --cachedir /tmp/install/var/cache/pacman/pkg -b /tmp/install/var/lib/pacman base
$ cp -r /etc/pacman* /tmp/install/etc
$ chroot /tmp/install /bin/bash
$ vi /etc/resolv.conf
$ vi /etc/fstab
    /dev/xvda               /           ext4    defaults                0       1

$ vi /etc/inittab
    c1:2345:respawn:/sbin/agetty -8 38400 hvc0 linux
    #c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux
    #c2:2345:respawn:/sbin/agetty -8 38400 tty2 linux
    #c3:2345:respawn:/sbin/agetty -8 38400 tty3 linux
    #c4:2345:respawn:/sbin/agetty -8 38400 tty4 linux
    #c5:2345:respawn:/sbin/agetty -8 38400 tty5 linux
    #c6:2345:respawn:/sbin/agetty -8 38400 tty6 linux


$ exit  ## exit chroot
$ umount /tmp/install/dev
$ umount /tmp/install/proc
$ umount /tmp/install/sys
$ umount /tmp/install

If you are not starting from a fresh install and want to rsync from an existing system:

$ mkfs.ext4 /dev/sdb1    ## format lv partition
$ mkdir /tmp/install
$ mount /dev/sdb1 /tmp/install
$ mkdir /tmp/install/{proc,sys}
$ chmod 555 /tmp/install/proc
$ rsync -avzH --delete --exclude=proc/ --exclude=sys/ old_ooga:/ /tmp/install/

$ vi /etc/xen/dom01     ## create config file
    #  -*- mode: python; -*-
    kernel = "/boot/vmlinuz26"
    ramdisk = "/boot/kernel26.img"
    memory = 1024
    name = "dom01"
    vif = [ 'mac=00:16:3e:00:01:01' ]
    disk = [ 'phy:/dev/sdb1,xvda,w' ]
    dhcp="dhcp"
    hostname = "ooga"
    root = "/dev/xvda ro"

$ xm create -c dom01
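
Once the domain is up, it can be managed from dom0 with the standard xm subcommands; the domain name below assumes the dom01 example above, and these commands only work on a running Xen dom0:

 # xm list              ## show running domains
 # xm console dom01     ## attach to the guest console (detach with Ctrl-])
 # xm shutdown dom01    ## cleanly shut the guest down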

Hardware virtualization

If we want to get hardware virtualization in our domUs, the host system hardware must include either Intel VT-x or AMD-V (SVM) virtualization support. In order to verify this, run the following command on the host system:

grep -E "(vmx|svm)" --color=always /proc/cpuinfo

If the above command does not produce output, then hardware virtualization support is unavailable and your hardware is unable to run Xen HVM guests. It is also possible that the host CPU supports one of these features, but that the functionality is disabled by default in the system BIOS. To verify this, access the host system's BIOS configuration menu during the boot process and look for an option related to virtualization support. If such an option exists and is disabled, then enable it, boot the system and repeat the above command.
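
With hardware virtualization available, an HVM guest uses a different style of configuration file than the PV example above. The following is a minimal sketch in the classic xm config format; the hvmloader and qemu-dm paths, the /dev/sdb2 backing device, and the hvm01 name are assumptions that may differ on your system:

 #  -*- mode: python; -*-
 # Hypothetical HVM guest; adjust paths and devices to your installation.
 kernel = "/usr/lib/xen/boot/hvmloader"
 builder = "hvm"
 device_model = "/usr/lib/xen/bin/qemu-dm"
 memory = 1024
 name = "hvm01"
 vif = [ 'type=ioemu, mac=00:16:3e:00:02:01' ]
 disk = [ 'phy:/dev/sdb2,hda,w' ]
 boot = "c"
 vnc = 1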

Arch as Xen guest (PVHVM mode)

For kernel versions greater than 3.0 you should add

 MODULES="... xen-platform-pci xen_netfront xen-blkfront xen-pcifront xenfs ..."

whenever the kernel fails to detect the hard disk drive or network card.

If the network card still fails to load, or shows a MAC address of 00:00:00:00:00:00, try adding

 xen_emul_unplug=never

to your kernel parameters, or remove the

 ioemu

network parameter on the dom0 side.

Warning: once the Xen disk driver is loaded, sd* disks are renamed to xvd*; remember to update /etc/fstab and GRUB's menu.lst accordingly.

E.g., sda1 becomes xvda1, and sdh4 becomes xvdh4.
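
The fstab change can be scripted. The sketch below works on a copy first so you can review the result before touching the real /etc/fstab; /tmp/fstab.test and the sample entries are purely illustrative:

```shell
# Sample fstab entries; /tmp/fstab.test is a hypothetical scratch file.
printf '/dev/sda1 / ext4 defaults 0 1\n/dev/sdh4 /home ext4 defaults 0 2\n' > /tmp/fstab.test
# Rename sd* block devices to their xvd* equivalents.
sed -i 's|/dev/sd|/dev/xvd|g' /tmp/fstab.test
cat /tmp/fstab.test
```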

Arch as Xen guest (PV mode)

To get paravirtualization you need a kernel with Xen guest support (DomU) enabled. By default, stock x86_64 linux and kernel26-lts from core have it enabled as modules. However, stock i686 linux and kernel26-lts do not support DomU due to the default HIGHMEM option (https://bugs.archlinux.org/task/24207?project=1).

For i686 DomU, you may install linux-xen or kernel26-lts-xen from AUR. Both of them have Xen guest support enabled.

For Arch running on XenServer, you may also install the optional xe-guest-utilities (XenServer Tools) from the AUR.

On Arch guest (DomU)

Uncomment the following line in /etc/inittab to enable console login:

 h0:2345:respawn:/sbin/agetty -8 38400 hvc0 linux

Edit /boot/grub/menu.lst (examples, modify it according to your own config):

For x86_64 linux:

 # (0) Arch Linux (DomU)
 title  Arch Linux (DomU)
 root   (hd0,0)
 kernel /boot/vmlinuz-linux root=/dev/xvda1 ro console=hvc0
 initrd /boot/initramfs-linux.img

For x86_64 linux-lts:

 # (0) Arch Linux LTS (DomU)
 title  Arch Linux LTS (DomU)
 root   (hd0,0)
 kernel /boot/vmlinuz-linux-lts root=/dev/xvda1 ro console=hvc0
 initrd /boot/initramfs-linux-lts.img

For i686 linux-xen:

 # (0) Arch Linux Xen (DomU)
 title  Arch Linux Xen (DomU)
 root   (hd0,0)
 kernel /boot/vmlinuz-linux-xen root=/dev/xvda1 ro console=hvc0
 initrd /boot/initramfs-linux-xen.img

For i686 linux-lts-xen:

 # (0) Arch Linux LTS Xen (DomU)
 title  Arch Linux LTS Xen (DomU)
 root   (hd0,0)
 kernel /boot/vmlinuz-linux-lts-xen root=/dev/xvda1 ro console=hvc0
 initrd /boot/initramfs-linux-lts-xen.img

Edit /etc/fstab and change sd* to xvd* (e.g., sda1 to xvda1).

Add the following Xen modules to your initcpio by appending "xen-blkfront xen-fbfront xenfs xen-netfront xen-kbdfront" to the MODULES line in /etc/mkinitcpio.conf, then rebuild your initramfs:
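
The resulting line in /etc/mkinitcpio.conf would then look something like the following (keep any modules already listed there):

 MODULES="... xen-blkfront xen-fbfront xenfs xen-netfront xen-kbdfront ..."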

For x86_64 linux:

 mkinitcpio -p linux

For x86_64 linux-lts:

 mkinitcpio -p linux-lts

For i686 linux-xen:

 mkinitcpio -p linux-xen

For i686 linux-lts-xen:

 mkinitcpio -p linux-lts-xen

Note: you may also compile Xen guest support into your own custom kernel (built in, not as modules). In that case, you do not need to add any Xen modules in /etc/mkinitcpio.conf.

xe-guest-utilities (XenServer Tools)

To use xe-guest-utilities, add xenfs mount point into /etc/fstab:

 xenfs                  /proc/xen     xenfs     defaults            0      0

and add xe-linux-distribution to the DAEMONS array in /etc/rc.conf.

Now shut down the guest and log in on the host as root.

On Xen host (Dom0)

Get the <vm uuid>:

 # xe vm-list

Change the mode to PV with the following commands:

 # xe vm-param-set uuid=<vm uuid> HVM-boot-policy=""
 # xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub
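
To double-check that the change took effect, the parameters can be read back with xe vm-param-list on the host; the grep pattern here is just illustrative:

 # xe vm-param-list uuid=<vm uuid> | grep -E 'HVM-boot-policy|PV-bootloader'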

Now you may boot your VM guest, and it should (hopefully) be running in PV mode.

Notes

  • pygrub does not show a boot menu, and some versions of pygrub do not support an LZMA-compressed stock kernel. (https://bbs.archlinux.org/viewtopic.php?id=118525)
  • To avoid hwclock error messages, set HARDWARECLOCK="xen" in /etc/rc.conf (actually you can use any value here except "UTC" and "localtime")
  • If you want to return to hardware VM, set HVM-boot-policy="BIOS order"
  • If you get a kernel panic when booting Xen and it suggests 'use apic="debug" and send an error report', try setting noapic on the kernel line in menu.lst

Useful Packages

  • Virtual Machine Manager - a desktop user interface for managing virtual machines: virt-manager
  • Open source multiplatform clone of XenCenter frontend (svn version): openxencenter-svn
  • Xen Cloud Platform frontend: xvp

Resources