Xen

This document explains how to set up Xen within Arch Linux.

What is Xen?

According to the Xen development team:

"The Xen hypervisor, the powerful open source industry standard for virtualization, offers a powerful, efficient, and secure feature set for virtualization of x86, x86_64, IA64, PowerPC, and other CPU architectures. It supports a wide range of guest operating systems including Windows®, Linux®, Solaris®, and various versions of the BSD operating systems."

The Xen hypervisor is a thin layer of software which emulates a computer architecture. It is started by the boot loader and allows several operating systems to run simultaneously on top of it. Once the Xen hypervisor is loaded, it starts the "Dom0" (for "domain 0"), or privileged domain, which in our case runs a Linux kernel (other possible Dom0 operating systems are NetBSD and OpenSolaris). The hardware must, of course, be supported by this kernel to run Xen. Once the Dom0 has started, one or more "DomU" domains can be started and controlled from Dom0.

Setting up Xen host (Dom0)

Installing required packages

A xenAUR package is available in the AUR, containing the hypervisor itself.

xen-tools is a collection of simple Perl scripts which allow you to easily create new guest Xen domains. xen-toolsAUR is also available in the AUR.

Prepare a Dom0 kernel

The stock x64 Arch kernel is compiled with Dom0 support by default. The stock i686 kernel, however, is not. It is highly recommended that you run your Dom0 on the x64 architecture. This will not preclude the ability to run i686 DomUs, and will increase the performance of all virtual machines.

To check whether your running kernel can be used as a Dom0 kernel, run the following command:

zgrep CONFIG_XEN /proc/config.gz

If there are lines like CONFIG_XEN=y and CONFIG_XEN_DOM0=y in the output, your kernel is good. If not, you may need to compile a kernel from source, with Xen enabled. See Kernels/Compilation/Traditional or Kernels/Compilation/Arch Build System for further instructions.

The standard Arch kernel can be used to boot the DomUs. In order for this to work, add xen-blkfront to the MODULES array in /etc/mkinitcpio.conf:

MODULES="... xen-blkfront ..."

The next step is to reboot into the Xen-enabled kernel.

Configuring GRUB2

GRUB2 must be configured so that the Xen hypervisor is booted first, followed by the Dom0 kernel (which may now be the standard Linux kernel already present in /boot). Add the following entry to /boot/grub/grub.cfg (customizing the locations of the kernel and modules according to what is present in your /boot directory):

# (2) Arch Linux(XEN)
menuentry "Arch Linux(XEN)" {
    set root=(hd0,X)
    multiboot /boot/xen.gz dom0_mem=2048M
    module /boot/vmlinuz26-xen-dom0 root=/dev/sdaY ro
    module /boot/kernel26-xen-dom0.gz
}

Next step: start xend:

# rc.d start xend
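
To start xend automatically at boot, it can also be added to the DAEMONS array in /etc/rc.conf (a sketch assuming the initscripts setup used elsewhere on this page):

 DAEMONS=(... xend ...)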

Allocating a fixed amount of memory for Dom0 is recommended when using Xen. Also, if you are running I/O-intensive guests, it might be a good idea to dedicate (pin) a CPU core for Dom0 use only. Please see the external XenCommonProblems wiki page section "Can I dedicate a cpu core (or cores) only for dom0?" for more information.
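
Both settings can be made on the Xen command line in the GRUB entry above. A minimal sketch (dom0_mem, dom0_max_vcpus and dom0_vcpus_pin are Xen boot options; the values are only illustrative, and fully dedicating a core also involves restricting the guests, as described on the linked page):

    multiboot /boot/xen.gz dom0_mem=2048M dom0_max_vcpus=1 dom0_vcpus_pin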

If the Dom0 kernel boots successfully, continue with the next steps.

Configuring Syslinux

To load Xen-based kernels you have to use the Syslinux multiboot mboot.c32 module. Copy the COM32 module to your syslinux folder:

# cp /usr/lib/syslinux/mboot.c32 /boot/syslinux/

If /boot is the same partition as /, a symlink will also work:

# ln -s /usr/lib/syslinux/mboot.c32 /boot/syslinux/

Then add the following entry to syslinux.cfg (customizing the locations of the kernel and modules according to what is present in your /boot directory):

# nano /boot/syslinux/syslinux.cfg
LABEL arch
     MENU LABEL Arch Linux (XEN)
     KERNEL mboot.c32
     APPEND ../xen-<version>.gz dom0_mem=262144 --- ../vmlinuz-linux console=tty0 root=/dev/mapper/vg0-root ro --- ../initramfs-linux.img

Adding DomU instances

The basic idea behind adding a DomU is as follows. We must get the DomU kernels, allocate space for the virtual hard disk, create a configuration file for the DomU, and finally start the DomU with xm.

## /dev/sdb1 is an example of a block device
$ mkfs.ext4 /dev/sdb1    ## format partition
$ mkdir /tmp/install
$ mount /dev/sdb1 /tmp/install
$ mkdir -p /tmp/install/{dev,proc,sys} /tmp/install/var/lib/pacman /tmp/install/var/cache/pacman/pkg
$ mount -o bind /dev /tmp/install/dev
$ mount -t proc none /tmp/install/proc
$ mount -o bind /sys /tmp/install/sys
$ pacman -Sy -r /tmp/install --cachedir /tmp/install/var/cache/pacman/pkg -b /tmp/install/var/lib/pacman base
$ cp -r /etc/pacman* /tmp/install/etc
$ chroot /tmp/install /bin/bash
$ vi /etc/resolv.conf
$ vi /etc/fstab
    /dev/xvda               /           ext4    defaults                0       1

$ vi /etc/inittab
    c1:2345:respawn:/sbin/agetty -8 38400 hvc0 linux
    #c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux
    #c2:2345:respawn:/sbin/agetty -8 38400 tty2 linux
    #c3:2345:respawn:/sbin/agetty -8 38400 tty3 linux
    #c4:2345:respawn:/sbin/agetty -8 38400 tty4 linux
    #c5:2345:respawn:/sbin/agetty -8 38400 tty5 linux
    #c6:2345:respawn:/sbin/agetty -8 38400 tty6 linux


$ exit  ## exit chroot
$ umount /tmp/install/dev
$ umount /tmp/install/proc
$ umount /tmp/install/sys
$ umount /tmp/install

If you are not starting from a fresh install and want to rsync from an existing system instead:

$ mkfs.ext4 /dev/sdb1    ## format lv partition
$ mkdir /tmp/install
$ mount /dev/sdb1 /tmp/install
$ mkdir /tmp/install/{proc,sys}
$ chmod 555 /tmp/install/proc
$ rsync -avzH --delete --exclude=proc/ --exclude=sys/ old_ooga:/ /tmp/install/

$ vi /etc/xen/dom01     ## create config file
    #  -*- mode: python; -*-
    kernel = "/boot/vmlinuz26"
    ramdisk = "/boot/kernel26.img"
    memory = 1024
    name = "dom01"
    vif = [ 'mac=00:16:3e:00:01:01' ]
    disk = [ 'phy:/dev/sdb1,xvda,w' ]
    dhcp="dhcp"
    hostname = "ooga"
    root = "/dev/xvda ro"

$ xm create -c dom01
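
Once the DomU is up, it can be managed from Dom0 with the other xm subcommands, for example:

$ xm list                 ## list running domains
$ xm console dom01        ## attach to the DomU console (Ctrl+] to detach)
$ xm shutdown dom01       ## cleanly shut down the DomU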

Hardware virtualization

If we want to get hardware virtualization in our domUs, the host system hardware must include either Intel VT-x or AMD-V (SVM) virtualization support. In order to verify this, run the following command on the host system:

grep -E "(vmx|svm)" --color=always /proc/cpuinfo

If the above command does not produce output, then hardware virtualization support is unavailable and your hardware is unable to run Xen HVM guests. It is also possible that the host CPU supports one of these features, but that the functionality is disabled by default in the system BIOS. To verify this, access the host system's BIOS configuration menu during the boot process and look for an option related to virtualization support. If such an option exists and is disabled, then enable it, boot the system and repeat the above command.

Arch as Xen guest (PVHVM mode)

For kernel versions greater than 3.0, you should add

 MODULES = (... xen-platform-pci xen_netfront xen-blkfront xen-pcifront xenfs ...)

whenever the kernel fails to detect the hard disk drive or network card.

If the network card still fails to load, or shows a MAC address of 00:00:00:00:00:00, try adding

 xen_emul_unplug=never

to your kernel parameters, or remove the

 ioemu

network parameter on the Dom0 side.
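
For example (a sketch assuming a GRUB legacy menu.lst entry similar to the ones shown below; the root device is illustrative only), the parameter is simply appended to the guest's kernel line:

 kernel /boot/vmlinuz-linux root=/dev/sda1 ro xen_emul_unplug=never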

Warning: when the Xen disk driver is loaded, sd* disks are renamed to xvd*; remember to update /etc/fstab and GRUB's menu.lst accordingly.

E.g., sda1 becomes xvda1 and sdh4 becomes xvdh4.

Arch as Xen guest (PV mode)

To get paravirtualization you need a kernel with Xen guest support (DomU) enabled. By default, the stock x86_64 linux and kernel26-lts kernels from core have it enabled as modules. However, the stock i686 linux and kernel26-lts kernels do not support DomU due to the default HIGHMEM option (https://bugs.archlinux.org/task/24207?project=1).

For i686 DomU, you may install linux-xen or kernel26-lts-xen from AUR. Both of them have Xen guest support enabled.
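
If you are unsure whether the kernel you are running has DomU support, a check analogous to the Dom0 one can be used (assuming /proc/config.gz is available; the frontend driver options shown are examples of what to look for):

 zgrep -E "CONFIG_XEN=|CONFIG_XEN_BLKDEV_FRONTEND|CONFIG_XEN_NETDEV_FRONTEND" /proc/config.gz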

For Arch running on XenServer, you may also install optional xe-guest-utilities (XenServer Tools) from AUR.

On Arch guest (DomU)

Uncomment the following line in /etc/inittab to enable console login:

 h0:2345:respawn:/sbin/agetty -8 38400 hvc0 linux

Edit /boot/grub/menu.lst (these are examples; modify them according to your own configuration):

For x86_64 linux:

 # (0) Arch Linux (DomU)
 title  Arch Linux (DomU)
 root   (hd0,0)
 kernel /boot/vmlinuz-linux root=/dev/xvda1 ro console=hvc0
 initrd /boot/initramfs-linux.img

For x86_64 linux-lts:

 # (0) Arch Linux LTS (DomU)
 title  Arch Linux LTS (DomU)
 root   (hd0,0)
 kernel /boot/vmlinuz-linux-lts root=/dev/xvda1 ro console=hvc0
 initrd /boot/initramfs-linux-lts.img

For i686 linux-xen:

 # (0) Arch Linux Xen (DomU)
 title  Arch Linux Xen (DomU)
 root   (hd0,0)
 kernel /boot/vmlinuz-linux-xen root=/dev/xvda1 ro console=hvc0
 initrd /boot/initramfs-linux-xen.img

For i686 linux-lts-xen:

 # (0) Arch Linux LTS Xen (DomU)
 title  Arch Linux LTS Xen (DomU)
 root   (hd0,0)
 kernel /boot/vmlinuz-linux-lts-xen root=/dev/xvda1 ro console=hvc0
 initrd /boot/initramfs-linux-lts-xen.img

Edit /etc/fstab and change sd* to xvd* (e.g., sda1 to xvda1).
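
For example, the root entry would then look something like this (a sketch; adjust device names and filesystem to your own layout):

 /dev/xvda1               /           ext4    defaults                0       1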

Add the following Xen modules to your initramfs by appending them to the MODULES line in /etc/mkinitcpio.conf: xen-blkfront xen-fbfront xenfs xen-netfront xen-kbdfront.
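
The resulting line should look something like:

 MODULES="... xen-blkfront xen-fbfront xenfs xen-netfront xen-kbdfront ..."

Then rebuild your initramfs for the kernel in use: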

For x86_64 linux:

 mkinitcpio -p linux

For x86_64 linux-lts:

 mkinitcpio -p linux-lts

For i686 linux-xen:

 mkinitcpio -p linux-xen

For i686 linux-lts-xen:

 mkinitcpio -p linux-lts-xen

Note: you may also compile Xen guest support into your own custom kernel (built in rather than as modules). In that case, you do not need to add any Xen modules to /etc/mkinitcpio.conf.

xe-guest-utilities (XenServer Tools)

To use xe-guest-utilities, add xenfs mount point into /etc/fstab:

 xenfs                  /proc/xen     xenfs     defaults            0      0

and add xe-linux-distribution to the DAEMONS array in /etc/rc.conf.
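
The DAEMONS line would then look something like:

 DAEMONS=(... xe-linux-distribution ...)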

Now shut down the guest and log in to the host as root.

On Xen host (Dom0)

Get <vm uuid>:

 # xe vm-list

Change the boot mode to PV with the following commands:

 # xe vm-param-set uuid=<vm uuid> HVM-boot-policy=""
 # xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub

Now you may boot your guest VM and it should (hopefully) be running in PV mode.

Notes

  • pygrub does not show a boot menu, and some versions of pygrub do not support the LZMA-compressed stock kernel. (https://bbs.archlinux.org/viewtopic.php?id=118525)
  • To avoid hwclock error messages, set HARDWARECLOCK="xen" in /etc/rc.conf (any value other than "UTC" and "localtime" will work).
  • If you want to return to HVM mode, set HVM-boot-policy="BIOS order".
  • If you get a kernel panic when booting Xen and it suggests 'use apic="debug" and send an error report', try setting noapic on the kernel line in menu.lst

Useful Packages

  • Virtual Machine Manager - a desktop user interface for managing virtual machines: virt-manager
  • Open source multiplatform clone of XenCenter frontend (svn version): openxencenter-svn
  • Xen Cloud Platform frontend: xvp

Resources