Xen

From ArchWiki
Revision as of 01:37, 24 October 2012 by Cbradski2012 (Talk | contribs) (Create a systemd service file)


This document explains how to set up Xen 4.2 within Arch Linux.

What is Xen?

According to the Xen development team:

"The Xen hypervisor, the powerful open source industry standard for virtualization, offers a powerful, efficient, and secure feature set for virtualization of x86, x86_64, IA64, PowerPC, and other CPU architectures. It supports a wide range of guest operating systems including Windows®, Linux®, Solaris®, and various versions of the BSD operating systems."

The Xen hypervisor is a thin layer of software which emulates a computer architecture. It is started by the boot loader and allows several operating systems to run simultaneously on top of it. Once the Xen hypervisor is loaded, it starts the "Dom0" (for "domain 0"), or privileged domain, which in our case runs a Linux kernel (other possible Dom0 operating systems are NetBSD and OpenSolaris). The hardware must, of course, be supported by this kernel to run Xen. Once the Dom0 has started, one or more "DomU" domains can be started and controlled from Dom0.

Types of Virtualization Available with Xen

Paravirtual (PV)

Paravirtualized guests require a kernel with support for Xen built in. This is default for all recent Linux kernels and some other Unix-like systems. Paravirtualized domUs usually run faster as they do not have to run in emulated hardware.

Hardware Virtual (HVM)

For hardware virtualization in our domUs, the host system hardware must include either Intel VT-x or AMD-V (SVM) virtualization support. In order to verify this, run the following command on the host system:

grep -E "(vmx|svm)" --color=always /proc/cpuinfo

If the above command does not produce output, then hardware virtualization support is unavailable and your hardware is unable to run Xen HVM guests. It is also possible that the host CPU supports one of these features, but that the functionality is disabled by default in the system BIOS. To verify this, access the host system's BIOS configuration menu during the boot process and look for an option related to virtualization support. If such an option exists and is disabled, then enable it, boot the system and repeat the above command.
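To see at a glance which vendor's extension the host offers, the flag test above can be wrapped in a small sketch. The flag line below is a hard-coded sample (an assumption for illustration); on a real host read it from /proc/cpuinfo instead:

```shell
# Hedged sketch: classify the CPU's virtualization support.
# Sample flag line; on a real host use:
#   flags=$(grep -m1 '^flags' /proc/cpuinfo)
flags="fpu vme de pse tsc msr pae mce cx8 vmx"
case " $flags " in
  *" vmx "*) virt="Intel VT-x" ;;   # Intel hardware virtualization
  *" svm "*) virt="AMD-V" ;;        # AMD hardware virtualization
  *)         virt="none" ;;         # HVM guests will not be possible
esac
echo "hardware virtualization: $virt"
```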

Paravirtual on Hardware (PV on HVM)

There is a third mode, in which a paravirtualization-aware kernel runs inside a hardware virtual (HVM) guest, combining the wider compatibility of HVM with the performance of paravirtualized drivers.

Recommended Practices

Allocating a fixed amount of memory to dom0 is recommended when using Xen. Also, if you are running I/O-intensive guests, it may be a good idea to dedicate (pin) a CPU core for dom0 use only. Please see the section "Can I dedicate a cpu core (or cores) only for dom0?" on the external XenCommonProblems wiki page for more information.
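As a sketch, both recommendations can be applied from the bootloader entry: the Xen command line accepts dom0_mem, dom0_max_vcpus and dom0_vcpus_pin options (adjust the values to your hardware):

```
multiboot /boot/xen.gz dom0_mem=1024M dom0_max_vcpus=1 dom0_vcpus_pin
```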

Obtaining Xen

The Xen package is currently unmaintained in the Arch repositories, so for the moment Xen must be built from source.

Building and Installing Xen Hypervisor and Dom0 Host from Source

Xen recommends that a Xen host (dom0) be 64-bit; guests may be either 32-bit or 64-bit. Building such a system requires a mixed 64/32-bit installation and packages from the Community repository; the host uses a network bridge and a modified entry in the bootloader configuration file (for example, grub.cfg). These notes assume an installation using systemd, as is the default for a new installation of Arch. For these reasons, you may prefer to make a fresh installation of Arch on which to build and install Xen.

Building Xen

Building and installing Xen significantly modifies your system. Xen is an established program, but Xen 4.2 is extremely new; consider Xen 4.2 on an Arch system to be untested. Consider yourself an alpha tester and make a throw-away Arch system for the Xen installation.

It is imperative to back up, and highly recommended to make a fresh installation of Arch on which to install Xen.

The build process installs additional source from git, so a working internet connection is required.

Edit /etc/pacman.conf to uncomment the entries for the multilib and community repositories (three lines each). Prepare for and perform a full system upgrade (pacman -Syu). Install the packages listed under 'Required packages for building Xen' below. Download the Xen Hypervisor 4.2 tarball from http://xen.org/products/downloads.html and unpack it to a suitable location (tar xjf <path/to/tarball> -C <location>). The Xen documentation recommends building Xen as root.
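The pacman.conf edit can also be scripted. The sketch below operates on a stand-in file so it can be run safely; on a real system point the sed at /etc/pacman.conf (and repeat for [community]), then verify the result by hand:

```shell
# Hedged sketch: uncomment a commented-out repository section
# (section header plus its Include line) in a copy of pacman.conf.
conf=$(mktemp)
cat > "$conf" <<'EOF'
#[multilib]
#Include = /etc/pacman.d/mirrorlist
EOF
# Strip the leading '#' from the [multilib] header through its Include line.
sed -i '/^#\[multilib\]/,/^#Include/ s/^#//' "$conf"
cat "$conf"
```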

# cd xen-4.2.0
# PYTHON=/usr/bin/python2
# export PYTHON
# ./configure
# make world

# cd dist
# chmod -R -s install/
# rm install/etc/init.d/xend
# mv install/etc/init.d install/etc/conf.d

If installing to another Arch system, make a tarball and copy it over:

# cd ..
# tar cjf ~/xen-dist-4.2.bz2 dist/

Copy the tarball to the other installation, boot into it and unpack with 'tar xjf xen-dist-4.2.bz2',
then install the packages listed under 'Required packages for Xen host'.

Now change to the 'dist' directory and install

# cd dist
# ./install.sh

Create a systemd service file

Create a file /etc/systemd/system/xencommons.service containing:

[Unit]
Description=Xen startup script
[Service]
ExecStart=/etc/conf.d/xencommons
[Install]
WantedBy=multi-user.target

Then enable the service:

# systemctl enable xencommons.service
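Since the xencommons file moved into /etc/conf.d is an init-style script that expects a start/stop argument, a slightly more explicit unit may work better (a sketch, not taken from the Xen documentation; adjust the paths if yours differ):

```
[Unit]
Description=Xen startup script
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/etc/conf.d/xencommons start
ExecStop=/etc/conf.d/xencommons stop

[Install]
WantedBy=multi-user.target
```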

Set a menuentry as described in Bootloader Configuration below.

Reboot into the new Xen system and check that all is well:

# xl list

Name                                        ID   Mem VCPUs	State	Time(s)
Domain-0                                     0  1024     2     r-----       6.1

Required packages for building Xen

base-devel zlib lzo2 python2 ncurses openssl libx11 yajl
libaio glib2 bridge-utils iproute gettext
dev86 bin86 iasl markdown git wget

optional packages:  ocaml ocaml-findlib

Required packages for Xen host

bridge-utils lzo2 bluez vde2 sdl libaio

Bootloader Configuration

The menuentry for a Xen system loads the Xen hypervisor (xen.gz) before starting the main host's kernel.

grub2

Example non-Xen menuentry for LVM

menuentry 'Arch ' {
  insmod part_gpt
  insmod lvm
  insmod ext2
  set root='lvm/vg0-arch'
  linux /boot/vmlinuz-linux root=/dev/mapper/vg0-arch ro init=/usr/lib/systemd/systemd quiet
  initrd /boot/initramfs-linux.img
}

The menuentry to boot the same Arch system after Xen has been installed:

menuentry 'Arch Xen 4.2' {
  insmod lvm
  insmod part_gpt
  insmod ext2
  set root='(lvm/vg0-arch)'
  multiboot       /boot/xen.gz placeholder dom0_mem=1024M
  module  /boot/vmlinuz-linux placeholder root=/dev/mapper/vg0-arch ro init=/usr/lib/systemd/systemd quiet
  module  /boot/initramfs-linux.img
}

Example for a physical partition

menuentry "Arch Linux(XEN)" {
    set root=(hd0,X)
    multiboot /boot/xen.gz dom0_mem=1024M
    module /boot/vmlinuz-linux-xen-dom0 root=/dev/sdaY ro
    module /boot/initramfs-linux-xen-dom0.img
}

Set up Networking

Network Bridge

Xen expects a bridge connection named xenbr0 to have been configured. Using DHCP throughout simplifies things while we get everything working.

# cd /etc/network.d
# cp examples/bridge xen-bridge

make the following changes to xen-bridge:

INTERFACE="xenbr0"
BRIDGE_INTERFACE="eth0"
DESCRIPTION="Xen bridge connection"

Assuming your existing eth0 connection is called eth0-dhcp, edit /etc/conf.d/netcfg:

NETWORKS=(eth0-dhcp xen-bridge)

Restart the network:

# systemctl restart netcfg.service

When the prompt returns, check that all is well:

# ip addr show
# brctl show

Creating Guest Domains (domU)

Creating Paravirtualized (PV) Guests

The general procedure is: perform a normal or minimal installation of the distro that will become a guest; copy its kernel/initrd to a directory on the host; modify its /etc/fstab to use the virtual disk; create a config file for xl.

Example for Debian squeeze

Install Debian 6.0 (do not bother with a graphical interface; install as little as possible). N.B. Squeeze has symlinks (vmlinuz and initrd.img) in its root directory pointing to the current kernel, so check that you have copied a real kernel and not just a link!

# mkdir /tmp/squeeze
# mkdir -p /var/lib/xen/images/squeeze
# mount -t ext4 /path/to/squeeze /tmp/squeeze/
# cp /tmp/squeeze/vmlinuz /tmp/squeeze/initrd.img /var/lib/xen/images/squeeze

Edit /tmp/squeeze/etc/fstab and change its root entry to begin with /dev/xvda1:

/dev/xvda1 / ext4 noatime,nodiratime,errors=remount-ro 0 1

Copy the example configuration file:

# cp /etc/xen/xlexample.pvlinux /etc/xen/pv-squeeze.cfg

Edit /etc/xen/pv-squeeze.cfg, making the following changes:

kernel=/var/lib/xen/images/squeeze/vmlinuz
ramdisk=/var/lib/xen/images/squeeze/initrd.img
disk = [ '/dev/path-to-your-squeeze-volume-or-partition,raw,xvda1,rw' ]
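For reference, a minimal complete xl configuration might look like the following; everything apart from the three lines above is an illustrative assumption (see the comments in xlexample.pvlinux for the full option list):

```
name    = "pv-squeeze"
kernel  = "/var/lib/xen/images/squeeze/vmlinuz"
ramdisk = "/var/lib/xen/images/squeeze/initrd.img"
extra   = "root=/dev/xvda1 ro"
memory  = 512
vcpus   = 2
vif     = [ 'bridge=xenbr0' ]
disk    = [ '/dev/path-to-your-squeeze-volume-or-partition,raw,xvda1,rw' ]
```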

Special requirements for the Arch kernel

The default Arch initramfs images lack essential Xen modules. In the guest installation, we need to add the following to /etc/mkinitcpio.conf:

MODULES="xen-blkfront xen-fbfront xen-netfront xen-kbdfront"

and then rebuild initramfs-linux.img.

# mkinitcpio -p linux

Running a Guest

Using the Debian Squeeze example, start the guest domU and a console

# xl create /etc/xen/pv-squeeze.cfg

Check all is well:

# xl list

Name            ID   Mem VCPUs	    State	Time(s)
Domain-0         0  1024     2     r-----      26.2
pv-squeeze       1   123     2     -b----       1.5

Start a console:

# xl console pv-squeeze

(example output)

[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 2.6.32-5-xen-amd64 (Debian 2.6.32-46) (dannf@debian.org) (gcc version 4.3.5 (Debian 4.3.5-4) ) #1 SMP Sun Sep 23 13:49:30 UTC 2012
[    0.000000] Command line: root=/dev/xvda1
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   AMD AuthenticAMD
[    0.000000]   Centaur CentaurHauls
...

Useful xl command examples

# xl top
# xl list
# xl shutdown pv-squeeze
# xl destroy pv-squeeze

Resources