Xen

Revision as of 22:32, 22 October 2012

This document explains how to set up Xen within Arch Linux.

What is Xen?

According to the Xen development team:

"The Xen hypervisor, the powerful open source industry standard for virtualization, offers a powerful, efficient, and secure feature set for virtualization of x86, x86_64, IA64, PowerPC, and other CPU architectures. It supports a wide range of guest operating systems including Windows®, Linux®, Solaris®, and various versions of the BSD operating systems."

The Xen hypervisor is a thin layer of software which emulates a computer architecture. It is started by the boot loader and allows several operating systems to run simultaneously on top of it. Once the Xen hypervisor is loaded, it starts the "Dom0" (for "domain 0"), or privileged domain, which in our case runs a Linux kernel (other possible Dom0 operating systems are NetBSD and OpenSolaris). The hardware must, of course, be supported by this kernel to run Xen. Once the Dom0 has started, one or more "DomU" domains can be started and controlled from Dom0.

Types of Virtualization Available with Xen

Paravirtual (PV)

Paravirtualized guests require a kernel with Xen guest support built in. This is the default for all recent Linux kernels and some other Unix-like systems. Paravirtualized domUs usually run faster because they do not have to run on emulated hardware.
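One way to check whether a running kernel includes Xen guest support (this assumes the kernel exposes its configuration at /proc/config.gz, i.e. was built with CONFIG_IKCONFIG_PROC) is:

zgrep CONFIG_XEN /proc/config.gz

A line such as CONFIG_XEN=y indicates that PV guest support is built in; CONFIG_XEN_DOM0=y additionally indicates dom0 support.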

Hardware Virtual (HVM)

To run hardware virtualized (HVM) domUs, the host system's CPU must include either Intel VT-x or AMD-V (SVM) virtualization support. To verify this, run the following command on the host system:

grep -E "(vmx|svm)" --color=always /proc/cpuinfo

If the above command does not produce output, then hardware virtualization support is unavailable and your hardware is unable to run Xen HVM guests. It is also possible that the host CPU supports one of these features, but that the functionality is disabled by default in the system BIOS. To verify this, access the host system's BIOS configuration menu during the boot process and look for an option related to virtualization support. If such an option exists and is disabled, then enable it, boot the system and repeat the above command.
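As an alternative check (assuming util-linux's lscpu is available), the virtualization extensions are also reported by lscpu; a Virtualization line naming VT-x or AMD-V indicates HVM-capable hardware:

lscpu | grep -i virtualization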

Paravirtual on Hardware (PV on HVM)

There is a third mode in which a hardware virtual machine (HVM) guest uses paravirtualized drivers for disk and network, combining HVM compatibility with better I/O performance.

Recommended Practices

Allocating a fixed amount of memory to dom0 is recommended when using Xen. Also, if you are running I/O-intensive guests, it might be a good idea to dedicate (pin) a CPU core for dom0 use only. See the section "Can I dedicate a cpu core (or cores) only for dom0?" on the external XenCommonProblems wiki page (http://wiki.xensource.com/xenwiki/XenCommonProblems) for more information.
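As a sketch of how both recommendations map onto the Xen boot line (these are standard Xen hypervisor command-line options; the values here are only examples), the xen.gz entry in the bootloader can be extended like this:

multiboot /boot/xen.gz dom0_mem=1024M dom0_max_vcpus=1 dom0_vcpus_pin

dom0_mem fixes the amount of memory given to dom0, while dom0_max_vcpus together with dom0_vcpus_pin restricts dom0 to a dedicated, pinned CPU.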

Obtaining Xen

The Xen package for Arch Linux is currently unmaintained, so for the moment Xen must be built from source.

Building and Installing Xen Hypervisor and Dom0 Host from Source

Xen recommends that a Xen host (dom0) be 64-bit; guests may be either 32-bit or 64-bit. Building such a system requires a mixed 64/32-bit installation and packages from the Community repository; the host uses a network bridge and a modified entry in the bootloader configuration file (for example, grub.cfg). These notes assume a systemd-based installation, as is the default for a new installation of Arch. For these reasons, you may prefer to make a fresh installation of Arch on which to build and install Xen.

Building Xen

The build process downloads additional source from git, so a working internet connection is required.

Edit /etc/pacman.conf to uncomment the entries for the multilib and community repositories (three lines each; see the example stanzas below).
Prepare for and perform a full system upgrade (pacman -Syu).
Install the packages listed under 'Required packages for building Xen'.
Download the Xen Hypervisor 4.2 tarball from http://xen.org/products/downloads.html.
Unpack the tarball to a suitable location (cd to it, then tar xjf <path/to/tarball>).
The Xen documentation recommends building Xen as root.
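For reference, once uncommented the repository stanzas in /etc/pacman.conf look roughly like this (a sketch; the Include paths are the stock ones shipped with pacman):

[community]
Include = /etc/pacman.d/mirrorlist

[multilib]
Include = /etc/pacman.d/mirrorlist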

# cd xen-4.2.0
# PYTHON=/usr/bin/python2
# export PYTHON
# ./configure
# make world

# cd dist
# chmod -R -s install/
# rm install/etc/init.d/xend
# mv install/etc/init.d install/etc/conf.d

If installing to another Arch installation:

install the packages listed under 'Required packages for Xen host'
# cd ..
# tar cjf ~/xen-dist-4.2.bz2 dist/

Copy the tarball to the target installation, boot into it, and unpack it with 'tar xjf xen-dist-4.2.bz2'.

# cd dist

install the packages listed under 'Required packages for Xen host'

# ./install.sh

Create a systemd service file

Create a file /etc/systemd/system/xencommons.service containing:

[Unit]
Description=Xen startup script
[Service]
ExecStart=/etc/conf.d/xencommons
[Install]
WantedBy=multi-user.target
# systemctl enable xencommons.service
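After creating or editing the unit file, make systemd pick it up and start it (standard systemctl usage):

# systemctl daemon-reload
# systemctl start xencommons.service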

Set up a menuentry as described in Bootloader Configuration.

Reboot into the new Xen system and check that all is well:

# xl list

Name                 ID   Mem  VCPUs  State   Time(s)
Domain-0              0  1024      2  r-----      6.1

Required packages for building Xen

base-devel zlib lzo2 python2 ncurses openssl libx11 yajl
libaio glib2 bridge-utils iproute gettext
dev86 bin86 iasl markdown git wget

optional packages:  ocaml ocaml-findlib
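Assuming all of these are available in the enabled repositories (a few, such as dev86 or bin86, may instead need to come from the AUR), they can be installed in one step with pacman; --needed skips anything already installed:

# pacman -S --needed base-devel zlib lzo2 python2 ncurses openssl libx11 yajl libaio glib2 bridge-utils iproute gettext dev86 bin86 iasl markdown git wget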

Required packages for Xen host

bridge-utils lzo2 bluez vde2 sdl libaio

Bootloader Configuration

The menuentry for a Xen system boots the Xen hypervisor first, which then starts the main host's (dom0) kernel.

grub2

Example non-Xen menuentry for LVM:

menuentry 'Arch ' {
  insmod part_gpt
  insmod lvm
  insmod ext2
  set root='lvm/vg0-arch'
  linux /boot/vmlinuz-linux root=/dev/mapper/vg0-arch ro init=/usr/lib/systemd/systemd quiet
  initrd /boot/initramfs-linux.img
}

The menuentry to boot the same Arch system after Xen has been installed:

menuentry 'Arch Xen 4.2' {
  insmod lvm
  insmod part_gpt
  insmod ext2
  set root='(lvm/vg0-arch)'
  multiboot       /boot/xen.gz placeholder dom0_mem=1024M
  module  /boot/vmlinuz-linux placeholder root=/dev/mapper/vg0-arch ro init=/usr/lib/systemd/systemd quiet
  module  /boot/initramfs-linux.img
}

Example for a physical partition

menuentry "Arch Linux(XEN)" {
    set root=(hd0,X)
    multiboot /boot/xen.gz dom0_mem=1024M
    module /boot/vmlinuz-linux-xen-dom0 root=/dev/sdaY ro
    module /boot/initramfs-linux-xen-dom0.img
}
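Note that hand-written entries in /boot/grub/grub.cfg are lost whenever the file is regenerated. A common approach (standard GRUB2 practice, not specific to Xen) is to put such a menuentry in /etc/grub.d/40_custom and then rebuild the configuration:

# grub-mkconfig -o /boot/grub/grub.cfg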

Set up Networking

Network Bridge

Xen expects a bridge connection named xenbr0 to have been configured. Using DHCP throughout simplifies things while we get everything working.

# cd /etc/network.d
# cp examples/bridge xen-bridge

Make the following changes to xen-bridge:

INTERFACE="xenbr0"
BRIDGE_INTERFACE="eth0"
DESCRIPTION="Xen bridge connection"

Assuming your existing eth0 connection is called eth0-dhcp, edit /etc/conf.d/netconfig:

NETWORKS=(eth0-dhcp xen-bridge)

Restart the network:

systemctl restart netcfg.service

When the prompt returns, check that all is well:

ip addr show
brctl show

Creating Guest Domains (domU)

Creating Paravirtualized (PV) Guests

The general procedure is:

perform a normal or minimal installation of the distro that will become a guest
copy its kernel/initrd to a directory on the host
modify its /etc/fstab to use the virtual disk
create a config file for xl

Example for Debian squeeze

Install Debian 6.0 (do not bother with a graphical interface; install as little as possible).

N.B. Squeeze has symlinks (vmlinuz and initrd.img) in its root directory pointing to the current kernel, so check that you have copied a real kernel, and not just a link!

# mkdir /tmp/squeeze
# mkdir -p /var/lib/xen/images/squeeze
# mount -t ext4 /path/to/squeeze /tmp/squeeze/
# cp /tmp/squeeze/vmlinuz /tmp/squeeze/initrd.img /var/lib/xen/images/squeeze

Edit /tmp/squeeze/etc/fstab and change its root entry to use /dev/xvda1:

/dev/xvda1 / ext4 noatime,nodiratime,errors=remount-ro 0 1

Copy the example xl configuration file:

# cp /etc/xen/xlexample.pvlinux /etc/xen/pv-squeeze.cfg

Edit /etc/xen/pv-squeeze.cfg with the following changes:

kernel=/var/lib/xen/images/squeeze/vmlinuz
ramdisk=/var/lib/xen/images/squeeze/initrd.img
disk = [ '/dev/path-to-your-squeeze-volume-or-partition,raw,xvda1,rw' ]
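Put together, a minimal pv-squeeze.cfg might look like the following sketch. Only the kernel, ramdisk and disk lines come from the steps above; the name, memory, vcpus, vif and root values are illustrative assumptions to be adjusted for your setup:

name = "pv-squeeze"
kernel = "/var/lib/xen/images/squeeze/vmlinuz"
ramdisk = "/var/lib/xen/images/squeeze/initrd.img"
memory = 128
vcpus = 2
vif = [ 'bridge=xenbr0' ]
disk = [ '/dev/path-to-your-squeeze-volume-or-partition,raw,xvda1,rw' ]
root = "/dev/xvda1 ro"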

Special requirements for the Arch kernel

The default Arch initramfs images lack essential Xen modules. In the guest install, add the following to the MODULES line in /etc/mkinitcpio.conf:

MODULES="xen-blkfront xen-fbfront xen-netfront xen-kbdfront"

and then rebuild initramfs-linux.img.

# mkinitcpio -p linux

Running a Guest

Using the Debian Squeeze example, start the guest domU:

# xl create /etc/xen/pv-squeeze.cfg
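If you want the console attached from the start, xl create accepts -c to connect to the new guest's console immediately (equivalent to following up with xl console):

# xl create -c /etc/xen/pv-squeeze.cfg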

Check all is well:

# xl list

Name             ID   Mem  VCPUs  State   Time(s)
Domain-0          0  1024      2  r-----     26.2
pv-squeeze        1   123      2  -b----      1.5

Start a console:

# xl console pv-squeeze

(example output)

[    0.000000] Initializing cgroup subsys cpuset
[    0.000000] Initializing cgroup subsys cpu
[    0.000000] Linux version 2.6.32-5-xen-amd64 (Debian 2.6.32-46) (dannf@debian.org) (gcc version 4.3.5 (Debian 4.3.5-4) ) #1 SMP Sun Sep 23 13:49:30 UTC 2012
[    0.000000] Command line: root=/dev/xvda1
[    0.000000] KERNEL supported cpus:
[    0.000000]   Intel GenuineIntel
[    0.000000]   AMD AuthenticAMD
[    0.000000]   Centaur CentaurHauls
...

Useful xl command examples

# xl top
# xl list
# xl shutdown pv-squeeze
# xl destroy pv-squeeze

Resources

Xen's homepage: http://www.xen.org/
The Xen Wiki: http://wiki.xen.org/wiki/Main_Page
Xen kernel patches: http://code.google.com/p/gentoo-xen-kernel/
Virtuatopia guide to get Windows Server 2008 working with Xen: http://www.virtuatopia.com/index.php/Virtualizing_Windows_Server_2008_with_Xen