Xen

From Xen Overview:

Xen is an open-source type-1 or baremetal hypervisor, which makes it possible to run many instances of an operating system or indeed different operating systems in parallel on a single machine (or host). Xen is the only type-1 hypervisor that is available as open source. Xen is used as the basis for a number of different commercial and open source applications, such as: server virtualization, Infrastructure as a Service (IaaS), desktop virtualization, security applications, embedded and hardware appliances.
Warning: Do not run other virtualization software such as VirtualBox when running the Xen hypervisor, as it might hang your system. See this bug report (wontfix).

Introduction

The Xen hypervisor is a thin layer of software which emulates a computer architecture allowing multiple operating systems to run simultaneously. The hypervisor is started by the boot loader of the computer it is installed on. Once the hypervisor is loaded, it starts the dom0 (short for "domain 0", sometimes called the host or privileged domain) which in our case runs Arch Linux. Once the dom0 has started, one or more domU (short for user domains, sometimes called VMs or guests) can be started and controlled from the dom0. Xen supports both paravirtualized (PV) and hardware virtualized (HVM) domU. See Xen.org for a full overview.

System requirements

The Xen hypervisor requires kernel level support which is included in recent Linux kernels and is built into the linux and linux-lts Arch kernel packages. To run HVM domU, the physical hardware must have either Intel VT-x or AMD-V (SVM) virtualization support. In order to verify this, run the following command when the Xen hypervisor is not running:

$ grep -E "(vmx|svm)" --color=always /proc/cpuinfo

If the above command does not produce output, then hardware virtualization support is unavailable and your hardware is unable to run HVM domU (or you are already running the Xen hypervisor). If you believe the CPU supports one of these features, access the host system's BIOS configuration menu during the boot process and check whether options related to virtualization support have been disabled. If such an option exists and is disabled, enable it, boot the system and repeat the above command. The Xen hypervisor also supports PCI passthrough, where PCI devices can be passed directly to the domU even in the absence of dom0 support for the device. In order to use PCI passthrough, the CPU must support IOMMU/VT-d.
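
As a quick check for IOMMU/VT-d support (a hedged sketch; the exact messages vary by platform, and the IOMMU may also need to be enabled in the firmware), look for DMAR/IOMMU lines in the kernel log:

$ dmesg | grep -e DMAR -e IOMMU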

Configuring dom0

The Xen hypervisor relies on a full install of the base operating system. Before attempting to install the Xen hypervisor, the host machine should have a fully operational and up-to-date install of Arch Linux. This installation can be a minimal install with only the base package and does not require a Desktop environment or even Xorg. If you are building a new host from scratch, see the Installation guide for instructions on installing Arch Linux. The following configuration steps are required to convert a standard installation into a working dom0 running on top of the Xen hypervisor:

  1. Installation of the Xen hypervisor
  2. Modification of the bootloader to boot the Xen hypervisor
  3. Creation of a network bridge
  4. Installation of Xen systemd services

Installation of the Xen hypervisor

To install the Xen hypervisor, install either the current stable xen package or the bleeding-edge unstable xen-git package (whose package link is currently broken), both available in the Arch User Repository. Both packages provide the Xen hypervisor, the current xl interface and all configuration and support files, including systemd services. The multilib repository needs to be enabled and the multilib-devel package group installed to compile Xen. Install the xen-docs package from the Arch User Repository for the man pages and documentation.
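
A sketch of the usual manual AUR build procedure (assuming the multilib repository has already been enabled in /etc/pacman.conf; an AUR helper may be used instead):

# pacman -S --needed base-devel multilib-devel
$ git clone https://aur.archlinux.org/xen.git
$ cd xen
$ makepkg -si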

With UEFI support

It is possible to boot the Xen hypervisor through the bare UEFI system on a modern computer, but this requires you to first recompile binutils to add support for x86_64-pep emulation. Following the Arch way of doing things, you would use the Arch Build System and add --enable-targets=x86_64-pep to the build options of the binutils PKGBUILD file:

--disable-werror --disable-gdb --enable-targets=x86_64-pep
Note: The factual accuracy of the following is disputed, as no bug report is linked.

This will not work with the newest version of binutils; you will need to downgrade to an older version from the Arch SVN repository:

$ svn checkout --depth empty svn://svn.archlinux.org/packages
$ cd packages
$ svn update -r 215066 binutils
Then compile and install. See https://nims11.wordpress.com/2013/02/17/downgrading-packages-in-arch-linux-the-worst-case-scenario/ for details of the procedure.
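
A sketch of that compile-and-install step (assuming the SVN checkout above, and that --enable-targets=x86_64-pep has been added to the PKGBUILD as described):

$ cd binutils/trunk
$ makepkg -si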

The next time binutils gets updated on your system it will be overwritten with the official version again. However, you only need this change to (re-)compile the UEFI aware Xen hypervisor, it is not needed at either boot or run time.

Now, when you compile Xen with your x86_64-pep aware binutils, a UEFI kernel will be built and installed by default. It is located at /usr/lib/efi/xen-?.?.?.efi, where the "?" characters represent the version digits. The other files you find that also begin with "xen" are simply symlinks back to the real file and can be ignored. However, the EFI binary needs to be manually copied to /boot, e.g.:

# cp /usr/lib/efi/xen-4.4.0.efi /boot

Modification of the bootloader

Note: Other boot loaders could also be covered here, at least common ones like systemd-boot.

Warning: Never assume your system will boot after changes to the boot system; this might be the most common mistake made by new and experienced users alike. Make sure you have an alternative way to boot your system, such as a USB stick or other live media, BEFORE you make changes to your boot system.

The boot loader must be modified to load a special Xen kernel (xen.gz or in the case of UEFI xen.efi) which is then used to boot the normal kernel. To do this a new bootloader entry is needed.

UEFI

There are several ways UEFI can be involved in booting Xen, but this section covers the simplest way: booting Xen with the help of the EFI stub.

Make sure that you have compiled Xen with UEFI support enabled according to #With UEFI support.

It is possible to boot a kernel from UEFI just by placing it on the EFI partition, but since Xen at least needs to know which kernel should be booted as dom0, a minimal configuration file is required. Create or edit a /boot/xen.cfg file according to system requirements, for example:

/boot/xen.cfg
[global]
default=xen

[xen]
options=console=vga loglvl=all noreboot
kernel=vmlinuz-linux root=/dev/sda2 rw ignore_loglevel #earlyprintk=xen
ramdisk=initramfs-linux.img

It might be necessary to use efibootmgr to set the boot order and other parameters. If booting fails, drop to the built-in UEFI shell and try to launch Xen manually. For example:

Shell> fs0:
FS0:\> xen-4.4.0.efi

GRUB

For GRUB users, the Xen package provides the /etc/grub.d/09_xen generator file. The file /etc/xen/grub.conf can be edited to customize the Xen boot commands. For example, to allocate 512 MiB of RAM to dom0 at boot, modify /etc/xen/grub.conf by replacing the line:

#XEN_HYPERVISOR_CMDLINE="xsave=1"

with

XEN_HYPERVISOR_CMDLINE="dom0_mem=512M xsave=1"

After customizing the options, update the bootloader configuration with the following command:

# grub-mkconfig -o /boot/grub/grub.cfg

More information on using the GRUB bootloader is available at GRUB.

Syslinux

For syslinux users, add a stanza like this to your /boot/syslinux/syslinux.cfg:

LABEL xen
    MENU LABEL Xen
    KERNEL mboot.c32
    APPEND ../xen-X.Y.Z.gz --- ../vmlinuz-linux console=tty0 root=/dev/sdaX ro --- ../initramfs-linux.img

where X.Y.Z is your xen version and /dev/sdaX is your root partition.

This also requires mboot.c32 to be in the same directory as syslinux.cfg. If you do not have mboot.c32 in /boot/syslinux, copy it from:

# cp /usr/lib/syslinux/bios/mboot.c32 /boot/syslinux

Creation of a network bridge

Xen requires that network communications between domU and the dom0 (and beyond) be set up manually. The use of both DHCP and static addressing is possible, and the choice should be determined by the network topology. Complex setups are possible, see the Networking article on the Xen wiki for details and /etc/xen/scripts for scripts for various networking configurations. A basic bridged network, in which a virtual switch is created in dom0 that every domU is attached to, can be set up by creating a network bridge with the expected name xenbr0.

See Network bridge#Creating a bridge for details.
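
As one possible approach, here is a minimal sketch using systemd-networkd, assuming the physical interface is named eth0 and that the bridge obtains its address via DHCP (see Network bridge and the systemd-networkd documentation for alternatives):

/etc/systemd/network/xenbr0.netdev
[NetDev]
Name=xenbr0
Kind=bridge

/etc/systemd/network/xenbr0.network
[Match]
Name=xenbr0

[Network]
DHCP=ipv4

/etc/systemd/network/eth0.network
[Match]
Name=eth0

[Network]
Bridge=xenbr0

Enable and start systemd-networkd.service afterwards.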

Creating bridge with Network Manager

Note: This section duplicates Network bridge#With NetworkManager and is a candidate for merging into that page.

GNOME's Network Manager can sometimes be troublesome. If the bridge creation steps outlined in the Network bridge article are unclear or do not work, then the following steps may work.

Open the Network Settings and disable the interface you wish to use in your bridge (e.g. enp5s0). Set the interface to off and uncheck "connect automatically."

Create a new bridge connection profile by clicking on the "+" symbol in the bottom left of the network settings. Optionally, run:

# nm-connection-editor

to bring up the window immediately. Once the window opens, select Bridge.

Click "Add" next to the "Bridged Connections" and select the interface you wished to use in your bridge (ex. Ethernet). Select the device mac address that corresponds to the interface you intend to use and save the settings

If your bridge is going to receive an IP address via DHCP, leave the IPv4/IPv6 sections as they are. If DHCP is not running for this particular connection, make sure to give your bridge an IP address; all connections will fail if an IP address is not assigned to the bridge. If you forget to add the IP address when you first create the bridge, it can always be edited later.

Now, as root, run:

# nmcli con show

You should see a connection that matches the name of the bridge you just created. Highlight and copy the UUID on that connection, and then run (again as root):

# nmcli con up <UUID OF CONNECTION>

A new connection should appear under the network settings. It may take 30 seconds to a minute. To confirm that it is up and running, run:

# brctl show

to show a list of active bridges.

Reboot. If everything works properly after a reboot (i.e. the bridge starts automatically), then you are all set.

Optionally, in your network settings, remove the connection profile on your bridge interface that does NOT connect to the bridge. This just keeps things from being confusing later on.

Installation of Xen systemd services

The Xen dom0 requires the xenstored, xenconsoled, xendomains and xen-init-dom0 services to be started and possibly enabled.
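
For example, to enable the services listed above so that they are started at boot:

# systemctl enable xenstored.service
# systemctl enable xenconsoled.service
# systemctl enable xendomains.service
# systemctl enable xen-init-dom0.service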

Confirming successful installation

Reboot your dom0 host and ensure that the Xen kernel boots correctly and that all settings survive a reboot. A properly set up dom0 should report the following when you run xl list as root:

# xl list
Name                                        ID   Mem VCPUs	State	Time(s)
Domain-0                                     0   511     2     r-----   41652.9

Of course, the Mem, VCPUs and Time columns will be different depending on machine configuration and uptime. The important thing is that dom0 is listed.

In addition to the required steps above, see best practices for running Xen, which includes information on allocating a fixed amount of memory to dom0 and how to dedicate (pin) a CPU core for dom0 use. It may also be beneficial to create a xenfs filesystem mount point by including the following in /etc/fstab:

none /proc/xen xenfs defaults 0 0

Using Xen

Xen supports both paravirtualized (PV) and hardware virtualized (HVM) domU. In the following sections the steps for creating HVM and PV domU running Arch Linux are described. In general, the steps for creating an HVM domU are independent of the domU OS and HVM domU support a wide range of operating systems including Microsoft Windows. To use HVM domU the dom0 hardware must have virtualization support. Paravirtualized domU do not require virtualization support, but instead require modifications to the guest operating system making the installation procedure different for each operating system (see the Guest Install page of the Xen wiki for links to instructions). Some operating systems (e.g., Microsoft Windows) cannot be installed as a PV domU. In general, HVM domU often run slower than PV domU since HVMs run on emulated hardware. While there are some common steps involved in setting up PV and HVM domU, the processes are substantially different. In both cases, for each domU, a "hard disk" will need to be created and a configuration file needs to be written. Additionally, for installation each domU will need access to a copy of the installation ISO stored on the dom0 (see the Download Page to obtain the Arch Linux ISO).

Create a domU "hard disk"

Xen supports a number of different types of "hard disks" including Logical Volumes, raw partitions, and image files. To create a sparse file that will grow to a maximum of 10 GiB, called domU.img, use:

$ truncate -s 10G domU.img

If file IO speed is of greater importance than domain portability, using Logical Volumes or raw partitions may be a better choice.
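
For example, a 10 GiB logical volume for a domU could be created on an existing volume group (here assumed to be named vg0) with:

# lvcreate -L 10G -n domU_disk vg0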

Xen may present any partition or disk available to the host machine to a domain as either a partition or a disk. This means that, for example, an LVM partition on the host can appear as a hard drive (and hold multiple partitions) to a domain. Note that making sub-partitions on a partition will make accessing those partitions on the host machine more difficult. See the kpartx man page for information on how to map out partitions within a partition.
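
For example, if a domU's virtual disk lives in a logical volume and contains its own partition table, the partitions inside it can be mapped on the host (a sketch; the volume name is an assumption) with:

# kpartx -av /dev/vg0/domU_disk

and the mappings removed again with:

# kpartx -dv /dev/vg0/domU_disk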

Create a domU configuration

Each domU requires a separate configuration file that is used to create the virtual machine. Full details about the configuration files can be found at the Xen Wiki or the xl.cfg man page. Both HVM and PV domU share some components of the configuration file. These include

name = "domU"
memory = 256
disk = [ "file:/path/to/ISO,sdb,r", "phy:/path/to/partition,sda1,w" ]
vif = [ 'mac=00:16:3e:XX:XX:XX,bridge=xenbr0' ]

The name= is the name by which the xl tools manage the domU and needs to be unique across all domU. The disk= includes information about both the installation media (file:) and the partition created for the domU (phy:). If an image file is being used instead of a physical partition, the phy: needs to be changed to file:. The vif= defines a network controller. The 00:16:3e MAC block is reserved for Xen domains, so the last three octets of the mac= must be randomly filled in (hex values 0-9 and a-f only).
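
One way to generate a random MAC address in the Xen 00:16:3e block (a bash sketch):

$ printf '00:16:3e:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))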

Managing a domU

If a domU should be started on boot, create a symlink to the configuration file in /etc/xen/auto and ensure the xendomains service is set up correctly. Some useful commands for managing domU are:

# xl top
# xl list
# xl console domUname
# xl shutdown domUname
# xl destroy domUname
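
For example, to have a domU whose configuration file is /etc/xen/archdomu.cfg (the file name used later in this article) start on boot, create the symlink with:

# ln -s /etc/xen/archdomu.cfg /etc/xen/auto/archdomu.cfg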

Configuring a hardware virtualized (HVM) Arch domU

In order to use HVM domU, install the mesa-libgl and bluez-libs packages.

A minimal configuration file for a HVM Arch domU is:

name = 'HVM_domU'
builder = 'hvm'
memory = 256
vcpus = 2
disk = [ 'phy:/dev/mapper/vg0-hvm_arch,xvda,w', 'file:/path/to/ISO,hdc:cdrom,r' ]
vif = [ 'mac=00:16:3e:00:00:00,bridge=xenbr0' ]
vnc = 1
vnclisten = '0.0.0.0'
vncdisplay = 1

Since HVM machines do not have a console, they can only be connected to via a VNC client such as vncviewer. The configuration file above allows unauthenticated remote access to the domU vncserver and is not suitable for unsecured networks. The vncserver will be available on port 590X of the dom0, where X is the value of vncdisplay. The domU can be created with:

# xl create /path/to/config/file

and its status can be checked with

# xl list

Once the domU is created, connect to it via the vncserver and install Arch Linux as described in the Installation guide.
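
For example, with vncdisplay = 1 as in the configuration above, you could connect from another machine with a VNC client (dom0-ip is a placeholder for the dom0's address; the exact client syntax may vary):

$ vncviewer dom0-ip:1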

Configuring a paravirtualized (PV) Arch domU

A minimal configuration file for a PV Arch domU is:

name = "PV_domU"
kernel = "/mnt/arch/boot/x86_64/vmlinuz"
ramdisk = "/mnt/arch/boot/x86_64/archiso.img"
extra = "archisobasedir=arch archisolabel=ARCH_201301"
memory = 256
disk = [ "phy:/path/to/partition,sda1,w", "file:/path/to/ISO,sdb,r" ]
vif = [ 'mac=00:16:3e:XX:XX:XX,bridge=xenbr0' ]

This file needs to be tweaked for your specific use. Most importantly, the archisolabel=ARCH_201301 line must be edited to use the release year/month of the ISO being used. If you want to install 32-bit Arch, change the kernel and ramdisk paths from x86_64 to i686.

Before creating the domU, the installation ISO must be loop-mounted. To do this, ensure the directory /mnt exists and is empty, then run the following command (being sure to fill in the correct ISO path):

# mount -o loop /path/to/iso /mnt

Once the ISO is mounted, the domU can be created with:

# xl create -c /path/to/config/file

The "-c" option will enter the domU's console when successfully created. Then you can install Arch Linux as described in the Installation guide, but with the following deviations. The block devices listed in the disks line of the cfg file will show up as /dev/xvd*. Use these devices when partitioning the domU. After installation and before the domU is rebooted, the xen-blkfront, xen-fbfront, xen-netfront, xen-kbdfront modules must be added to Mkinitcpio. Without these modules, the domU will not boot correctly. For booting, it is not necessary to install Grub. Xen has a Python-based grub emulator, so all that is needed to boot is a grub.cfg file: (It may be necessary to create the /boot/grub directory)

/boot/grub/grub.cfg
menuentry 'Arch GNU/Linux, with Linux core repo kernel' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-core repo kernel-true-__UUID__' {
        insmod gzio
        insmod part_msdos
        insmod ext2
        set root='hd0,msdos1'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1  __UUID__
        else
          search --no-floppy --fs-uuid --set=root __UUID__
        fi
        echo    'Loading Linux core repo kernel ...'
        linux   /boot/vmlinuz-linux root=UUID=__UUID__ ro
        echo    'Loading initial ramdisk ...'
        initrd  /boot/initramfs-linux.img
}

This file must be edited to match the UUID of the root partition. From within the domU, run the following command:

# blkid

Replace all instances of __UUID__ with the real UUID of the root partition (the one that is mounted as /):

# sed -i 's/__UUID__/12345678-1234-1234-1234-123456789abcd/g' /boot/grub/grub.cfg

Shutdown the domU with the poweroff command. The console will be returned to the hypervisor when the domain is fully shut down, and the domain will no longer appear in the xl domains list. Now the ISO file may be unmounted:

# umount /mnt

The domU cfg file should now be edited. Delete the kernel =, ramdisk =, and extra = lines and replace them with the following line:

bootloader = "pygrub"

Also remove the ISO disk from the disk = line.

The Arch domU is now set up. It may be started with the same line as before:

# xl create -c /etc/xen/archdomu.cfg

Common Errors

"xl list" complains about libxl

Either you have not booted into the Xen system, or the xen modules listed in the xencommons script are not installed.

"xl create" fails

Check that the guest's kernel is located correctly and check the pv-xxx.cfg file for spelling mistakes (such as using initrd instead of ramdisk).

Arch Linux guest hangs with a ctrl-d message

Press ctrl-d until you get back to a prompt, then rebuild the guest's initramfs with the Xen modules as described in the paravirtualized (PV) Arch domU section above.
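
A sketch of that fix from inside the guest (add the Xen front-end modules to /etc/mkinitcpio.conf and rebuild the image):

/etc/mkinitcpio.conf
MODULES="xen-blkfront xen-fbfront xen-netfront xen-kbdfront"

# mkinitcpio -p linux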

Error message "failed to execute '/usr/lib/udev/socket:/org/xen/xend/udev_event' 'socket:/org/xen/xend/udev_event': No such file or directory"

This is caused by /etc/udev/rules.d/xend.rules. Xend is deprecated and not used, so it is safe to remove that file.

Resources

The homepage at xen.org: http://www.xen.org/
The wiki at xen.org: http://wiki.xen.org/wiki/Main_Page