[[es:Xen]]
[[ru:Xen]]
{{Article summary start}}
{{Article summary text|This article is about basic usage of Xen, including running Arch as both a Xen dom0 ''host'' and as a domU ''guest''.}}
{{Article summary heading|Required software}}
{{Article summary link|Xen|http://www.xen.org/}}
{{Article summary heading|Related}}
{{Article summary wiki|KVM}}
{{Article summary wiki|QEMU}}
{{Article summary wiki|VirtualBox}}
{{Article summary wiki|VMware}}
{{Article summary wiki|Moving an existing install into (or out of) a virtual machine}}
{{Article summary end}}
  
From [http://wiki.xen.org/wiki/Xen_Overview Xen Overview]:

:''Xen is an open-source type-1 or baremetal hypervisor, which makes it possible to run many instances of an operating system or indeed different operating systems in parallel on a single machine (or host). Xen is the only type-1 hypervisor that is available as open source. Xen is used as the basis for a number of different commercial and open source applications, such as: server virtualization, Infrastructure as a Service (IaaS), desktop virtualization, security applications, embedded and hardware appliances.''

==Introduction==

The Xen hypervisor is a thin layer of software which emulates a computer architecture, allowing multiple operating systems to run simultaneously. The hypervisor is started by the boot loader of the computer it is installed on. Once the hypervisor is loaded, it starts the "dom0" (short for "domain 0", sometimes called the host or privileged domain), which in our case runs Arch Linux. Once the dom0 has started, one or more "domUs" (short for user domains, sometimes called VMs or guests) can be started and controlled from the dom0. Xen supports both paravirtualized (PV) and hardware virtualized (HVM) domUs. See [http://wiki.xen.org/wiki/Xen_Overview Xen.org] for a full overview.
==System requirements==

The Xen hypervisor requires kernel level support which is included in recent Linux kernels and is built into the {{Pkg|linux}} and {{Pkg|linux-lts}} Arch kernel packages. To run HVM domUs the physical hardware must have either Intel VT-x or AMD-V (SVM) virtualization support. In order to verify this, run the following command when the Xen hypervisor is not running:

 $ grep -E "(vmx|svm)" --color=always /proc/cpuinfo

If the above command does not produce output, then hardware virtualization support is unavailable and your hardware is unable to run HVM domUs. If you believe the CPU supports one of these features, access the host system's BIOS configuration menu during the boot process and check whether options related to virtualization support have been disabled. If such an option exists and is disabled, enable it, boot the system and repeat the above command. The Xen hypervisor also supports PCI passthrough, where PCI devices can be passed directly to a domU even in the absence of dom0 support for the device. In order to use PCI passthrough the CPU must support IOMMU/VT-d.
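
A quick way to check whether the IOMMU is active is to search the kernel log after boot; this is a sketch, assuming the kernel prints the usual DMAR/IOMMU initialization messages:

 $ dmesg | grep -e DMAR -e IOMMU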
  
==Configuring dom0==

The Xen hypervisor relies on a full install of the base operating system. Before attempting to install the Xen hypervisor, the host machine should have a fully operational and up-to-date install of Arch Linux. This installation can be a minimal install with only the base package and does not require a [[Desktop Environment]] or even [[Xorg]]. If you are building a new host from scratch, see the [[Installation Guide]] for instructions on installing Arch Linux. The following configuration steps are required to convert a standard installation into a working dom0 running on top of the Xen hypervisor:

* Installation of the Xen hypervisor
* Modification of the bootloader to boot the Xen hypervisor
* Creation of a network bridge
* Installation of Xen systemd services
  
===Installation of the Xen hypervisor===

To install the Xen hypervisor, install either the current stable {{AUR|xen}} or the bleeding edge unstable {{AUR|xen-hg-unstable}} package available in the [[Arch User Repository]]. Both packages provide the Xen hypervisor, the current xl interface and all configuration and support files, including systemd services. The multilib repository needs to be enabled to install Xen (see [[Pacman#Repositories]] for details). Install the {{AUR|xen-docs}} package from the [[Arch User Repository]] for the man pages and documentation.
  
===Modification of the bootloader===

The boot loader must be modified to load a special Xen kernel (xen.gz) which is then used to boot the normal kernel. To do this a new bootloader entry is needed.

For grub users, the Xen package provides the ''/etc/grub.d/09_xen'' generator file. This file can be edited to customize the Xen boot commands. For example, to allocate 512M of RAM to dom0 at boot, modify ''/etc/grub.d/09_xen'' by replacing the line:

 XEN_HYPERVISOR_CMDLINE="xsave=1"

with

 XEN_HYPERVISOR_CMDLINE="dom0_mem=512M xsave=1"

After customizing, regenerate the grub configuration file with the following command:

 # grub-mkconfig -o /boot/grub/grub.cfg

More information is available at [[Grub]].
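
For reference, a resulting Xen menuentry for a system with an LVM root looks like the following (the volume names and UUID here are examples; use ''blkid'' to find your own, and for a root on a plain physical partition use {{ic|1=set root=(hd0,X)}} instead):

 menuentry 'Arch Xen 4.2' {
   insmod lvm
   insmod part_gpt
   insmod ext2
   set root='lvm/vg0-arch'
   search --no-floppy --fs-uuid --set=root 346de8aa-6150-4d7b-a8c2-1c43f5929f99
   multiboot /boot/xen.gz placeholder dom0_mem=1024M
   module /boot/vmlinuz-linux placeholder root=/dev/mapper/vg0-arch ro
   module /boot/initramfs-linux.img
 }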
  
===Creation of a network bridge===

Xen requires that network communications between domUs and the dom0 (and beyond) be set up manually. The use of both DHCP and static addressing is possible, and the choice should be determined by the network topology. Complex setups are possible; see the [http://wiki.xen.org/wiki/Xen_Networking Networking] article on the Xen wiki for details, and ''/etc/xen/scripts'' for scripts covering various networking configurations. A basic bridged network, in which a virtual switch is created in dom0 that every domU is attached to, can be set up by modifying the example configuration files provided by [[Netctl]] in ''/etc/netctl/examples''. By default, Xen expects a bridge to exist named xenbr0. To set this up with netctl, do the following:

 # cd /etc/netctl
 # cp examples/bridge xenbridge-dhcp
  
Make the following changes to ''/etc/netctl/xenbridge-dhcp'':

 Description="Xen bridge connection"
 Interface=xenbr0
 Connection=bridge
 BindsToInterface=(eth0) # Use the name of the external interface found with the 'ip link' command
 IP=dhcp

assuming your existing network connection is called eth0.

Start the network bridge with:

 # netctl start xenbridge-dhcp

When the prompt returns, check all is well:

{{hc|# brctl show|
bridge name     bridge id           STP enabled    interfaces
xenbr0          8000.001a9206c0c0   no             eth0
}}

If the bridge is working it can be set to start automatically after rebooting with:

 # netctl enable xenbridge-dhcp
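
If DHCP is not available on the network, the same profile can instead be given a static address; a sketch (the address and gateway are example values to adapt):

 Description="Xen bridge connection"
 Interface=xenbr0
 Connection=bridge
 BindsToInterface=(eth0)
 IP=static
 Address=('192.168.1.3/24')
 Gateway='192.168.1.1'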
  
===Installation of Xen systemd services===

The Xen dom0 requires the xenstored, xenconsoled, and xendomains systemd services (see [[Systemd]] for details). Enable them so they are started at boot:

 # systemctl enable xenstored.service
 # systemctl enable xenconsoled.service
 # systemctl enable xendomains.service

===Confirming successful installation===

Reboot your dom0 host and ensure that the Xen kernel boots correctly and that all settings survive a reboot. A properly set up dom0 should report the following when you run xl list (as root):

{{hc|# xl list|
Name                                        ID   Mem VCPUs     State   Time(s)
Domain-0                                     0   511     2     r-----  41652.9
}}

Of course, the Mem, VCPUs and Time columns will be different depending on machine configuration and uptime. The important thing is that dom0 is listed.

In addition to the required steps above, see [http://wiki.xen.org/wiki/Xen_Best_Practices best practices for running Xen], which includes information on allocating a fixed amount of memory to dom0 and on how to dedicate (pin) a CPU core for dom0 use. It may also be beneficial to create a xenfs filesystem mount point by including the following in ''/etc/fstab'':

 none /proc/xen xenfs defaults 0 0
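
As an additional check, general information about the running hypervisor (version, total and free memory, capabilities) can be queried as root:

 # xl info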
  
==Using Xen==

Xen supports both paravirtualized (PV) and hardware virtualized (HVM) domUs. The following sections describe the steps for creating HVM and PV domUs running Arch Linux. In general, the steps for creating an HVM domU are independent of the domU OS, and HVM domUs support a wide range of operating systems including Microsoft Windows. To use HVM domUs the dom0 hardware must have virtualization support. Paravirtualized domUs do not require virtualization support, but instead require modifications to the guest operating system, making the installation procedure different for each operating system (see the [http://wiki.xen.org/wiki/Category:Guest_Install Guest Install] page of the Xen wiki for links to instructions). Some operating systems (e.g., Microsoft Windows) cannot be installed as a PV domU. In general, HVM domUs often run slower than PV domUs since HVMs run on emulated hardware. While there are some common steps involved in setting up PV and HVM domUs, the processes are substantially different. In both cases, for each domU, a "hard disk" will need to be created and a configuration file needs to be written. Additionally, for installation, each domU will need access to a copy of the installation ISO stored on the dom0 (see the [https://www.archlinux.org/download/ Download Page] to obtain the Arch Linux ISO).
  
===Create a domU "hard disk"===

Xen supports a number of different types of "hard disks" including [[LVM|Logical Volumes]], [[Partitioning|raw partitions]], and image files. To create a [[Wikipedia:Sparse file|sparse file]] called domU.img, which will grow to a maximum of 10GiB, use:

 truncate -s 10G domU.img

If file IO speed is of greater importance than domain portability, using [[LVM|Logical Volumes]] or [[Partitioning|raw partitions]] may be a better choice.
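
For example, an LVM-backed disk of the same size could be created with the following sketch, which assumes an existing volume group named vg0:

 # lvcreate -L 10G -n domU_arch vg0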
  
===Create a domU configuration===

Each domU requires a separate configuration file that is used to create the virtual machine. Full details about the configuration files can be found at the [http://wiki.xensource.com/xenwiki/XenConfigurationFileOptions Xen Wiki] or in the xl.cfg man page. Both HVM and PV domUs share some components of the configuration file. These include:

 <nowiki>
 name = "domU"
 memory = 256
 disk = [ "file:/path/to/ISO,sdb,r", "phy:/path/to/partition,sda1,w" ]
 vif = [ 'mac=00:16:3e:XX:XX:XX,bridge=xenbr0' ]
 </nowiki>

The {{ic|name&#61;}} is the name by which the xl tools manage the domU and needs to be unique across all domUs. The {{ic|disk&#61;}} includes information about both the installation media ({{ic|file:}}) and the partition created for the domU ({{ic|phy:}}). If an image file is being used instead of a physical partition, the {{ic|phy:}} needs to be changed to {{ic|file:}}. The {{ic|vif&#61;}} defines a network controller. The 00:16:3e MAC block is reserved for Xen domains, so the last three octets of the {{ic|mac&#61;}} must be randomly filled in (hex values 0-9 and a-f only).
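
One way to generate a random suffix is a bash one-liner like the following sketch, which uses the shell's RANDOM variable for each of the three trailing octets:

 $ printf '00:16:3e:%02x:%02x:%02x\n' $((RANDOM%256)) $((RANDOM%256)) $((RANDOM%256))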
  
===Managing a domU===

If a domU should be started on boot, create a symlink to the configuration file in ''/etc/xen/auto'' and ensure the xendomains service is set up correctly. Some useful commands for managing domUs are:

 # xl top
 # xl list
 # xl console domUname
 # xl shutdown domUname
 # xl destroy domUname
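
For example, to autostart the PV domU defined later in this article (creating ''/etc/xen/auto'' first if it does not exist):

 # mkdir -p /etc/xen/auto
 # ln -s /etc/xen/archdomu.cfg /etc/xen/auto/archdomu.cfg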
  
==Configuring a hardware virtualized (HVM) Arch domU==

In order to use HVM domUs, install the {{Pkg|mesa-libgl}} and {{Pkg|bluez-libs}} packages.

A minimal configuration file for a HVM Arch domU is:

 <nowiki>
 name = 'HVM_domU'
 builder = 'hvm'
 memory = 256
 vcpus = 2
 disk = [ 'phy:/dev/mapper/vg0-hvm_arch,xvda,w', 'file:/path/to/ISO,hdc:cdrom,r' ]
 vif = [ 'mac=00:16:3e:00:00:00,bridge=xenbr0' ]
 vnc = 1
 vnclisten = '0.0.0.0'
 vncdisplay = 1
 </nowiki>

Since HVM machines do not have a console, they can only be connected to via a [[Vncserver|vncviewer]]. This configuration file allows unauthenticated remote access to the domU vncserver and is not suitable for unsecured networks. The vncserver will be available on port 590X of the dom0, where X is the value of {{ic|vncdisplay}}. The domU can be created with:
  

 # xl create /path/to/config/file

and its status can be checked with:

 # xl list
Once the domU is created, connect to it via the vncserver and install Arch Linux as described in the [[Installation Guide]].
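
With the example configuration above ({{ic|1=vncdisplay = 1}}) the vncserver listens on port 5901 of the dom0, so a connection from another machine would look like the following sketch (the dom0 address is an example):

 $ vncviewer 192.168.1.3:1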
  
==Configuring a paravirtualized (PV) Arch domU==

A minimal configuration file for a PV Arch domU is:

 <nowiki>
 name = "PV_domU"
 kernel = "/mnt/arch/boot/x86_64/vmlinuz"
 ramdisk = "/mnt/arch/boot/x86_64/archiso.img"
 extra = "archisobasedir=arch archisolabel=ARCH_201301"
 memory = 256
 disk = [ "phy:/path/to/partition,sda1,w", "file:/path/to/ISO,sdb,r" ]
 vif = [ 'mac=00:16:3e:XX:XX:XX,bridge=xenbr0' ]
 </nowiki>

This file needs to be tweaked for your specific use. Most importantly, the {{ic|1=archisolabel=ARCH_201301}} line must be edited to use the release year/month of the ISO being used. If you want to install 32-bit Arch, change the kernel and ramdisk paths from /x86_64/ to /i686/.
  
Before creating the domU, the installation ISO must be loop-mounted. To do this, ensure the directory /mnt exists and is empty, then run the following command (being sure to fill in the correct ISO path):
 
 # mount -o loop /path/to/iso /mnt
  
Once the ISO is mounted, the domU can be created with:

 # xl create -c /path/to/config/file

The -c option will enter the domU's console when successfully created, and Arch Linux can then be installed as described in the [[Installation Guide]]. There will be a few deviations, however. The block devices listed in the disk line of the cfg file will show up as {{ic|/dev/xvd*}}; use these devices when partitioning the domU. After installation and before the domU is rebooted, the xen-blkfront, xen-fbfront, xen-netfront and xen-kbdfront modules must be added to [[Mkinitcpio]]; without these modules, the domU will not boot correctly. To add them, edit the MODULES line in {{ic|/etc/mkinitcpio.conf}}:

 MODULES="xen-blkfront xen-fbfront xen-netfront xen-kbdfront"

and rebuild the initramfs with the following command:

 # mkinitcpio -p linux

For booting, it is not necessary to install Grub. Xen has a Python-based grub emulator, so all that is needed to boot is a grub.cfg file (it may be necessary to create the /boot/grub directory):

{{hc|/boot/grub/grub.cfg|<nowiki>menuentry 'Arch GNU/Linux, with Linux core repo kernel' --class arch --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-core repo kernel-true-__UUID__' {
        insmod gzio
        insmod part_msdos
        insmod ext2
        set root='hd0,msdos1'
        if [ x$feature_platform_search_hint = xy ]; then
          search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1  __UUID__
        else
          search --no-floppy --fs-uuid --set=root __UUID__
        fi
        echo    'Loading Linux core repo kernel ...'
        linux   /boot/vmlinuz-linux root=UUID=__UUID__ ro
        echo    'Loading initial ramdisk ...'
        initrd  /boot/initramfs-linux.img
}</nowiki>}}

This file must be edited to match the UUID of the root partition. From within the domU, run the following command:

 # blkid

Replace all instances of __UUID__ with the real UUID of the root partition (the one that mounts as "/").
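
This can be done by hand, or with a substitution along the lines of the following sketch, which assumes the domU root partition is /dev/xvda1 (verify with blkid first):

 # sed -i "s/__UUID__/$(blkid -s UUID -o value /dev/xvda1)/g" /boot/grub/grub.cfg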

Shut down the domU with the poweroff command. The console will be returned to the hypervisor when the domain is fully shut down, and the domain will no longer appear in the xl domains list. Now the ISO file may be unmounted:

 # umount /mnt

The domU cfg file should now be edited. Delete the "kernel = ", "ramdisk = ", and "extra = " lines and replace them with the following line:

 bootloader = "pygrub"

Also remove the ISO disk from the "disk = " line.

The Arch domU is now set up. It may be started with the same line as before:

 # xl create -c /etc/xen/archdomu.cfg
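
After these edits, the cfg file would look roughly like the following sketch, based on the PV example above:

 <nowiki>
 name = "PV_domU"
 bootloader = "pygrub"
 memory = 256
 disk = [ "phy:/path/to/partition,sda1,w" ]
 vif = [ 'mac=00:16:3e:XX:XX:XX,bridge=xenbr0' ]
 </nowiki>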

==Common Errors==

* 'xl list' complains about libxl: either you have not booted into the Xen system, or the xen modules listed in the xencommons script are not installed.
* xl create fails: check that the guest's kernel is located correctly, and check the pv-xxx.cfg file for spelling mistakes (such as using initrd instead of ramdisk).
* An Arch Linux guest hangs with a ctrl-d message: press ctrl-d until you get back to a prompt, then rebuild its initramfs as described above.
* Error message "failed to execute '/usr/lib/udev/socket:/org/xen/xend/udev_event' 'socket:/org/xen/xend/udev_event': No such file or directory": caused by /etc/udev/rules.d/xend.rules; since xend is deprecated and not used, it is safe to remove xend.rules.

==Resources==