This document explains how to set up Xen within Arch Linux.

==What is Xen?==
According to the Xen development team:

:"''The Xen hypervisor, the powerful open source industry standard for virtualization, offers a powerful, efficient, and secure feature set for virtualization of x86, x86_64, IA64, PowerPC, and other CPU architectures. It supports a wide range of guest operating systems including Windows®, Linux®, Solaris®, and various versions of the BSD operating systems.''"
  
 
The Xen hypervisor is a thin layer of software which emulates a computer architecture.  It is started by the boot loader and allows several operating systems to run simultaneously on top of it.  Once the Xen hypervisor is loaded, it starts the "dom0" (for "domain 0"), or privileged domain, which in our case runs a modified Linux kernel (other possible dom0 operating systems are NetBSD and OpenSolaris).  The [http://aur.archlinux.org/packages.php?ID=29023 dom0 kernel in the AUR] is currently based on a recent version of Linux kernel 2.6 and there is [http://aur.archlinux.org/packages.php?ID=38175 a more unstable -dev version] as well; hardware must, of course, be supported by this kernel to run Xen.  Once the dom0 has started, one or more "domU" (unprivileged) domains can be started and controlled from dom0.
  
 
==Setting up Xen==

===Installing the necessary packages===
Before building xen, be sure you have {{Package Official|gcc}}, {{Package Official|make}}, {{Package Official|patch}}, and {{Package Official|python2}} installed.

<pre>pacman -S gcc make patch python2</pre>
  
The new xen package contains Xen 4 and resolves almost all necessary dependencies automatically. However, due to changes in the official Python version of Arch Linux, some old scripts show errors when they are executed. To solve this issue, install python2.5 from the [[AUR]].
  
 
When we are asked to edit the PKGBUILD (preferably with nano), we must not forget to replace this:
<pre>make PYTHON=python2 DESTDIR=$pkgdir  install-xen
make PYTHON=python2 DESTDIR=$pkgdir  install-tools
make PYTHON=python2 DESTDIR=$pkgdir  install-docs</pre>
 
with this:
<pre>make PYTHON=python2.5 DESTDIR=$pkgdir  install-xen
make PYTHON=python2.5 DESTDIR=$pkgdir  install-tools
make PYTHON=python2.5 DESTDIR=$pkgdir  install-docs

sed -i -e "s|#![ ]*/usr/bin/python$|#!/usr/bin/python2.5|" \
-e "s|#![ ]*/usr/bin/env python$|#!/usr/bin/env python2.5|" \
$(find $pkgdir -name '*.py')</pre>
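For reference, a minimal sketch of this step using an AUR helper such as yaourt (also used later in this article); the package names are the ones mentioned above, and yaourt normally offers to edit the PKGBUILD before building, which is where the change above is made:

<pre>yaourt -S python2.5
yaourt -S xen</pre>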
  
 
Xen-tools is a collection of simple Perl scripts which allow you to easily create new guest Xen domains. [http://aur.archlinux.org/packages.php?ID=37421 xen-tools] is also available in the AUR.
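As an illustration only (the options and values here are hypothetical for this setup; check {{Codeline|xen-create-image --help}} for what your xen-tools version actually supports), creating a guest image with xen-tools looks roughly like this:

<pre># create a Debian-based guest image under /srv/xen
xen-create-image --hostname=guest1 --memory=512Mb --size=8Gb \
  --dir=/srv/xen --dist=squeeze</pre>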
  
 
The next step is to build and install the dom0 kernel. To do so, build the [http://aur.archlinux.org/packages.php?ID=29023 kernel26-xen-dom0] package from the AUR.
 
  
 
'''Please note:''' currently kernel26-xen-dom0 is marked as out of date in the AUR and has not yet been updated to 2.6.36. You therefore have to run through the new configuration options of 2.6.36 and accept the defaults (or whatever else you may want) - just press enter each time you are asked.
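If you build the package by hand rather than with an AUR helper, the general pattern is roughly as follows (file names are illustrative; the source tarball comes from the package's AUR page):

<pre># download the source tarball from the kernel26-xen-dom0 AUR page, then:
tar xzf kernel26-xen-dom0.tar.gz
cd kernel26-xen-dom0
makepkg -s                                # answer the 2.6.36 config prompts (enter for defaults)
pacman -U kernel26-xen-dom0-*.pkg.tar.*   # install the built package as root</pre>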
  
The building part is now finished. Now you can configure GRUB and boot into the kernel that has just been built.
  
 
===Configuring GRUB===
GRUB must be configured so that the Xen hypervisor is booted, followed by the dom0 kernel. Add the following entry to {{Filename|/boot/grub/menu.lst}}:

<pre>
title Xen with Arch Linux
root (hd0,X)
kernel /xen.gz dom0_mem=524288
module /vmlinuz26-xen-dom0 root=/dev/sdaY ro console=tty0
module /kernel26-xen-dom0.img
</pre>

where X and Y are the appropriate numbers for your disk configuration, and dom0_mem, console, and vga are optional, customizable parameters. Nice little detail: you can use LVM volumes too, so instead of {{Filename|/dev/sdaY}} you can also fill in {{Filename|/dev/mapper/somelvm}}.
The standard Arch kernel can be used to boot the domUs. In order for this to work, one must add {{Codeline|xen-blkfront}} to the modules array in {{Filename|/etc/mkinitcpio.conf}}:

<pre>
MODULES="... xen-blkfront ..."
</pre>
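After editing the array, regenerate the initramfs for the kernel that will boot the domUs (the preset name below assumes the stock kernel26 package; adjust it to the kernel you actually use):

<pre>
mkinitcpio -p kernel26
</pre>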
  
 
The next step is to reboot into the Xen kernel.
  
 
Next, start xend:

<pre>
# /etc/rc.d/xend start
</pre>

Allocating a fixed amount of memory is recommended when using Xen. Also, if you are running IO-intensive guests it might be a good idea to dedicate (pin) a CPU core for dom0 use only. Please see the external [http://wiki.xensource.com/xenwiki/XenCommonProblems XenCommonProblems] wiki page section "Can I dedicate a cpu core (or cores) only for dom0?" for more information.
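As a sketch of the pinning suggestion (the options below are standard Xen hypervisor boot parameters; see the linked page for details), extend the hypervisor line of the GRUB entry shown above:

<pre>
kernel /xen.gz dom0_mem=524288 dom0_max_vcpus=1 dom0_vcpus_pin
</pre>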
  
 
===Configuring GRUB2===
  
 
This works just like with GRUB, but here you need to use the command 'multiboot' instead of 'kernel'. So the entry becomes:
<pre>
# (2) Arch Linux(XEN)
menuentry "Arch Linux(XEN)" {
    set root=(hd0,X)
    multiboot /boot/xen.gz dom0_mem=2048M
    module /boot/vmlinuz26-xen-dom0 root=/dev/sdaY ro
    module /boot/kernel26-xen-dom0.gz
}
</pre>
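If you let GRUB2 generate {{Filename|/boot/grub/grub.cfg}} instead of editing it directly, one common approach is to place the entry above in {{Filename|/etc/grub.d/40_custom}} and then regenerate the configuration (paths assume the GRUB2 defaults):

<pre>
# grub-mkconfig -o /boot/grub/grub.cfg
</pre>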
 
If you had success booting into the dom0 kernel, we can continue.
  
===Add domU instances===

The basic idea behind adding a domU is as follows: we must get the domU kernels, allocate space for the virtual hard disk, create a configuration file for the domU, and finally start the domU with xm.

<pre>
$ mkfs.ext4 /dev/sdb1    ## format partition
$ mkdir /tmp/install
$ mount /dev/sdb1 /tmp/install
$ mkdir -p /tmp/install/{dev,proc,sys} /tmp/install/var/lib/pacman /tmp/install/var/cache/pacman/pkg
$ mount -o bind /dev /tmp/install/dev
$ mount -t proc none /tmp/install/proc
$ mount -o bind /sys /tmp/install/sys
$ pacman -Sy -r /tmp/install --cachedir /tmp/install/var/cache/pacman/pkg -b /tmp/install/var/lib/pacman base
$ cp -r /etc/pacman* /tmp/install/etc
$ chroot /tmp/install /bin/bash
$ vi /etc/resolv.conf
$ vi /etc/fstab
    /dev/xvda               /           ext4    defaults                0       1

$ vi /etc/inittab
    c1:2345:respawn:/sbin/agetty -8 38400 hvc0 linux
    #c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux
    #c2:2345:respawn:/sbin/agetty -8 38400 tty2 linux
    #c3:2345:respawn:/sbin/agetty -8 38400 tty3 linux
    #c4:2345:respawn:/sbin/agetty -8 38400 tty4 linux
    #c5:2345:respawn:/sbin/agetty -8 38400 tty5 linux
    #c6:2345:respawn:/sbin/agetty -8 38400 tty6 linux

$ exit  ## exit chroot
$ umount /tmp/install/dev
$ umount /tmp/install/proc
$ umount /tmp/install/sys
$ umount /tmp/install
</pre>

If not starting from a fresh install and one wants to rsync from an existing system:

<pre>
$ mkfs.ext4 /dev/sdb1    ## format lv partition
$ mkdir /tmp/install
$ mount /dev/sdb1 /tmp/install
$ mkdir /tmp/install/{proc,sys}
$ chmod 555 /tmp/install/proc
$ rsync -avzH --delete --exclude=proc/ --exclude=sys/ old_ooga:/ /tmp/install/
</pre>

Then create the domU configuration file and start the new domain:

<pre>
$ vi /etc/xen/dom01     ## create config file
    #  -*- mode: python; -*-
    kernel = "/boot/vmlinuz26"
    ramdisk = "/boot/kernel26.img"
    memory = 1024
    name = "dom01"
    vif = [ 'mac=00:16:3e:00:01:01' ]
    disk = [ 'phy:/dev/sdb1,xvda,w' ]
    dhcp="dhcp"
    hostname = "ooga"
    root = "/dev/xvda ro"

$ xm create -c dom01
</pre>
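As with dom0, the domU disk can also be an LVM volume instead of a raw partition. A minimal sketch, assuming a volume group named vg0 (the names and size are illustrative); the {{Codeline|disk}} line of the config file then points at the new volume:

<pre>
lvcreate -L 10G -n dom01-disk vg0
# in /etc/xen/dom01:
#   disk = [ 'phy:/dev/vg0/dom01-disk,xvda,w' ]
</pre>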
  
 
===Hardware Virtualization===
 
 
If we want to get hardware virtualization in our domUs, the host system hardware must include either Intel-VT or AMD-V virtualization support. In order to verify this, run the following commands on the host system:
  
 
For Intel CPUs:

<pre>grep vmx /proc/cpuinfo</pre>
  
 
For AMD CPUs:

<pre>grep svm /proc/cpuinfo</pre>
  
 
If neither of the above commands produces output, then it is likely that these features are unavailable and your hardware is unable to run Xen HVM guests. It is also possible that the host CPU supports one of these features but that the functionality is disabled by default in the system BIOS. To verify this, access the host system's BIOS configuration menu during the boot process and look for an option related to virtualization support. If such an option exists and is disabled, enable it, boot the system, and repeat the above commands.
  
 
==Arch as Xen guest (PV mode)==
 
 
To get paravirtualization you need to install:

* [http://aur.archlinux.org/packages.php?ID=16087 kernel26-xen]
Change the mode to PV with the following commands (on dom0):

   xe vm-param-set uuid=<vm uuid> HVM-boot-policy=""
   xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub
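The {{Codeline|<vm uuid>}} placeholder above can be looked up with the same xe CLI, for example:

   xe vm-list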
  
Edit {{Filename|/boot/grub/menu.lst}} and add kernel26-xen:

   # (1) Arch Linux (domU)
   title  Arch Linux (domU)
   root   (hd0,0)
   kernel /boot/vmlinuz26-xen root=/dev/xvda1 ro console=hvc0
   initrd /boot/kernel26-xen.img
  
Add the following Xen modules to your initcpio by appending them to {{Codeline|MODULES}} in {{Filename|/etc/mkinitcpio.conf}}: {{Codeline|"xen-blkfront xen-fbfront xenfs xen-netfront xen-kbdfront"}}, and rebuild your initramfs:
 
   mkinitcpio -p kernel26-xen
  
Uncomment the following line in {{Filename|/etc/inittab}} to enable console login:
 
   h0:2345:respawn:/sbin/agetty -8 38400 hvc0 linux
  
 
=== xe-guest-utilities ===
To use xe-guest-utilities, add a xenfs mount point to {{Filename|/etc/fstab}}:
 
   xenfs                  /proc/xen    xenfs    defaults            0      0
and add {{Codeline|xe-linux-distribution}} to the {{Codeline|DAEMONS}} array in {{Filename|/etc/rc.conf}}.
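A minimal sketch of that change (the other daemons shown are only placeholders for whatever your array already contains):

   DAEMONS=(syslog-ng network crond xe-linux-distribution)

To activate everything without rebooting, mount the new fstab entry and start the daemon (assuming the package installs an rc script of the same name):

   mount /proc/xen
   /etc/rc.d/xe-linux-distribution start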
  
 
===Notes===
* pygrub does not show a boot menu, and some versions of pygrub do not support lzma-compressed stock kernels. See https://bbs.archlinux.org/viewtopic.php?id=118525
* The i686 stock kernel does not support Xen due to highmem, but the x86_64 one does. See https://bugs.archlinux.org/task/24207?project=1. You can still use the i686/x86_64 kernel26-xen mentioned above; that kernel is compressed with gzip, not lzma or xz.
* To avoid hwclock error messages, set {{Codeline|1=HARDWARECLOCK="xen"}} in {{Filename|/etc/rc.conf}} (actually you can use any value here except {{Codeline|"UTC"}} and {{Codeline|"localtime"}}).
* If you want to return to hardware VM, set {{Codeline|1=HVM-boot-policy="BIOS order"}}.
* If you get a kernel panic when booting Xen and it suggests 'use {{Codeline|1=apic="debug"}} and send an error report', try setting {{Codeline|noapic}} on the kernel line in {{Filename|menu.lst}}, as shown below.
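For example, the hypervisor line of the GRUB entry from earlier would become (with the same dom0_mem value used above):

<pre>
kernel /xen.gz dom0_mem=524288 noapic
</pre>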
  
 
==Xen Management Tools==
 
The "Virtual Machine Manager" application is a desktop user interface for managing virtual machines. It presents a summary view of running domains, their live performance & resource utilization statistics. The detailed view graphs performance & utilization over time. Wizards enable the creation of new domains, and configuration & adjustment of a domain's resource allocation & virtual hardware. An embedded VNC client viewer presents a full graphical console to the guest domain.
 
The "Virtual Machine Manager" application is a desktop user interface for managing virtual machines. It presents a summary view of running domains, their live performance & resource utilization statistics. The detailed view graphs performance & utilization over time. Wizards enable the creation of new domains, and configuration & adjustment of a domain's resource allocation & virtual hardware. An embedded VNC client viewer presents a full graphical console to the guest domain.
<pre>yaourt -S virt-manager-light</pre>
+
yaourt -S virt-manager-light
  
 
==Useful Packages==
As there are quite a few Xen-related packages available in the AUR and it can be hard to figure out what is needed, here is a small collection of the most interesting ones (last updated: 23.5.2010):

* Open source multiplatform clone of the XenCenter frontend: openxencenter
* Open source multiplatform clone of the XenCenter frontend (svn version): openxencenter-svn
* Xen Cloud Platform frontend: xvp
  
 
==Resources==
 
* [http://www.xen.org/ Xen's homepage]
* [http://wiki.xensource.com/xenwiki/ The Xen Wiki]
* [http://code.google.com/p/gentoo-xen-kernel/ Xen kernel patches]
* [http://www.virtuatopia.com/index.php/Virtualizing_Windows_Server_2008_with_Xen Virtuatopia guide to get Windows Server 2008 working with Xen]
