ArchWiki - Systemd - revision of 2013-10-13T12:08:02Z by Friesoft: Corrected graphical target and display-manager.service
<hr />
<div>{{Lowercase title}}<br />
[[Category:Daemons and system services]]<br />
[[Category:Boot process]]<br />
[[ar:Systemd]]<br />
[[es:Systemd]]<br />
[[fr:Systemd]]<br />
[[it:Systemd]]<br />
[[ja:Systemd]]<br />
[[pt:Systemd]]<br />
[[ru:Systemd]]<br />
[[zh-CN:Systemd]]<br />
[[zh-TW:Systemd]]<br />
{{Article summary start}}<br />
{{Article summary text|Covers how to install and configure systemd.}}<br />
{{Article summary heading|Related}}<br />
{{Article summary wiki|systemd/User}}<br />
{{Article summary wiki|systemd/Services}}<br />
{{Article summary wiki|systemd/cron functionality}}<br />
{{Article summary wiki|systemd FAQ}}<br />
{{Article summary wiki|init Rosetta}}<br />
{{Article summary wiki|Daemons List}}<br />
{{Article summary wiki|udev}}<br />
{{Article summary wiki|Improve Boot Performance}}<br />
{{Article summary end}}<br />
From the [http://freedesktop.org/wiki/Software/systemd project web page]:<br />
<br />
:''systemd'' is a system and service manager for Linux, compatible with SysV and LSB init scripts. ''systemd'' provides aggressive parallelization capabilities, uses socket and [[D-Bus]] activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux [[cgroups|control groups]], supports snapshotting and restoring of the system state, maintains mount and automount points and implements an elaborate transactional dependency-based service control logic.<br />
<br />
{{Note|1=For a detailed explanation as to why Arch has moved to ''systemd'', see [https://bbs.archlinux.org/viewtopic.php?pid=1149530#p1149530 this forum post].}}<br />
<br />
== Migration from SysVinit/initscripts ==<br />
<br />
{{Note|<br />
* {{Pkg|systemd}} and {{Pkg|systemd-sysvcompat}} are both installed by default on installation media newer than [https://www.archlinux.org/news/systemd-is-now-the-default-on-new-installations/ 2012-10-13]. This section is aimed at Arch Linux installations that still rely on ''sysvinit'' and ''initscripts''.<br />
* If you are running Arch Linux inside a VPS, please see [[Virtual Private Server#Moving your VPS from initscripts to systemd]].<br />
}}<br />
<br />
=== Considerations before switching ===<br />
<br />
* Do [http://freedesktop.org/wiki/Software/systemd/ some reading] about ''systemd''.<br />
* Note the fact that systemd has a ''journal'' system that replaces ''syslog'', although the two can co-exist. See [[#Journal]].<br />
* While ''systemd'' can replace some of the functionality of ''cron'', ''acpid'', or ''xinetd'', there is no need to switch away from using the traditional daemons unless you want to.<br />
* Interactive ''initscripts'' do not work with ''systemd''. In particular, ''netcfg-menu'' cannot be used at system start-up ({{Bug|31377}}).<br />
<br />
=== Installation procedure ===<br />
<br />
# [[pacman|Install]] {{Pkg|systemd}} from the [[official repositories]].<br />
# Append the following to your [[kernel parameters]]: {{ic|1=init=/usr/lib/systemd/systemd}}.<br />
# Once completed, you may enable any desired services with {{ic|systemctl enable ''service_name''}} (this roughly equates to the entries of your old {{ic|DAEMONS}} array; the new names can be found in [[Daemons List]]).<br />
# Reboot your system and verify that ''systemd'' is currently active by issuing the following command: {{ic|cat /proc/1/comm}}. This should return the string {{ic|systemd}}.<br />
# Make sure your hostname is set correctly under ''systemd'', either with {{ic|hostnamectl set-hostname ''myhostname''}} or by editing {{ic|/etc/hostname}}.<br />
# Proceed to remove ''initscripts'' and ''sysvinit'' from your system and install {{Pkg|systemd-sysvcompat}}.<br />
# Optionally, remove the {{ic|1=init=/usr/lib/systemd/systemd}} parameter. It is no longer needed since {{Pkg|systemd-sysvcompat}} provides a symlink to ''systemd'''s init where ''sysvinit'' used to be.<br />
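<br />
For orientation, the steps above condense to roughly the following command sequence (a sketch only; ''service_name'' and ''myhostname'' are placeholders, and the exact ''pacman'' invocations may need adjusting, e.g. to resolve package conflicts):<br />
<br />
# pacman -S systemd<br />
<br />
''(append {{ic|1=init=/usr/lib/systemd/systemd}} to the kernel parameters, then reboot and verify with {{ic|cat /proc/1/comm}})''<br />
<br />
# systemctl enable ''service_name''<br />
# hostnamectl set-hostname ''myhostname''<br />
# pacman -R initscripts sysvinit<br />
# pacman -S systemd-sysvcompat<br />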
<br />
=== Supplementary information ===<br />
<br />
* If you have {{ic|quiet}} in your kernel parameters, you might want to remove it for your first couple of systemd boots, to assist with identifying any issues during boot.<br />
<br />
* It is not necessary to add your user to [[Users and Groups|groups]] ({{ic|sys}}, {{ic|disk}}, {{ic|lp}}, {{ic|network}}, {{ic|video}}, {{ic|audio}}, {{ic|optical}}, {{ic|storage}}, {{ic|scanner}}, {{ic|power}}, etc.) for most use cases with systemd. The groups can even cause some functionality to break. For example, the {{ic|audio}} group will break fast user switching and allows applications to block software mixing. Every PAM login provides a logind session, which for a local session will give you permissions via [[Wikipedia:Access control list|POSIX ACLs]] on audio/video devices, and allow certain operations like mounting removable storage via [[udisks]].<br />
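<br />
To verify that ''logind'' has granted your session access, you can inspect the POSIX ACLs on a device node; for example, for a sound device (the exact node varies per system):<br />
<br />
$ getfacl /dev/snd/timer<br />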
<br />
* See the [[Network Configuration]] article for how to set up networking targets.<br />
<br />
== Basic systemctl usage ==<br />
<br />
The main command used to introspect and control ''systemd'' is '''systemctl'''. Some of its uses are examining the system state and managing the system and services. See {{ic|man 1 systemctl}} for more details.<br />
<br />
{{Tip|You can use all of the following ''systemctl'' commands with the {{ic|-H ''user''@''host''}} switch to control a ''systemd'' instance on a remote machine. This will use [[SSH]] to connect to the remote ''systemd'' instance.}}<br />
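<br />
For example, to query a unit on a remote machine (hostname and unit name here are placeholders):<br />
<br />
$ systemctl -H admin@server.example.com status sshd<br />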
<br />
{{Note|''systemadm'' is the official graphical frontend for ''systemctl''. It is provided by the {{AUR|systemd-ui-git}} package from the [[AUR]].}}<br />
<br />
=== Analyzing the system state ===<br />
<br />
List running units:<br />
<br />
$ systemctl<br />
<br />
or:<br />
<br />
$ systemctl list-units<br />
<br />
List failed units:<br />
<br />
$ systemctl --failed<br />
<br />
The available unit files can be seen in {{ic|/usr/lib/systemd/system/}} and {{ic|/etc/systemd/system/}} (the latter takes precedence). You can see a list of the installed unit files with:<br />
<br />
$ systemctl list-unit-files<br />
<br />
=== Using units ===<br />
<br />
Units can be, for example, services (''.service''), mount points (''.mount''), devices (''.device'') or sockets (''.socket'').<br />
<br />
When using ''systemctl'', you generally have to specify the complete name of the unit file, including its suffix, for example ''sshd.socket''. There are however a few short forms when specifying the unit in the following ''systemctl'' commands:<br />
<br />
* If you do not specify the suffix, systemctl will assume ''.service''. For example, {{ic|netcfg}} and {{ic|netcfg.service}} are equivalent.<br />
* Mount points will automatically be translated into the appropriate ''.mount'' unit. For example, specifying {{ic|/home}} is equivalent to {{ic|home.mount}}.<br />
* Similar to mount points, devices are automatically translated into the appropriate ''.device'' unit, therefore specifying {{ic|/dev/sda2}} is equivalent to {{ic|dev-sda2.device}}.<br />
<br />
See {{ic|man systemd.unit}} for details.<br />
<br />
Activate a unit immediately:<br />
<br />
# systemctl start ''unit''<br />
<br />
Deactivate a unit immediately:<br />
<br />
# systemctl stop ''unit''<br />
<br />
Restart a unit:<br />
<br />
# systemctl restart ''unit''<br />
<br />
Ask a unit to reload its configuration:<br />
<br />
# systemctl reload ''unit''<br />
<br />
Show the status of a unit, including whether it is running or not:<br />
<br />
$ systemctl status ''unit''<br />
<br />
Check whether a unit is already enabled or not:<br />
<br />
$ systemctl is-enabled ''unit''<br />
<br />
Enable a unit to be started on bootup:<br />
<br />
# systemctl enable ''unit''<br />
<br />
{{Note|Services without an {{ic|[Install]}} section are usually called automatically by other services. If you need to install them manually, use the following command, replacing ''foo'' with the name of the service.<br />
# ln -s /usr/lib/systemd/system/''foo''.service /etc/systemd/system/graphical.target.wants/<br />
}}<br />
<br />
Disable a unit to not start during bootup:<br />
<br />
# systemctl disable ''unit''<br />
<br />
Show the manual page associated with a unit (this has to be supported by the unit file):<br />
<br />
$ systemctl help ''unit''<br />
<br />
Reload ''systemd'', scanning for new or changed units:<br />
<br />
# systemctl daemon-reload<br />
<br />
=== Power management ===<br />
<br />
[[polkit]] is necessary for power management. If you are in a local ''systemd-logind'' user session and no other session is active, the following commands will work without root privileges. If not (for example, because another user is logged into a tty), ''systemd'' will automatically ask you for the root password.<br />
<br />
Shut down and reboot the system:<br />
<br />
$ systemctl reboot<br />
<br />
Shut down and power-off the system:<br />
<br />
$ systemctl poweroff<br />
<br />
Suspend the system:<br />
<br />
$ systemctl suspend<br />
<br />
Put the system into hibernation:<br />
<br />
$ systemctl hibernate<br />
<br />
Put the system into hybrid-sleep state (or suspend-to-both):<br />
<br />
$ systemctl hybrid-sleep<br />
<br />
== Running DMs under systemd ==<br />
<br />
{{Merge|Display Manager|We have separate article, this section should be moved there to keep things in one place.}}<br />
<br />
To enable graphical login, enable the service for your preferred [[Display Manager]] (e.g. [[KDM]]). At the moment, service files exist for [[GDM]], [[KDM]], [[SLiM]], [[XDM]], [[LXDM]], [[LightDM]], and {{AUR|SDDM}}.<br />
<br />
# systemctl enable kdm<br />
<br />
This should work out of the box. If not, you might have a ''default.target'' set manually or from an older install:<br />
<br />
{{hc|# ls -l /etc/systemd/system/default.target|<br />
/etc/systemd/system/default.target -> /usr/lib/systemd/system/graphical.target}}<br />
<br />
Simply delete the symlink and ''systemd'' will use its stock ''default.target'' (i.e. ''graphical.target'').<br />
<br />
# rm /etc/systemd/system/default.target<br />
<br />
After enabling ''kdm'', a symlink named ''display-manager.service'' should have been created in {{ic|/etc/systemd/system/}}:<br />
<br />
{{hc|# ls -l /etc/systemd/system/display-manager.service|<br />
/etc/systemd/system/display-manager.service -> /usr/lib/systemd/system/kdm.service}}<br />
<br />
=== Using systemd-logind ===<br />
<br />
In order to check the status of your user session, you can use {{ic|loginctl}}. All [[PolicyKit]] actions like suspending the system or mounting external drives will work out of the box.<br />
<br />
$ loginctl show-session $XDG_SESSION_ID<br />
<br />
== Native configuration ==<br />
<br />
{{Note|You may need to create these files. All files should have {{ic|644}} permissions and {{ic|root:root}} ownership.}}<br />
<br />
=== Virtual console ===<br />
{{Deletion|Not strictly related, trying to remove the [[#Native configuration]] section entirely.|section=Duplication of content in Native configuration section}}<br />
<br />
The virtual console (keyboard mapping, console font and console map) is configured in {{ic|/etc/vconsole.conf}} or by using the ''localectl'' tool.<br />
<br />
For more information, see [[Fonts#Console fonts|console fonts]] and [[KEYMAP|keymaps]].<br />
<br />
=== Kernel modules ===<br />
{{Deletion|Not strictly related, trying to remove the [[#Native configuration]] section entirely.|section=Duplication of content in Native configuration section}}<br />
<br />
See [[Kernel modules#Configuration]].<br />
<br />
=== Filesystem mounts ===<br />
{{Merge|File Systems|This section was added here before systemd became the default init system. Delete it after merging.|section=Duplication of content in Native configuration section}}<br />
<br />
The default setup will automatically fsck and mount filesystems before starting services that need them to be mounted. For example, systemd automatically makes sure that remote filesystem mounts like [[NFS]] or [[Samba]] are only started after the network has been set up. Therefore, local and remote filesystem mounts specified in {{ic|/etc/fstab}} should work out of the box.<br />
<br />
See {{ic|man 5 systemd.mount}} for details.<br />
<br />
==== Automount ====<br />
{{Merge|fstab|This section was added here before systemd became the default init system. Delete it after merging.|section=Duplication of content in Native configuration section}}<br />
<br />
If you have a large {{ic|/home}} partition, it might be better to allow services that do not depend on {{ic|/home}} to start while {{ic|/home}} is checked by ''fsck''. This can be achieved by adding the following options to the {{ic|/etc/fstab}} entry of your {{ic|/home}} partition:<br />
<br />
noauto,x-systemd.automount<br />
<br />
This will fsck and mount {{ic|/home}} when it is first accessed, and the kernel will buffer all file access to {{ic|/home}} until it is ready.<br />
<br />
{{Note|This will make your {{ic|/home}} filesystem type {{ic|autofs}}, which is ignored by [[mlocate]] by default. The speedup of automounting {{ic|/home}} may not be more than a second or two, depending on your system, so this trick may not be worth it.}}<br />
<br />
The same applies to remote filesystem mounts. If you want them to be mounted only upon access, you will need to use the {{ic|noauto,x-systemd.automount}} parameters. In addition, you can use the {{ic|1=x-systemd.device-timeout=#}} option to specify a timeout in case the network resource is not available.<br />
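<br />
As an illustration (the server, export and mount point are placeholders), an [[NFS]] share mounted only on first access, with a 10-second device timeout, could be specified as:<br />
<br />
{{hc|/etc/fstab|2=<br />
server:/export /mnt/data nfs noauto,x-systemd.automount,x-systemd.device-timeout=10 0 0}}<br />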
<br />
If you have encrypted filesystems with keyfiles, you can also add the {{ic|noauto}} parameter to the corresponding entries in {{ic|/etc/crypttab}}. ''systemd'' will then not open the encrypted device on boot, but instead wait until it is actually accessed and then automatically open it with the specified keyfile before mounting it. This might save a few seconds on boot if you are using an encrypted RAID device for example, because systemd does not have to wait for the device to become available. For example:<br />
<br />
{{hc|/etc/crypttab|<br />
data /dev/md0 /root/key noauto}}<br />
<br />
==== LVM ====<br />
{{Merge|LVM|This section was added here before systemd became the default init system. Delete it after merging.|section=Duplication of content in Native configuration section}}<br />
<br />
If you have [[LVM]] volumes not activated via the [[Mkinitcpio|initramfs]], [[#Using units|enable]] the '''lvm-monitoring''' service, which is provided by the {{pkg|lvm2}} package.<br />
<br />
=== ACPI power management ===<br />
{{Deletion|Not strictly related, trying to remove the [[#Native configuration]] section entirely.|section=Duplication of content in Native configuration section}}<br />
<br />
See [[Power Management]].<br />
<br />
=== Temporary files ===<br />
{{Moveto|Systemd#|Trying to remove the [[#Native configuration]] section entirely, this section can easily become a top-level one like [[#Journal]].|section=Duplication of content in Native configuration section}}<br />
<br />
"'''systemd-tmpfiles''' creates, deletes and cleans up volatile and temporary files and directories." It reads configuration files in {{ic|/etc/tmpfiles.d/}} and {{ic|/usr/lib/tmpfiles.d/}} to discover which actions to perform. Configuration files in the former directory take precedence over those in the latter directory.<br />
<br />
Configuration files are usually provided together with service files, and they are named in the style of {{ic|/usr/lib/tmpfiles.d/''program''.conf}}. For example, the [[Samba]] daemon expects the directory {{ic|/run/samba}} to exist and to have the correct permissions. Therefore, the {{Pkg|samba}} package ships with this configuration:<br />
<br />
{{hc|/usr/lib/tmpfiles.d/samba.conf|<br />
D /run/samba 0755 root root}}<br />
<br />
Configuration files may also be used to write values into certain files on boot. For example, if you used {{ic|/etc/rc.local}} to disable wakeup from USB devices with {{ic|echo USBE > /proc/acpi/wakeup}}, you may use the following tmpfile instead:<br />
<br />
{{hc|/etc/tmpfiles.d/disable-usb-wake.conf|<br />
w /proc/acpi/wakeup - - - - USBE}}<br />
<br />
See the {{ic|systemd-tmpfiles}} and {{ic|tmpfiles.d(5)}} man pages for details.<br />
<br />
{{Note|This method may not work to set options in {{ic|/sys}}, since the ''systemd-tmpfiles-setup'' service may run before the appropriate device module is loaded. In this case, check whether the module has a parameter for the option you want to set with {{ic|modinfo ''module''}} and set this option with a [[Modprobe.d#Configuration|config file in /etc/modprobe.d]]. Otherwise, you will have to write a [[Udev#About_udev_rules|udev rule]] to set the appropriate attribute as soon as the device appears.}}<br />
<br />
== Writing custom .service files ==<br />
<br />
The syntax of systemd's [[#Using units|unit files]] is inspired by XDG Desktop Entry Specification .desktop files, which are in turn inspired by Microsoft Windows .ini files.<br />
<br />
See [[systemd/Services]] for more examples.<br />
<br />
=== Handling dependencies ===<br />
<br />
With ''systemd'', dependencies can be resolved by designing the unit files correctly. The most typical case is that the unit ''A'' requires the unit ''B'' to be running before ''A'' is started. In that case add {{ic|1=Requires=''B''}} and {{ic|1=After=''B''}} to the {{ic|[Unit]}} section of ''A''. If the dependency is optional, add {{ic|1=Wants=''B''}} and {{ic|1=After=''B''}} instead. Note that {{ic|1=Wants=}} and {{ic|1=Requires=}} do not imply {{ic|1=After=}}, meaning that if {{ic|1=After=}} is not specified, the two units will be started in parallel.<br />
<br />
Dependencies are typically placed on services and not on targets. For example, ''network.target'' is pulled in by whatever service configures your network interfaces, therefore ordering your custom unit after it is sufficient since ''network.target'' is started anyway.<br />
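<br />
As a sketch (the unit and binary names here are hypothetical), a service that must only start once ''B.service'' is up would contain:<br />
<br />
{{hc|/etc/systemd/system/A.service|2=<br />
[Unit]<br />
Description=Service A (requires B)<br />
Requires=B.service<br />
After=B.service<br />
<br />
[Service]<br />
ExecStart=/usr/bin/a-daemon}}<br />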
<br />
=== Type ===<br />
<br />
There are several different start-up types to consider when writing a custom service file. This is set with the {{ic|1=Type=}} parameter in the {{ic|[Service]}} section. See {{ic|man systemd.service}} for a more detailed explanation.<br />
<br />
* {{ic|1=Type=simple}} (default): ''systemd'' considers the service to be started up immediately. The process must not fork. Do not use this type if other services need to be ordered on this service, unless it is socket activated.<br />
* {{ic|1=Type=forking}}: ''systemd'' considers the service started up once the process forks and the parent has exited. For classic daemons use this type unless you know that it is not necessary. You should specify {{ic|1=PIDFile=}} as well so ''systemd'' can keep track of the main process.<br />
* {{ic|1=Type=oneshot}}: this is useful for scripts that do a single job and then exit. You may want to set {{ic|1=RemainAfterExit=yes}} as well so that ''systemd'' still considers the service as active after the process has exited.<br />
* {{ic|1=Type=notify}}: identical to {{ic|1=Type=simple}}, but with the stipulation that the daemon will send a signal to ''systemd'' when it is ready. The reference implementation for this notification is provided by ''libsystemd-daemon.so''.<br />
* {{ic|1=Type=dbus}}: the service is considered ready when the specified {{ic|BusName}} appears on DBus's system bus.<br />
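<br />
As an illustration of {{ic|1=Type=oneshot}} combined with {{ic|1=RemainAfterExit=yes}} (the unit name and script path are hypothetical):<br />
<br />
{{hc|/etc/systemd/system/example-setup.service|2=<br />
[Unit]<br />
Description=Run a one-time setup script<br />
<br />
[Service]<br />
Type=oneshot<br />
RemainAfterExit=yes<br />
ExecStart=/usr/local/bin/example-setup.sh<br />
<br />
[Install]<br />
WantedBy=multi-user.target}}<br />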
<br />
=== Editing provided unit files ===<br />
<br />
To edit a unit file provided by a package, you can create a directory called {{ic|/etc/systemd/system/''unit''.d/}}, for example {{ic|/etc/systemd/system/httpd.service.d/}}, and place ''*.conf'' files in there to override or add new options. ''systemd'' will parse these ''*.conf'' files and apply them on top of the original unit. For example, if you simply want to add an additional dependency to a unit, you may create the following file:<br />
<br />
{{hc|/etc/systemd/system/''unit''.d/customdependency.conf|2=<br />
[Unit]<br />
Requires=''new dependency''<br />
After=''new dependency''}}<br />
<br />
Then run the following for your changes to take effect:<br />
<br />
# systemctl daemon-reload<br />
# systemctl restart ''unit''<br />
<br />
Alternatively you can copy the old unit file from {{ic|/usr/lib/systemd/system/}} to {{ic|/etc/systemd/system/}} and make your changes there. A unit file in {{ic|/etc/systemd/system/}} always overrides the same unit in {{ic|/usr/lib/systemd/system/}}. Note that when the original unit in {{ic|/usr/lib/}} is changed due to a package upgrade, these changes will not automatically apply to your custom unit file in {{ic|/etc/}}. Additionally you will have to manually reenable the unit with {{ic|systemctl reenable ''unit''}}. It is therefore recommended to use the ''*.conf'' method described before instead.<br />
<br />
{{Tip|You can use '''systemd-delta''' to see which unit files have been overridden and what exactly has been changed.}}<br />
<br />
As the provided unit files are updated from time to time by package upgrades, run ''systemd-delta'' regularly as part of system maintenance.<br />
<br />
=== Syntax highlighting for units within Vim ===<br />
<br />
Syntax highlighting for ''systemd'' unit files within [[Vim]] can be enabled by installing {{Pkg|vim-systemd}} from the [[Official Repositories|official repositories]].<br />
<br />
== Targets ==<br />
<br />
''systemd'' uses ''targets'' which serve a similar purpose as runlevels but behave a little differently. Each ''target'' is named instead of numbered and is intended to serve a specific purpose, with the possibility of having multiple ones active at the same time. Some ''targets'' are implemented by inheriting all of the services of another ''target'' and adding additional services to it. There are ''systemd'' ''targets'' that mimic the common SysVinit runlevels, so you can still switch ''targets'' using the familiar {{ic|telinit RUNLEVEL}} command.<br />
<br />
=== Get current targets ===<br />
<br />
The following should be used under ''systemd'' instead of running {{ic|runlevel}}:<br />
<br />
$ systemctl list-units --type=target<br />
<br />
=== Create custom target ===<br />
<br />
The runlevels that are assigned a specific purpose on vanilla Fedora installs (0, 1, 3, 5, and 6) have a 1:1 mapping with a specific ''systemd'' ''target''. Unfortunately, there is no good way to do the same for the user-defined runlevels like 2 and 4. If you make use of those, it is suggested that you make a new named ''systemd'' ''target'' as {{ic|/etc/systemd/system/''your target''}} that takes one of the existing runlevels as a base (you can look at {{ic|/usr/lib/systemd/system/graphical.target}} as an example), make a directory {{ic|/etc/systemd/system/''your target''.wants}}, and then symlink the additional services from {{ic|/usr/lib/systemd/system/}} that you wish to enable.<br />
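<br />
Sketched as commands (''my-runlevel2'' and ''foo'' are placeholder names; ''multi-user.target'' serves as the base here):<br />
<br />
# cp /usr/lib/systemd/system/multi-user.target /etc/systemd/system/my-runlevel2.target<br />
# mkdir /etc/systemd/system/my-runlevel2.target.wants<br />
# ln -s /usr/lib/systemd/system/''foo''.service /etc/systemd/system/my-runlevel2.target.wants/<br />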
<br />
=== Targets table ===<br />
<br />
{| border="1"<br />
! SysV Runlevel !! systemd Target !! Notes<br />
|-<br />
| 0 || runlevel0.target, poweroff.target || Halt the system.<br />
|-<br />
| 1, s, single || runlevel1.target, rescue.target || Single user mode.<br />
|-<br />
| 2, 4 || runlevel2.target, runlevel4.target, multi-user.target || User-defined/Site-specific runlevels. By default, identical to 3.<br />
|-<br />
| 3 || runlevel3.target, multi-user.target || Multi-user, non-graphical. Users can usually login via multiple consoles or via the network.<br />
|-<br />
| 5 || runlevel5.target, graphical.target || Multi-user, graphical. Usually has all the services of runlevel 3 plus a graphical login.<br />
|-<br />
| 6 || runlevel6.target, reboot.target || Reboot<br />
|-<br />
| emergency || emergency.target || Emergency shell<br />
|-<br />
|}<br />
<br />
=== Change current target ===<br />
<br />
In ''systemd'' targets are exposed via ''target units''. You can change them like this:<br />
<br />
# systemctl isolate graphical.target<br />
<br />
This will only change the current target, and has no effect on the next boot. This is equivalent to commands such as {{ic|telinit 3}} or {{ic|telinit 5}} in SysVinit.<br />
<br />
=== Change default target to boot into ===<br />
<br />
The standard target is ''default.target'', which is aliased by default to ''graphical.target'' (which roughly corresponds to the old runlevel 5). To change the default target at boot-time, append one of the following [[kernel parameters]] to your bootloader:<br />
<br />
{{Tip|The ''.target'' extension can be left out.}}<br />
<br />
* {{ic|1=systemd.unit=multi-user.target}} (which roughly corresponds to the old runlevel 3),<br />
* {{ic|1=systemd.unit=rescue.target}} (which roughly corresponds to the old runlevel 1).<br />
<br />
Alternatively, you may leave the bootloader alone and change ''default.target''. This can be done using ''systemctl'':<br />
<br />
# systemctl enable multi-user.target<br />
<br />
The effect of this command is output by ''systemctl''; a symlink to the new default target is made at {{ic|/etc/systemd/system/default.target}}. This works if, and only if:<br />
<br />
[Install]<br />
Alias=default.target<br />
<br />
is in the target's configuration file. Currently, ''multi-user.target'' and ''graphical.target'' both have it.<br />
<br />
== Timers ==<br />
<br />
Systemd can replace cron functionality to a great extent. For further information, please refer to [[systemd/cron functionality]].<br />
<br />
== Journal ==<br />
<br />
''systemd'' has its own logging system called the journal; therefore, running a syslog daemon is no longer required. To read the log, use:<br />
<br />
# journalctl<br />
<br />
In Arch Linux, the directory {{ic|/var/log/journal/}} is part of the ''systemd'' package, so the journal (when {{ic|1=Storage=}} is set to {{ic|auto}} in {{ic|/etc/systemd/journald.conf}}) will write to {{ic|/var/log/journal/}}. If you or some program deletes that directory, ''systemd'' will '''not''' recreate it automatically; it will, however, be recreated during the next update of the ''systemd'' package. Until then, logs will be written to {{ic|/run/systemd/journal}} and will be lost on reboot.<br />
<br />
{{Tip|If {{ic|/var/log/journal/}} resides in a [[btrfs]] filesystem you should consider disabling [[Btrfs#Copy-On-Write_.28CoW.29|Copy-on-Write]] for the directory:<br />
# chattr +C /var/log/journal<br />
}}<br />
<br />
=== Filtering output ===<br />
<br />
''journalctl'' allows you to filter the output by specific fields.<br />
<br />
Examples:<br />
<br />
Show all messages from this boot:<br />
<br />
# journalctl -b<br />
<br />
However, often one is interested in messages not from the current, but from the previous boot (e.g. if an unrecoverable system crash happened). Currently, this feature is not implemented, though there was a discussion at [http://comments.gmane.org/gmane.comp.sysutils.systemd.devel/6608 systemd-devel@lists.freedesktop.org] (September/October 2012).<br />
<br />
As a workaround you can use at the moment:<br />
<br />
# journalctl --since=today | tac | sed -n '/-- Reboot --/{n;:r;/-- Reboot --/q;p;n;b r}' | tac<br />
<br />
provided that the previous boot happened today. Be aware that, if there are many messages for the current day, the output of this command can be delayed for quite some time.<br />
{{note|This needs to be corrected once systemd 206 lands. {{ic|journalctl -b}} now takes arguments such as {{ic|-0}} for the last boot or a boot ID. E.g. {{ic|journalctl -b -3}} will show all messages from the fourth-to-last boot.}}<br />
<br />
Follow new messages:<br />
<br />
# journalctl -f<br />
<br />
Show all messages by a specific executable:<br />
<br />
# journalctl /usr/lib/systemd/systemd<br />
<br />
Show all messages by a specific process:<br />
<br />
# journalctl _PID=1<br />
<br />
Show all messages by a specific unit:<br />
<br />
# journalctl -u netcfg<br />
<br />
Show kernel ring buffer:<br />
<br />
# journalctl _TRANSPORT=kernel<br />
<br />
See {{ic|man 1 journalctl}}, {{ic|man 7 systemd.journal-fields}}, or Lennart's [http://0pointer.de/blog/projects/journalctl.html blog post] for details.<br />
<br />
=== Journal size limit ===<br />
<br />
If the journal is persistent (non-volatile), its size limit is set to a default value of 10% of the size of the respective file system. For example, with {{ic|/var/log/journal}} located on a 50 GiB root partition, this would lead to 5 GiB of journal data. The maximum size of the persistent journal can be controlled by {{ic|SystemMaxUse}} in {{ic|/etc/systemd/journald.conf}}. To limit it to, for example, 50 MiB, uncomment and edit the corresponding line to:<br />
<br />
SystemMaxUse=50M<br />
<br />
Refer to {{ic|man journald.conf}} for more info.<br />
<br />
=== Journald in conjunction with syslog ===<br />
<br />
Compatibility with classic syslog implementations is provided via a socket {{ic|/run/systemd/journal/syslog}}, to which all messages are forwarded. To make the syslog daemon work with the journal, it has to bind to this socket instead of {{ic|/dev/log}} ([http://lwn.net/Articles/474968/ official announcement]). The {{Pkg|syslog-ng}} package in the repositories automatically provides the necessary configuration.<br />
<br />
# systemctl enable syslog-ng<br />
<br />
A good ''journalctl'' tutorial is [http://0pointer.de/blog/projects/journalctl.html here].<br />
<br />
== Troubleshooting ==<br />
<br />
=== Investigating systemd errors ===<br />
<br />
As an example, we will investigate an error with {{ic|systemd-modules-load}} service:<br />
<br />
1. Let's find the ''systemd'' services which fail to start:<br />
$ systemctl | grep -i failed<br />
systemd-modules-load.service loaded '''failed failed''' Load Kernel Modules<br />
<br />
2. OK, we found a problem with the {{ic|systemd-modules-load}} service. We want to know more:<br />
$ systemctl status systemd-modules-load<br />
systemd-modules-load.service - Load Kernel Modules<br />
Loaded: loaded (/usr/lib/systemd/system/systemd-modules-load.service; static)<br />
Active: '''failed''' (Result: exit-code) since So 2013-08-25 11:48:13 CEST; 32s ago<br />
Docs: man:systemd-modules-load.service(8)<br />
man:modules-load.d(5)<br />
Process: '''15630''' ExecStart=/usr/lib/systemd/systemd-modules-load ('''code=exited, status=1/FAILURE''')<br />
<br />
3. Now we have the process id (PID) to investigate this error in depth. Enter the following command with the current {{ic|Process ID}} (here: 15630):<br />
$ journalctl -b _PID=15630<br />
-- Logs begin at Sa 2013-05-25 10:31:12 CEST, end at So 2013-08-25 11:51:17 CEST. --<br />
Aug 25 11:48:13 mypc systemd-modules-load[15630]: '''Failed to find module 'blacklist usblp''''<br />
Aug 25 11:48:13 mypc systemd-modules-load[15630]: '''Failed to find module 'install usblp /bin/false'''' <br />
<br />
4. We see that some of the kernel module configuration files have wrong settings. Therefore, we have a look at these settings in {{ic|/etc/modules-load.d/}}:<br />
$ ls -al /etc/modules-load.d/<br />
total 44<br />
drwxr-xr-x 2 root root 4096 14. Jul 11:01 .<br />
drwxr-xr-x 114 root root 12288 25. Aug 11:40 ..<br />
-rw-r--r-- 1 root root 79 1. Dez 2012 blacklist.conf<br />
-rw-r--r-- 1 root root 1 2. Mär 14:30 encrypt.conf<br />
-rw-r--r-- 1 root root 3 5. Dez 2012 printing.conf<br />
-rw-r--r-- 1 root root 6 14. Jul 11:01 realtek.conf<br />
-rw-r--r-- 1 root root 65 2. Jun 23:01 virtualbox.conf<br />
<br />
5. The {{ic|Failed to find module 'blacklist usblp'}} error message might be related to a wrong setting inside of {{ic|blacklist.conf}}. Lets deactivate it with inserting a trailing '''#''' before each option we found via step 3:<br />
$ nano /etc/modules-load.d/blacklist.conf<br />
'''#''' blacklist usblp<br />
'''#''' install usblp /bin/false<br />
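The same edit can be made non-interactively. The following sketch works on a scratch copy rather than the real file, so the path and contents here are only illustrative:

```shell
# Work on a scratch copy rather than the real /etc/modules-load.d/blacklist.conf
tmpconf=$(mktemp)
printf 'blacklist usblp\ninstall usblp /bin/false\n' > "$tmpconf"

# Prefix every line mentioning the offending module with '#' (& = whole match)
sed -i 's/^.*usblp.*$/#&/' "$tmpconf"
```

After checking the result, the same {{ic|sed}} expression could be applied to the real file as root.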
<br />
6. Now, try to start {{ic|systemd-modules-load}}:<br />
$ systemctl start systemd-modules-load.service<br />
If it was successful, this should not print anything. If you see an error, go back to step 3 and use the new PID to resolve the remaining errors.<br />
<br />
If everything is ok, you can verify that the service was started successfully with:<br />
$ systemctl status systemd-modules-load<br />
systemd-modules-load.service - Load Kernel Modules<br />
Loaded: '''loaded''' (/usr/lib/systemd/system/systemd-modules-load.service; static)<br />
Active: '''active (exited)''' since So 2013-08-25 12:22:31 CEST; 34s ago<br />
Docs: man:systemd-modules-load.service(8)<br />
man:modules-load.d(5)<br />
Process: 19005 ExecStart=/usr/lib/systemd/systemd-modules-load (code=exited, status=0/SUCCESS)<br />
Aug 25 12:22:31 mypc systemd[1]: '''Started Load Kernel Modules'''.<br />
<br />
Often you can solve these kinds of problems as shown above. For further investigation, see the following section, "'''Diagnosing boot problems'''".<br />
<br />
=== Diagnosing boot problems ===<br />
<br />
Boot with these parameters on the kernel command line:<br />
{{ic|<nowiki>systemd.log_level=debug systemd.log_target=kmsg log_buf_len=1M</nowiki>}}<br />
<br />
[http://freedesktop.org/wiki/Software/systemd/Debugging More Debugging Information]<br />
<br />
=== Shutdown/reboot takes terribly long ===<br />
<br />
If the shutdown process takes a very long time (or seems to freeze), a service that fails to exit is most likely to blame. ''systemd'' waits some time for each service to exit before trying to kill it. To find out if you are affected, see [http://freedesktop.org/wiki/Software/systemd/Debugging/#shutdowncompleteseventually this article].<br />
<br />
=== Short lived processes do not seem to log any output ===<br />
<br />
If {{ic|journalctl -u foounit}} does not show any output for a short lived service, look at the PID instead. For example, if {{ic|systemd-modules-load.service}} fails, and {{ic|systemctl status systemd-modules-load}} shows that it ran as PID 123, then you might be able to see output in the journal for that PID, i.e. {{ic|journalctl -b _PID&#61;123}}. Metadata fields for the journal, such as {{ic|_SYSTEMD_UNIT}} and {{ic|_COMM}}, are collected asynchronously and rely on the process's {{ic|/proc}} directory still existing. Fixing this requires fixing the kernel to provide this data via a socket connection, similar to SCM_CREDENTIALS.<br />
<br />
=== Disabling application crash dumps journaling ===<br />
<br />
Run the following in order to overwrite the settings from {{ic|/lib/sysctl.d/}}:<br />
# ln -s /dev/null /etc/sysctl.d/50-coredump.conf<br />
# sysctl kernel.core_pattern=core<br />
<br />
This will disable logging of coredumps to the journal.<br />
<br />
Note that the default {{ic|RLIMIT_CORE}} of 0 means that no core files are written at all.<br />
If you want core files, you also need to remove the limit on the core file size in the shell:<br />
$ ulimit -c unlimited<br />
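The masking trick (symlinking a configuration file to {{ic|/dev/null}}) can be tried safely in a scratch directory first; this sketch only demonstrates the mechanism, not the real {{ic|/etc/sysctl.d/}} path:

```shell
# Demonstrate masking: a config file symlinked to /dev/null reads as empty
tmpdir=$(mktemp -d)
ln -s /dev/null "$tmpdir/50-coredump.conf"

target=$(readlink "$tmpdir/50-coredump.conf")   # where the symlink points
contents=$(cat "$tmpdir/50-coredump.conf")      # empty: settings are disabled
```

Because the masked file reads as empty, any defaults it would have set are skipped.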
<br />
See [http://www.freedesktop.org/software/systemd/man/sysctl.d.html sysctl.d] and [https://www.kernel.org/doc/Documentation/sysctl/kernel.txt the documentation for /proc/sys/kernel] for more information.<br />
<br />
== See also ==<br />
<br />
*[http://www.freedesktop.org/wiki/Software/systemd Official web site]<br />
*[[Wikipedia:systemd|Wikipedia article]]<br />
*[http://0pointer.de/public/systemd-man/ Manual pages]<br />
*[http://freedesktop.org/wiki/Software/systemd/Optimizations systemd optimizations]<br />
*[http://www.freedesktop.org/wiki/Software/systemd/FrequentlyAskedQuestions FAQ]<br />
*[http://www.freedesktop.org/wiki/Software/systemd/TipsAndTricks Tips and tricks]<br />
*[http://0pointer.de/public/systemd-ebook-psankar.pdf systemd for Administrators (PDF)]<br />
*[http://fedoraproject.org/wiki/Systemd About systemd on Fedora Project]<br />
*[http://fedoraproject.org/wiki/How_to_debug_Systemd_problems How to debug systemd problems]<br />
*[http://www.h-online.com/open/features/Control-Centre-The-systemd-Linux-init-system-1565543.html Two] [http://www.h-online.com/open/features/Booting-up-Tools-and-tips-for-systemd-1570630.html part] introductory article in ''The H Open'' magazine.<br />
*[http://0pointer.de/blog/projects/systemd.html Lennart's blog story]<br />
*[http://0pointer.de/blog/projects/systemd-update.html Status update]<br />
*[http://0pointer.de/blog/projects/systemd-update-2.html Status update2]<br />
*[http://0pointer.de/blog/projects/systemd-update-3.html Status update3]<br />
*[http://0pointer.de/blog/projects/why.html Most recent summary]<br />
*[http://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet Fedora's SysVinit to systemd cheatsheet]<br />
*[[Allow Users to Shutdown|Configuring systemd to allow normal users to shutdown]]</div>Friesofthttps://wiki.archlinux.org/index.php?title=KDM&diff=278566KDM2013-10-13T11:54:38Z<p>Friesoft: </p>
<hr />
<div>[[Category:KDE]][[Category:Display managers]]<br />
[[cs:KDM]]<br />
[[it:KDM]]<br />
[[ru:KDM]]<br />
{{Article summary start}}<br />
{{Article summary text|Provides an overview of the default display manager for KDE.}}<br />
{{Article summary heading|Related}}<br />
{{Article summary wiki|Display Manager}}<br />
{{Article summary wiki|KDE}}<br />
{{Article summary end}}<br />
<br />
== Introduction ==<br />
KDM (KDE Display Manager) is the login manager of [[KDE]]. It supports themes, automatic login, session type selection, and numerous other features.<br />
<br />
== Installation ==<br />
Install the {{Pkg|kdebase-workspace}} package:<br />
# pacman -S kdebase-workspace<br />
<br />
== Configuration ==<br />
The configuration file for KDM can be found at {{ic|/usr/share/config/kdm/kdmrc}}. See {{ic|/usr/share/doc/HTML/en/kdm/kdmrc-ref.docbook}} for all options.<br />
<br />
You can visit '''System Settings > Login Screen''' and make your changes. Whenever you press "Apply", a '''KDE Polkit authorization''' window appears which will ask you to give your root password in order to finish the changes.<br />
<br />
If you are unable to edit KDM's settings when launching System Settings as a normal user, you can use kdesu: <br />
$ kdesu kcmshell4 kdm<br />
<br />
In the pop-up kdesu window, enter your root password and wait for System Settings to be launched. Then go to Login Screen.<br />
<br />
{{Note| Since you have launched it as root, be careful when changing your settings. All settings configured in a root-launched System Settings are saved under {{ic|/root/.kde4}} and not under {{ic|~/.kde4}} (your home directory).}}<br />
<br />
=== Themes ===<br />
Arch Linux KDM themes can be installed with:<br />
<br />
# pacman -S archlinux-themes-kdm<br />
<br />
Many other KDM 4 themes are available at http://kde-look.org/index.php?xcontentmode=41.<br />
Choose between the installed themes in System Settings (run as root) as described above.<br />
<br />
=== Themes creation ===<br />
Themes files are in {{ic|/usr/share/apps/kdm/themes}}.<br />
<br />
The theme format is the same as GDM's; documentation can be found here: [http://projects.gnome.org//gdm/docs/2.18/thememanual.html#descofthemeformat Detailed Description of Theme XML format].<br />
<br />
==== ServerArgsLocal ====<br />
To force the number of dots per inch of the X server, add a {{ic|-dpi}} option to {{ic|ServerArgsLocal}}. A commonly used value is 96 DPI.<br />
<br />
{{hc|/usr/share/config/kdm/kdmrc|<br />
2=[...]<br />
ServerArgsLocal=-dpi 96 -nolisten tcp<br />
[...]<br />
}}<br />
<br />
==== Allow Root login ====<br />
To allow root login in KDM do:<br />
<br />
 # sed -i 's/AllowRootLogin=false/AllowRootLogin=true/' /usr/share/config/kdm/kdmrc<br />
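You can verify that the substitution does what you expect on a scratch copy first; the file contents below are a minimal stand-in for the real {{ic|kdmrc}}:

```shell
# Try the substitution on a minimal stand-in for kdmrc first
tmprc=$(mktemp)
printf '[X-*-Core]\nAllowRootLogin=false\n' > "$tmprc"

sed -i 's/AllowRootLogin=false/AllowRootLogin=true/' "$tmprc"
```

Once the scratch copy looks right, run the same command against the real file as root.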
<br />
==== SessionsDirs ====<br />
This variable stores a list of directories containing session type definitions in {{ic|.desktop}} format, ordered by falling priority. In Arch Linux some [[Window_Manager|window managers]] install such files in {{ic|/usr/share/xsessions}}. Add that to the list in order to be able to select them in KDM:<br />
<br />
{{hc|/usr/share/config/kdm/kdmrc|<br />
2=[...]<br />
SessionsDirs=/usr/share/config/kdm/sessions,/usr/share/apps/kdm/sessions,/usr/share/xsessions<br />
[...]<br />
}}<br />
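Each entry in these directories is a small {{ic|.desktop}} file from which KDM reads keys such as {{ic|Name}} and {{ic|Exec}}. A sketch with an illustrative (not actually installed) session file:

```shell
# An illustrative session entry like those found in /usr/share/xsessions
tmpdesk=$(mktemp)
cat > "$tmpdesk" <<'EOF'
[Desktop Entry]
Type=XSession
Exec=openbox-session
Name=Openbox
EOF

# Extract the session name that would be shown in KDM's session menu
session_name=$(sed -n 's/^Name=//p' "$tmpdesk")
```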
<br />
==== Session ====<br />
The Session variable is the name of a program which is run as the user who logs in.<br />
It is supposed to interpret the session argument (see SessionsDirs) and start the<br />
session as desired for that argument. One may wish to customize this for window manager<br />
sessions, for example to set a wallpaper and start a screensaver. To do this in a way which<br />
will survive pacman updates (which clobber Xsession) do as follows:<br />
# cp /usr/share/config/kdm/Xsession /usr/share/config/kdm/Xsession.custom<br />
In {{ic|kdmrc}} set:<br />
{{hc|/usr/share/config/kdm/kdmrc|<br />
2=[...]<br />
Session=/usr/share/config/kdm/Xsession.custom<br />
[...]<br />
}}<br />
And then edit {{ic|Xsession.custom}} as desired.<br />
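As an illustration, lines like the following might be added near the top of {{ic|Xsession.custom}}, before its session dispatch logic; both commands are examples only, substitute whatever tools you prefer:

```shell
# Example additions to Xsession.custom (before the session dispatch logic).
# Both commands are examples only; use whatever tools you prefer.
xsetroot -solid darkblue &        # set a plain background colour
xscreensaver -no-splash &         # start a screensaver daemon
```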
<br />
==== Restart X server menu option ====<br />
To allow users to restart the X server from KDM, edit this option in {{ic|kdmrc}}:<br />
{{hc|/usr/share/config/kdm/kdmrc|<br />
2=<br />
[X-:*-Greeter]<br />
[...]<br />
# Show the "Restart X Server"/"Close Connection" action in the greeter.<br />
# Default is true<br />
AllowClose=true<br />
[...]<br />
}}<br />
This feature will be available in the menu drop-down options. The option also includes a hotkey of {{ic|Alt+E}}.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Keyboard maps ===<br />
KDM keyboard keymap can be set through the configuration system (login screen section).<br />
<br />
If setting the language in the configuration system does not affect the keyboard map, you can try editing {{ic|/usr/share/config/kdm/Xsetup}} and adding the command:<br />
setxkbmap cz<br />
where {{ic|cz}} is the Czech keyboard layout. Since the file may be overwritten on the next upgrade, you may want to protect it in {{ic|/etc/pacman.conf}}:<br />
NoUpgrade = usr/share/config/kdm/Xsetup<br />
<br />
Note that the leading slash must be omitted.<br />
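The NoUpgrade rule (paths are written relative to the filesystem root, without a leading slash) can be sketched as a tiny normalization helper; the function name is made up for illustration:

```shell
# pacman's NoUpgrade entries are written without the leading slash;
# this hypothetical helper mimics that normalization
to_noupgrade() {
    printf '%s\n' "${1#/}"
}

path=$(to_noupgrade /usr/share/config/kdm/Xsetup)
```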
<br />
=== Slow KDM Start ===<br />
If KDM is taking a long time to display the login screen (e.g. 15-30 seconds), try rebuilding the X font caches:<br />
# fc-cache -fv</div>Friesofthttps://wiki.archlinux.org/index.php?title=Libvirt&diff=107064Libvirt2010-05-23T18:33:12Z<p>Friesoft: /* Building libvirt for xen */</p>
<hr />
<div>[[Category:Emulators (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
<br />
libvirt is an abstraction layer and a daemon for managing virtual machines, remotely or locally, using multiple virtualization backends (QEMU/KVM, VirtualBox, Xen, etc).<br />
<br />
This article does not try to cover everything about libvirt, just the things that were not intuitive at first or not well documented.<br />
<br />
=Installing=<br />
Currently libvirt and tools are only available from AUR. If you are unfamiliar with how to install AUR packages, see: [[Arch User Repository]].<br />
<br />
For servers you need the [http://aur.archlinux.org/packages.php?ID=32467 libvirt] package from AUR and urlgrabber, qemu-kvm, dnsmasq and bridge-utils from Arch repositories:<br />
<br />
pacman -S urlgrabber qemu-kvm dnsmasq bridge-utils<br />
<br />
For GUI management tools you also need all of the following from AUR: [http://aur.archlinux.org/packages.php?ID=15477 virtviewer] [http://aur.archlinux.org/packages.php?ID=15459 virtinst] [http://aur.archlinux.org/packages.php?ID=15461 virt-manager]<br />
<br />
==Building libvirt for xen==<br />
The PKGBUILD for libvirt-git on the AUR currently disables xen support with the <tt>--without-xen</tt> flag during the make process. If you want to use libvirt for managing xen, you'll need to re-enable it. Furthermore you need to make sure you have [http://aur.archlinux.org/packages.php?ID=36346 libxenctrl] installed.<br />
<br />
The alternative XenAPI driver is currently lacking a package (as of 2010-05-23).<br />
<br />
=Configuration=<br />
<br />
==Run daemon==<br />
To run the libvirt daemon:<br />
<br />
sudo /etc/rc.d/libvirtd start<br />
<br />
If you want to start it at boot, edit "<tt>/etc/rc.conf</tt>" and add <tt>libvirtd</tt> to the <tt>DAEMONS=</tt> line.<br />
<br />
==Polkit authentication==<br />
{{Note | ??? I never got this to work. If you know how to do it, please edit this section}}<br />
<br />
To allow yourself to manage VMs as non-root, run this on the server:<br />
<br />
sudo polkit-auth --user $USERNAME --grant org.libvirt.unix.manage<br />
<br />
Alternatively you can only grant the monitoring rights with <tt>org.libvirt.unix.monitor</tt><br />
<br />
If logging in through ssh you will need to make sure ConsoleKit is used. Place the following in ''/etc/pam.d/sshd'':<br />
<br />
'''session optional pam_ck_connector.so'''<br />
<br />
==Unix File-based Permissions==<br />
{{Note | This is an alternative to Polkit authentication.}}<br />
If you wish to use unix file-based permissions to allow some non-root users to use libvirt, you can modify the config files.<br />
<br />
First you will need to create the libvirt group and add any users you want to have access to libvirt to that group. <br />
<br />
sudo groupadd libvirt<br />
sudo gpasswd -a [user] libvirt<br />
<br />
Any users that are currently logged in will need to log out and back in to update their groups. Alternatively, the user can run the following command in the shell from which the libvirt tools will be launched:<br />
<br />
newgrp libvirt<br />
<br />
Then you can either enable permissions-based access by uncommenting the following line in the PKGBUILD for libvirt before running makepkg:<br />
<br />
# patch -Np1 -i "$srcdir"/unixperms.patch || return 1<br />
<br />
or you can make the changes to your permissions and config files by hand. Uncomment the following lines in the file /etc/libvirt/libvirtd.conf (they are not all in the same location in the file):<br />
<br />
#unix_sock_group = "libvirt"<br />
#unix_sock_ro_perms = "0777"<br />
#unix_sock_rw_perms = "0770"<br />
#auth_unix_ro = "none"<br />
#auth_unix_rw = "none"<br />
<br />
You may also wish to change unix_sock_ro_perms from "0777" to "0770" to disallow read-only access to people who are not members of the libvirt group.<br />
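The hand edit can be sketched with sed on a scratch copy of <tt>libvirtd.conf</tt>; the file below contains only a few of the listed lines, for illustration:

```shell
# Uncomment the socket/auth settings on a scratch copy (abbreviated contents)
tmpconf=$(mktemp)
cat > "$tmpconf" <<'EOF'
#unix_sock_group = "libvirt"
#unix_sock_rw_perms = "0770"
#auth_unix_rw = "none"
EOF

# Drop the leading '#' from the unix_sock_* and auth_unix_* lines only
sed -i 's/^#\(unix_sock_\|auth_unix_\)/\1/' "$tmpconf"
```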
<br />
==Enable KVM acceleration for QEMU==<br />
{{Note | KVM will conflict with VirtualBox. You cannot use KVM and VirtualBox at the same time.}}<br />
<br />
Running virtual machines with the usual QEMU emulation, without KVM, will be '''painfully slow'''. You definitely want to enable KVM support if your CPU supports it. To find out, run the following:<br />
<br />
egrep '^flags.*(vmx|svm)' /proc/cpuinfo<br />
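The same check can be wrapped in a small helper and exercised against sample <tt>/proc/cpuinfo</tt> lines instead of the live file (the helper name is made up for illustration):

```shell
# Hypothetical helper: succeed if a cpuinfo flags line advertises vmx or svm
has_virt() {
    printf '%s\n' "$1" | grep -Eq '^flags.*(vmx|svm)'
}

has_virt 'flags : fpu vme svm lm' && amd_ok=yes || amd_ok=no
has_virt 'flags : fpu vme lm'     && plain_ok=yes || plain_ok=no
```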
<br />
To enable KVM, you need to load the <tt>kvm-amd</tt> or <tt>kvm-intel</tt> kernel module depending on your CPU. Run modprobe:<br />
<br />
sudo modprobe kvm-amd<br />
<br />
Usually you would also add it to the <tt>MODULES=</tt> line in "<tt>/etc/rc.conf</tt>"<br />
<br />
If KVM is '''not''' working, you will find the following message in your "<tt>/var/log/libvirt/qemu/VIRTNAME.log</tt>"<br />
<br />
Could not initialize KVM, will disable KVM support<br />
<br />
More info is available from the [http://www.linux-kvm.org/page/FAQ official KVM FAQ]<br />
<br />
=Usage=<br />
<br />
==Installing new VM==<br />
To create a new VM, you need some sort of installation media, which is usually a plain <tt>.iso</tt> file. Copy it to the "<tt>/var/lib/libvirt/images</tt>" directory (alternatively you can create a new ''storage pool'' directory in virt-manager and copy it there)<br />
<br />
Then run virt-manager, connect to the server, right click on the connection and choose '''New'''. Choose a name, and select '''Local install media'''. Just continue with the wizard.<br />
<br />
On the '''4th step''', you may want to uncheck ''Allocate entire disk now'' -- this way you will save space when your VM isn't using all of its disk. However, this can cause increased fragmentation of the disk.<br />
<br />
On the '''5th step''', open '''Advanced options''' and make sure that ''Virt Type'' is set to '''kvm'''. If the kvm choice is not available, see section [[#Enable KVM acceleration for QEMU|Enable KVM acceleration for QEMU]] above.<br />
<br />
==Creating a storage pool in virt-manager==<br />
First, connect to an existing server. Once you're there, right click and choose '''Details'''. Go to '''Storage''' and press the '''+''' icon at the lower left. Then just follow the wizard. :)<br />
<br />
==Using VirtualBox with virt-manager==<br />
{{Note | VirtualBox support in libvirt is not quite stable yet and may cause your libvirtd to crash. Usually this is harmless and everything will be back once you restart the daemon. }}<br />
<br />
virt-manager does not let you add any VirtualBox connections from the GUI. However, you can launch it from the command line:<br />
<br />
virt-manager -c vbox:///system<br />
<br />
Or if you want to manage a remote system over SSH:<br />
<br />
virt-manager -c vbox+ssh://username@host/system<br />
<br />
=Remote access to libvirt=<br />
<br />
==Using unencrypted TCP/IP socket (most simple, least secure)==<br />
{{Note | Only for testing or use over a trusted network}}<br />
<br />
Edit <tt>/etc/libvirt/libvirtd.conf</tt>:<br />
<pre><br />
listen_tcp = 1<br />
auth_tcp=none<br />
</pre><br />
<br />
{{Note | We do not enable SASL here, so all TCP traffic is cleartext! For real-world use, always enable SASL.}}<br />
<br />
It is also necessary to start the server in listening mode by editing <tt>/etc/conf.d/libvirtd</tt>:<br />
<pre><br />
LIBVIRTD_ARGS="--listen"<br />
</pre><br />
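Before restarting the daemon you can sanity-check both settings with grep; sketched here against scratch copies of the two files:

```shell
# Sanity-check both settings on scratch copies of the two files
conf=$(mktemp); args=$(mktemp)
echo 'listen_tcp = 1'           > "$conf"
echo 'LIBVIRTD_ARGS="--listen"' > "$args"

grep -q '^listen_tcp *= *1' "$conf" && tcp_ok=yes  || tcp_ok=no
grep -q -- '--listen' "$args"       && args_ok=yes || args_ok=no
```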
<br />
==Using SSH==<br />
The <tt>nc</tt> utility is needed for remote management over SSH:<br />
pacman -S openbsd-netcat<br />
ln -s /usr/bin/nc.openbsd /usr/bin/nc<br />
<br />
To connect to the remote system using virsh:<br />
virsh -c qemu+ssh://username@host/system<br />
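The remote URI follows a fixed pattern: driver, transport, user, host, and path. A tiny helper (hypothetical, for illustration) makes the pieces explicit:

```shell
# Hypothetical helper: assemble a driver+ssh://user@host/system URI
libvirt_uri() {
    printf '%s+ssh://%s@%s/system\n' "$1" "$2" "$3"
}

uri=$(libvirt_uri qemu username host)
```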
<br />
If something goes wrong, you can get some logs using:<br />
LIBVIRT_DEBUG=1 virsh -c qemu+ssh://username@host/system<br />
<br />
To display the graphical console for a virtual machine:<br />
virt-viewer --connect qemu+ssh://username@host/system myvm<br />
<br />
To display the virtual machine desktop management tool:<br />
virt-manager -c qemu+ssh://username@host/system</div>Friesofthttps://wiki.archlinux.org/index.php?title=Libvirt&diff=107062Libvirt2010-05-23T18:04:22Z<p>Friesoft: /* Installing */</p>
<hr />
<div>[[Category:Emulators (English)]]<br />
[[Category:HOWTOs (English)]]<br />
<br />
<br />
libvirt is an abstraction layer and a daemon for managing virtual machines, remotely or locally, using multiple virtualization backends (QEMU/KVM, VirtualBox, Xen, etc).<br />
<br />
This article does not try to cover everything about libvirt, just the things that were not intuitive at first or not well documented.<br />
<br />
=Installing=<br />
Currently libvirt and tools are only available from AUR. If you are unfamiliar with how to install AUR packages, see: [[Arch User Repository]].<br />
<br />
For servers you need the [http://aur.archlinux.org/packages.php?ID=32467 libvirt] package from AUR and urlgrabber, qemu-kvm, dnsmasq and bridge-utils from Arch repositories:<br />
<br />
pacman -S urlgrabber qemu-kvm dnsmasq bridge-utils<br />
<br />
For GUI management tools you also need all of the following from AUR: [http://aur.archlinux.org/packages.php?ID=15477 virtviewer] [http://aur.archlinux.org/packages.php?ID=15459 virtinst] [http://aur.archlinux.org/packages.php?ID=15461 virt-manager]<br />
<br />
==Building libvirt for xen==<br />
The PKGBUILD for libvirt on the AUR currently disables xen support with the <tt>--without-xen</tt> flag during the make process. If you want to use libvirt for managing xen, you'll need to re-enable it and make sure that you have the xen hypervisor tools installed (the AUR package xen-hv-tools provides xen and its tools without the dom0 and domU kernels).<br />
<br />
=Configuration=<br />
<br />
==Run daemon==<br />
To run the libvirt daemon:<br />
<br />
sudo /etc/rc.d/libvirtd start<br />
<br />
If you want to start it at boot, edit "<tt>/etc/rc.conf</tt>" and add <tt>libvirtd</tt> to the <tt>DAEMONS=</tt> line.<br />
<br />
==Polkit authentication==<br />
{{Note | ??? I never got this to work. If you know how to do it, please edit this section}}<br />
<br />
To allow yourself to manage VMs as non-root, run this on the server:<br />
<br />
sudo polkit-auth --user $USERNAME --grant org.libvirt.unix.manage<br />
<br />
Alternatively you can only grant the monitoring rights with <tt>org.libvirt.unix.monitor</tt><br />
<br />
If logging in through ssh you will need to make sure ConsoleKit is used. Place the following in ''/etc/pam.d/sshd'':<br />
<br />
'''session optional pam_ck_connector.so'''<br />
<br />
==Unix File-based Permissions==<br />
{{Note | This is an alternative to Polkit authentication.}}<br />
If you wish to use unix file-based permissions to allow some non-root users to use libvirt, you can modify the config files.<br />
<br />
First you will need to create the libvirt group and add any users you want to have access to libvirt to that group. <br />
<br />
sudo groupadd libvirt<br />
sudo gpasswd -a [user] libvirt<br />
<br />
Any users that are currently logged in will need to log out and back in to update their groups. Alternatively, the user can run the following command in the shell from which the libvirt tools will be launched:<br />
<br />
newgrp libvirt<br />
<br />
Then you can either enable permissions-based access by uncommenting the following line in the PKGBUILD for libvirt before running makepkg:<br />
<br />
# patch -Np1 -i "$srcdir"/unixperms.patch || return 1<br />
<br />
or you can make the changes to your permissions and config files by hand. Uncomment the following lines in the file /etc/libvirt/libvirtd.conf (they are not all in the same location in the file):<br />
<br />
#unix_sock_group = "libvirt"<br />
#unix_sock_ro_perms = "0777"<br />
#unix_sock_rw_perms = "0770"<br />
#auth_unix_ro = "none"<br />
#auth_unix_rw = "none"<br />
<br />
You may also wish to change unix_sock_ro_perms from "0777" to "0770" to disallow read-only access to people who are not members of the libvirt group.<br />
<br />
==Enable KVM acceleration for QEMU==<br />
{{Note | KVM will conflict with VirtualBox. You cannot use KVM and VirtualBox at the same time.}}<br />
<br />
Running virtual machines with the usual QEMU emulation, without KVM, will be '''painfully slow'''. You definitely want to enable KVM support if your CPU supports it. To find out, run the following:<br />
<br />
egrep '^flags.*(vmx|svm)' /proc/cpuinfo<br />
<br />
To enable KVM, you need to load the <tt>kvm-amd</tt> or <tt>kvm-intel</tt> kernel module depending on your CPU. Run modprobe:<br />
<br />
sudo modprobe kvm-amd<br />
<br />
Usually you would also add it to the <tt>MODULES=</tt> line in "<tt>/etc/rc.conf</tt>"<br />
<br />
If KVM is '''not''' working, you will find the following message in your "<tt>/var/log/libvirt/qemu/VIRTNAME.log</tt>"<br />
<br />
Could not initialize KVM, will disable KVM support<br />
<br />
More info is available from the [http://www.linux-kvm.org/page/FAQ official KVM FAQ]<br />
<br />
=Usage=<br />
<br />
==Installing new VM==<br />
To create a new VM, you need some sort of installation media, which is usually a plain <tt>.iso</tt> file. Copy it to the "<tt>/var/lib/libvirt/images</tt>" directory (alternatively you can create a new ''storage pool'' directory in virt-manager and copy it there)<br />
<br />
Then run virt-manager, connect to the server, right click on the connection and choose '''New'''. Choose a name, and select '''Local install media'''. Just continue with the wizard.<br />
<br />
On the '''4th step''', you may want to uncheck ''Allocate entire disk now'' -- this way you will save space when your VM isn't using all of its disk. However, this can cause increased fragmentation of the disk.<br />
<br />
On the '''5th step''', open '''Advanced options''' and make sure that ''Virt Type'' is set to '''kvm'''. If the kvm choice is not available, see section [[#Enable KVM acceleration for QEMU|Enable KVM acceleration for QEMU]] above.<br />
<br />
==Creating a storage pool in virt-manager==<br />
First, connect to an existing server. Once you're there, right click and choose '''Details'''. Go to '''Storage''' and press the '''+''' icon at the lower left. Then just follow the wizard. :)<br />
<br />
==Using VirtualBox with virt-manager==<br />
{{Note | VirtualBox support in libvirt is not quite stable yet and may cause your libvirtd to crash. Usually this is harmless and everything will be back once you restart the daemon. }}<br />
<br />
virt-manager does not let you add any VirtualBox connections from the GUI. However, you can launch it from the command line:<br />
<br />
virt-manager -c vbox:///system<br />
<br />
Or if you want to manage a remote system over SSH:<br />
<br />
virt-manager -c vbox+ssh://username@host/system<br />
<br />
=Remote access to libvirt=<br />
<br />
==Using unencrypted TCP/IP socket (most simple, least secure)==<br />
{{Note | Only for testing or use over a trusted network}}<br />
<br />
Edit <tt>/etc/libvirt/libvirtd.conf</tt>:<br />
<pre><br />
listen_tcp = 1<br />
auth_tcp=none<br />
</pre><br />
<br />
{{Note | We do not enable SASL here, so all TCP traffic is cleartext! For real-world use, always enable SASL.}}<br />
<br />
It is also necessary to start the server in listening mode by editing <tt>/etc/conf.d/libvirtd</tt>:<br />
<pre><br />
LIBVIRTD_ARGS="--listen"<br />
</pre><br />
<br />
==Using SSH==<br />
The <tt>nc</tt> utility is needed for remote management over SSH:<br />
pacman -S openbsd-netcat<br />
ln -s /usr/bin/nc.openbsd /usr/bin/nc<br />
<br />
To connect to the remote system using virsh:<br />
virsh -c qemu+ssh://username@host/system<br />
<br />
If something goes wrong, you can get some logs using:<br />
LIBVIRT_DEBUG=1 virsh -c qemu+ssh://username@host/system<br />
<br />
To display the graphical console for a virtual machine:<br />
virt-viewer --connect qemu+ssh://username@host/system myvm<br />
<br />
To display the virtual machine desktop management tool:<br />
virt-manager -c qemu+ssh://username@host/system</div>Friesofthttps://wiki.archlinux.org/index.php?title=Xen&diff=107060Xen2010-05-23T17:40:35Z<p>Friesoft: /* Hypervisor (dom0) */</p>
<hr />
<div>[[Category:Emulators (English)]]<br />
[[Category:HOWTOs (English)]]<br />
This document explains how to setup Xen for Arch Linux.<br />
<br />
==What is Xen?==<br />
According to the Xen development team: "The Xen hypervisor, the powerful open source industry standard for virtualization, offers a powerful, efficient, and secure feature set for virtualization of x86, x86_64, IA64, PowerPC, and other CPU architectures. It supports a wide range of guest operating systems including Windows®, Linux®, Solaris®, and various versions of the BSD operating systems."<br />
<br />
The Xen hypervisor is a thin layer of software which emulates a computer architecture. It is started by the boot loader and allows several operating systems to run simultaneously on top of it. Once the Xen hypervisor is loaded, it starts the "dom0" (for "domain 0"), or privileged domain, which in our case runs a modified Linux kernel (other possible dom0 operating systems are NetBSD and OpenSolaris). The dom0 kernel in the AUR is currently based on Linux kernel 2.6.32.13; hardware must be supported by this kernel to run Xen. Once the dom0 has started, one or more "domU" (for "user") or unprivileged domains can be started and controlled from the dom0.<br />
<br />
==Setting up Xen==<br />
<br />
===Installing the necessary packages===<br />
Before building xen4, you first need to build the required lib32-glibc-devel package from the AUR:<br />
<pre>yaourt -S lib32-glibc-devel</pre><br />
<br />
After installing it, create a temporary symlink:<br />
<pre>ln -s /opt/lib32/usr/include/gnu/stubs-32.h /usr/include/gnu</pre><br />
<br />
The next step is to install xen from the AUR. You can install either xen version 3 or 4 by choosing the xen or xen4 package respectively. This article focuses on xen4.<br />
<pre><br />
yaourt -S libxenctrl ## this is currently missing from the dependencies of xen4 (23.5.2010)<br />
yaourt -S xen4<br />
</pre><br />
<br />
If you need only the xen-tools, version 4 can be installed by using the xen-hv-tools package.<br />
<br />
The next step is to build and install the dom0 kernel. To do so, build the kernel26-xen-dom0 package from the AUR.<br />
<br />
<pre>yaourt -S kernel26-xen-dom0</pre><br />
<br />
'''Please note: the kernel configuration has not yet been adapted to 2.6.32, so you will be asked about the new 2.6.32 parameters; accept the defaults (or whatever else you may want) by pressing Enter each time you are asked. There may also be issues during installation, as the package overwrites the linux-firmware files in /usr/lib/firmware.'''<br />
<br />
And there you go: the build is finished. Now you can configure GRUB and boot into the newly built kernel.<br />
<br />
===Configuring GRUB===<br />
Grub must be configured so that the Xen hypervisor is booted followed by the dom0 kernel. Add the following entry to /boot/grub/menu.lst:<br />
<br />
<pre><br />
title Xen with Arch Linux<br />
root (hd0,X)<br />
kernel /xen.gz dom0_mem=524288<br />
module /vmlinuz26-xen-dom0 root=/dev/sdaY ro console=tty0 vga=gfx-1024x768x8<br />
module /kernel26-xen-dom0.img<br />
</pre><br />
<br />
where X and Y are the appropriate numbers for your disk configuration, and dom0_mem, console, and vga are optional, customizable parameters. You can also use LVM volumes: instead of /dev/sdaY, specify e.g. /dev/mapper/somelvm. Note that the vga parameter works a bit differently from the usual kernel command line; it has been discussed on the xen-devel list: [http://lists.xensource.com/archives/html/xen-devel/2008-05/msg00576.html]<br />
<br />
The standard Arch kernel can be used to boot the domUs. For this to work, you must add 'xen-blkfront' to the MODULES array in /etc/mkinitcpio.conf:<br />
<br />
<pre><br />
MODULES="... xen-blkfront ..."<br />
</pre><br />
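Whether 'xen-blkfront' actually made it into the MODULES array can be checked with grep; this sketch uses a scratch copy of mkinitcpio.conf rather than the real file:

```shell
# Check a scratch mkinitcpio.conf for the xen-blkfront module
tmpconf=$(mktemp)
echo 'MODULES="ext4 xen-blkfront"' > "$tmpconf"

grep -q '^MODULES=.*xen-blkfront' "$tmpconf" && mod_ok=yes || mod_ok=no
```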
<br />
The next step is to reboot into the Xen kernel.<br />
<br />
Then start xend:<br />
<br />
<pre><br />
# /etc/rc.d/xend start<br />
</pre><br />
<br />
Allocating a fixed amount of memory is recommended when using Xen. Also, if you are running I/O-intensive guests it might be a good idea to dedicate (pin) a CPU core for dom0 use only. Please see the external [http://wiki.xensource.com/xenwiki/XenCommonProblems XenCommonProblems] wiki page section "Can I dedicate a cpu core (or cores) only for dom0?" for more information.<br />
<br />
===Configuring GRUB2===<br />
<br />
This works just like with GRUB legacy, but you need to use the command 'multiboot' instead of 'kernel'. So it becomes:<br />
<pre><br />
# (2) Arch Linux(XEN)<br />
menuentry "Arch Linux(XEN)" {<br />
set root=(hd0,X)<br />
multiboot /boot/xen.gz dom0_mem=2048M<br />
module /boot/vmlinuz26-xen-dom0 root=/dev/sdaY ro<br />
	module /boot/kernel26-xen-dom0.img<br />
}<br />
</pre><br />
<br />
If you booted into the dom0 kernel successfully, we can continue.<br />
<br />
===Add domU instances===<br />
<br />
The basic idea behind adding a domU is as follows. We must get the domU kernels, allocate space for the virtual hard disk, create a configuration file for the domU, and finally start the domU with xm.<br />
<br />
<pre><br />
$ mkfs.ext4 /dev/sdb1 ## format partition<br />
$ mkdir /tmp/install<br />
$ mount /dev/sdb1 /tmp/install<br />
$ mkdir /tmp/install/var/lib/pacman<br />
$ mkdir /tmp/install/var/cache/pacman/pkg<br />
$ pacman -Sy base -r /tmp/install --cachedir /tmp/install/var/cache/pacman/pkg<br />
$ mount -o bind /dev /tmp/install/dev<br />
$ mount -t proc none /tmp/install/proc<br />
$ mount -o bind /sys /tmp/install/sys<br />
$ chroot /tmp/install /bin/bash<br />
$ vi /etc/resolv.conf<br />
$ vi /etc/fstab<br />
/dev/xvda / ext4 defaults 0 1<br />
<br />
$ vi /etc/inittab<br />
c1:2345:respawn:/sbin/agetty -8 38400 hvc0 linux<br />
#c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux<br />
#c2:2345:respawn:/sbin/agetty -8 38400 tty2 linux<br />
#c3:2345:respawn:/sbin/agetty -8 38400 tty3 linux<br />
#c4:2345:respawn:/sbin/agetty -8 38400 tty4 linux<br />
#c5:2345:respawn:/sbin/agetty -8 38400 tty5 linux<br />
#c6:2345:respawn:/sbin/agetty -8 38400 tty6 linux<br />
<br />
<br />
$ exit ## exit chroot<br />
$ umount /tmp/install/dev<br />
$ umount /tmp/install/proc<br />
$ umount /tmp/install/sys<br />
$ umount /tmp/install<br />
</pre><br />
If you are not starting from a fresh install and want to rsync from an existing system:<br />
<pre><br />
$ mkfs.ext4 /dev/sdb1 ## format lv partition<br />
$ mkdir /tmp/install<br />
$ mount /dev/sdb1 /tmp/install<br />
$ mkdir /tmp/install/{proc,sys}<br />
$ chmod 555 /tmp/install/proc<br />
$ rsync -avzH --delete --exclude=proc/ --exclude=sys/ old_ooga:/ /tmp/install/<br />
<br />
$ vi /etc/xen/dom01 ## create config file<br />
# -*- mode: python; -*-<br />
kernel = "/boot/vmlinuz26"<br />
ramdisk = "/boot/kernel26.img"<br />
memory = 1024<br />
name = "dom01"<br />
vif = [ 'mac=00:16:3e:00:01:01' ]<br />
disk = [ 'phy:/dev/sdb1,xvda,w' ]<br />
dhcp="dhcp"<br />
hostname = "ooga"<br />
root = "/dev/xvda ro"<br />
<br />
$ xm create -c dom01<br />
</pre><br />
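The vif= line above hard-codes a MAC address; Xen conventionally uses addresses from its assigned 00:16:3e prefix. A small sketch for generating a random address in that range (the xen_mac helper is our own invention, not a Xen tool):

```shell
# Print a random MAC address within the Xen-assigned 00:16:3e prefix.
xen_mac() {
    printf '00:16:3e'
    # Three random bytes, printed as lowercase hex pairs.
    od -An -N3 -tx1 /dev/urandom | awk '{ printf ":%s:%s:%s\n", $1, $2, $3 }'
}

xen_mac
```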
<br />
== Alternative method ==<br />
<br />
To set up Xen by hand, see [[Xen_Install]]. Be careful: that page is very old and is marked for deletion. It details installing a custom Xen kernel and the Xen userland tools manually, rather than taking advantage of the AUR packages described above. Perhaps someone can write an up-to-date howto on this page instead, so that the other article can be deleted.<br />
<br />
==Arch as Xen guest (PV mode)==<br />
<br />
To get paravirtualization you need to install:<br />
* [http://aur.archlinux.org/packages.php?ID=16087 kernel26-xen]<br />
* (optional) [http://aur.archlinux.org/packages.php?ID=28591 xe-guest-utilities]<br />
<br />
Change the mode to PV with the following commands (on the dom0):<br />
xe vm-param-set uuid=<vm uuid> HVM-boot-policy=""<br />
xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub<br />
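The &lt;vm uuid&gt; placeholder can be looked up on the dom0 with the xe CLI, for example (a sketch, assuming the XenServer/XCP xe tool is available):

```
xe vm-list params=uuid,name-label
```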
<br />
Edit /boot/grub/menu.lst and add kernel26-xen:<br />
# (1) Arch Linux (domU)<br />
title Arch Linux (domU)<br />
root (hd0,0)<br />
kernel /boot/vmlinuz26-xen root=/dev/xvda1 ro console=hvc0<br />
initrd /boot/kernel26-xen.img<br />
<br />
=== xe-guest-utilities ===<br />
To use xe-guest-utilities, add a xenfs mount point to /etc/fstab:<br />
xenfs /proc/xen xenfs defaults 0 0<br />
and add xe-linux-distribution to the DAEMONS array in /etc/rc.conf.<br />
<br />
===Notes===<br />
* pygrub does not show a boot menu.<br />
* To avoid hwclock error messages, set HARDWARECLOCK="xen" in /etc/rc.conf (actually, any value other than "UTC" and "localtime" works here).<br />
* If you want to return to a hardware VM (HVM), set HVM-boot-policy="BIOS order".<br />
<br />
==Xen management tools==<br />
You can use the following tools:<br />
virt-manager<br />
<br />
==Packages==<br />
Since quite a few packages are available in the AUR and it can be hard to figure out what is needed, here is a small collection of the most interesting Xen packages (last updated: 23.5.2010).<br />
<br />
===Hypervisor (dom0)===<br />
* Kernel: [http://aur.archlinux.org/packages.php?ID=29023 kernel26-xen-dom0]<br />
* Userspace tools of Xen4: the xen4 package<br />
** Needed to compile xen4: [http://aur.archlinux.org/packages.php?ID=36346 libxenctrl]<br />
* Userspace tools of Xen4 (conflicting with xen4): [http://aur.archlinux.org/packages.php?ID=28984 xen-hv-tools]<br />
* Userspace tools of Xen3: [http://aur.archlinux.org/packages.php?ID=14640 xen]<br />
* Some (debian) scripts for disk creation, etc: [http://aur.archlinux.org/packages.php?ID=37421 xen-tools]<br />
<br />
===Guest (domU)===<br />
* Kernel: [http://aur.archlinux.org/packages.php?ID=16087 kernel26-xen]<br />
* Guest utils: [http://aur.archlinux.org/packages.php?ID=28591 xe-guest-utils]<br />
<br />
===Monitoring===<br />
?? (is something similar to xsconsole for XenServer available?)<br />
<br />
===Unknown===<br />
* [http://aur.archlinux.org/packages.php?ID=36373 libxen4]<br />
* [http://aur.archlinux.org/packages.php?ID=36457 libxenserver]<br />
<br />
===Unrelated packages===<br />
(e.g. for XenServer/Xen Cloud Platform)<br />
* XenServer frontend: [http://aur.archlinux.org/packages.php?ID=34398 openxencenter]<br />
* XenServer frontend svn version: [http://aur.archlinux.org/packages.php?ID=36074 openxencenter-svn]<br />
* Xen Cloud Platform frontend: [http://aur.archlinux.org/packages.php?ID=36458 xvp]<br />
<br />
==Resources==<br />
<br />
* Xen's homepage: [http://www.xen.org/]<br />
* The Xen Wiki: [http://wiki.xensource.com/xenwiki/]<br />
* Xen kernel patches: [http://code.google.com/p/gentoo-xen-kernel/]</div>Friesofthttps://wiki.archlinux.org/index.php?title=Talk:Xen&diff=107059Talk:Xen2010-05-23T17:38:14Z<p>Friesoft: </p>
<hr />
<div>'''<big>For maximum benefit I suggest using the discussion page for page editing collaboration.</big>'''<br />
<br />
----<br />
<br />
'''Firstly''', some things on the current page aren't really clear, imho:<br />
<br />
* "The standard arch kernel can be use to boot the domUs."<br/><br />
So, domUs can use the standard kernel?<br />
<br />
* "In order for this to work one must add 'xen-blkfront' to the modules array in /etc/mkinitcpio.conf"<br/><br />
On the domUs?<br />
<br />
* "The basic idea behind adding a domU is as follows. We must get the domU kernels (...)"<br/><br />
BUT one can also use a standard archlinux kernel? Maybe there is an inconsistency.<br />
<br />
'''Secondly''':<br />
* Maybe someone more experienced can explain how to use encrypted (LUKS) filesystems with xen;<br />
* Maybe someone more experienced can add a section on how to use other OSs like Microsoft&reg; Windows&trade;.<br />
* I opt to delete the alternative installation method because experienced users can figure this stuff out for themselves and it might be horribly outdated;<br />
<br />
-- [[User:Voidzero|Voidzero]] 16:54, 7 May 2010 (EDT)<br />
<br />
'''Thirdly''':<br />
This is incorrect - it was stated before to be a frontend for Xen - it is NOT - it is a frontend for XenServer by Citrix:<br />
* [http://aur.archlinux.org/packages.php?ID=34398 openxencenter] or [http://aur.archlinux.org/packages.php?ID=36074 openxencenter-svn] is GUI similar to citrix xen console.<br />
<br />
This is also not for Xen directly - it is for the [http://www.xen.org/products/cloudxen.html Xen Cloud Platform], which is something different afaik:<br />
* [http://aur.archlinux.org/packages.php?ID=36458 xvp] is web interface and vnc proxy.<br />
After installing xvp, you need to generate /etc/xvp.conf with xvpdiscover tool and adjust your web server for using /srv/http/xvpweb/.<br />
<br />
-- [[User:Friesoft|Friesoft]] 19:01, 23 May 2010 (EDT)<br />
<br />
'''Old section about build failures with dom0 kernel 2.6.31''':<br />
<br />
Please note: At the time of this writing (23.5.2010) the current version uses a patched kernel version 2.6.31, which won't compile using gcc 4.5.<br />
<br />
The problem can be worked around by temporarily downgrading gcc and gcc-libs to version 4.4.3. It may also be needed to build the gmp4 package.<br />
You can possibly find gcc-4.4.3 and gcc-libs-4.4.3 on ARM (see [http://wiki.archlinux.org/index.php/Downgrading_Packages#Finding_Your_Older_Version Downgrading Packages])<br />
<pre><br />
pacman -Ud gcc-*<br />
yaourt -S gmp4 # not sure if this is really needed<br />
</pre><br />
Note: this didn't work for me (friesoft, 23.5.2010)</div>Friesofthttps://wiki.archlinux.org/index.php?title=Xen&diff=107058Xen2010-05-23T17:36:36Z<p>Friesoft: Removed note about gcc</p>
<hr />
<div>[[Category:Emulators (English)]]<br />
[[Category:HOWTOs (English)]]<br />
This document explains how to set up Xen on Arch Linux.<br />
<br />
==What is Xen?==<br />
According to the Xen development team: "The Xen hypervisor, the powerful open source industry standard for virtualization, offers a powerful, efficient, and secure feature set for virtualization of x86, x86_64, IA64, PowerPC, and other CPU architectures. It supports a wide range of guest operating systems including Windows®, Linux®, Solaris®, and various versions of the BSD operating systems."<br />
<br />
The Xen hypervisor is a thin layer of software which emulates a computer architecture. It is started by the boot loader and allows several operating systems to run simultaneously on top of it. Once the Xen hypervisor is loaded, it starts the "dom0" (for "domain 0"), or privileged domain, which in our case runs a modified Linux kernel (other possible dom0 operating systems are NetBSD and OpenSolaris). The dom0 kernel in the AUR is currently based on Linux kernel 2.6.32.13; hardware must be supported by this kernel to run Xen. Once the dom0 has started, one or more "domU" (for "user") or unprivileged domains can be started and controlled from the dom0.<br />
<br />
==Setting up Xen==<br />
<br />
===Installing the necessary packages===<br />
Before building xen4, first build the required lib32-glibc-devel package from the AUR.<br />
<pre>yaourt -S lib32-glibc-devel</pre><br />
<br />
After installing it, create a temporary symlink:<br />
<pre>ln -s /opt/lib32/usr/include/gnu/stubs-32.h /usr/include/gnu</pre><br />
<br />
The next step is to install Xen from the AUR. You can install either Xen version 3 or 4 by choosing the xen or xen4 package, respectively. To prevent confusion, this wiki focuses on xen4. :)<br />
<pre><br />
yaourt -S libxenctrl ## this is currently missing from the dependencies of xen4 (23.5.2010)<br />
yaourt -S xen4<br />
</pre><br />
<br />
If you need only the xen-tools, version 4 can be installed by using the xen-hv-tools package.<br />
<br />
The next step is to build and install the dom0 kernel. To do so, build the kernel26-xen-dom0 package from the AUR.<br />
<br />
<pre>yaourt -S kernel26-xen-dom0</pre><br />
<br />
'''Please note: the kernel configuration has not yet been adapted to 2.6.32, so you have to run through the new 2.6.32 parameters and accept the defaults (or whatever else you may want) - just press Enter each time you are asked. There may also be issues during install, as the package overwrites the linux-firmware files in /usr/lib/firmware.'''<br />
<br />
That completes the building part. Now you can configure GRUB and boot into the kernel that has just been built.<br />
<br />
===Configuring GRUB===<br />
Grub must be configured so that the Xen hypervisor is booted followed by the dom0 kernel. Add the following entry to /boot/grub/menu.lst:<br />
<br />
<pre><br />
title Xen with Arch Linux<br />
root (hd0,X)<br />
kernel /xen.gz dom0_mem=524288<br />
module /vmlinuz26-xen-dom0 root=/dev/sdaY ro console=tty0 vga=gfx-1024x768x8<br />
module /kernel26-xen-dom0.img<br />
</pre><br />
<br />
where X and Y are the appropriate numbers for your disk configuration, and dom0_mem, console, and vga are optional, customizable parameters. A nice detail: you can use LVM volumes too, so instead of /dev/sdaY you can also specify /dev/mapper/somelvm. Also note that the vga parameter works a bit differently from the usual kernel configuration lines; this has been discussed on the xen-devel list: [http://lists.xensource.com/archives/html/xen-devel/2008-05/msg00576.html]<br />
<br />
The standard Arch kernel can be used to boot the domUs. For this to work, add 'xen-blkfront' to the MODULES array in /etc/mkinitcpio.conf:<br />
<br />
<pre><br />
MODULES="... xen-blkfront ..."<br />
</pre><br />
<br />
The next step is to reboot into the Xen kernel and then start xend:<br />
<br />
<pre><br />
# /etc/rc.d/xend start<br />
</pre><br />
<br />
Allocating a fixed amount of memory is recommended when using Xen. Also, if you are running I/O-intensive guests, it can be a good idea to dedicate (pin) a CPU core to dom0. See the section "Can I dedicate a cpu core (or cores) only for dom0?" on the external [http://wiki.xensource.com/xenwiki/XenCommonProblems XenCommonProblems] wiki page for more information.<br />
<br />
===Configuring GRUB2===<br />
<br />
This works just like with GRUB legacy, but you need to use the command 'multiboot' instead of 'kernel'. The entry becomes:<br />
<pre><br />
# (2) Arch Linux(XEN)<br />
menuentry "Arch Linux(XEN)" {<br />
set root=(hd0,X)<br />
multiboot /boot/xen.gz dom0_mem=2048M<br />
module /boot/vmlinuz26-xen-dom0 root=/dev/sdaY ro<br />
module /boot/kernel26-xen-dom0.img<br />
}<br />
</pre><br />
<br />
If you can boot into the dom0 kernel successfully, continue with the next step.<br />
<br />
===Add domU instances===<br />
<br />
The basic idea behind adding a domU is as follows: get the domU kernel, allocate space for the virtual hard disk, create a configuration file for the domU, and finally start the domU with xm.<br />
<br />
<pre><br />
$ mkfs.ext4 /dev/sdb1 ## format partition<br />
$ mkdir /tmp/install<br />
$ mount /dev/sdb1 /tmp/install<br />
$ mkdir -p /tmp/install/var/lib/pacman<br />
$ mkdir -p /tmp/install/var/cache/pacman/pkg<br />
$ pacman -Sy base -r /tmp/install --cachedir /tmp/install/var/cache/pacman/pkg<br />
$ mount -o bind /dev /tmp/install/dev<br />
$ mount -t proc none /tmp/install/proc<br />
$ mount -o bind /sys /tmp/install/sys<br />
$ chroot /tmp/install /bin/bash<br />
$ vi /etc/resolv.conf<br />
$ vi /etc/fstab<br />
/dev/xvda / ext4 defaults 0 1<br />
<br />
$ vi /etc/inittab<br />
c1:2345:respawn:/sbin/agetty -8 38400 hvc0 linux<br />
#c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux<br />
#c2:2345:respawn:/sbin/agetty -8 38400 tty2 linux<br />
#c3:2345:respawn:/sbin/agetty -8 38400 tty3 linux<br />
#c4:2345:respawn:/sbin/agetty -8 38400 tty4 linux<br />
#c5:2345:respawn:/sbin/agetty -8 38400 tty5 linux<br />
#c6:2345:respawn:/sbin/agetty -8 38400 tty6 linux<br />
<br />
<br />
$ exit ## exit chroot<br />
$ umount /tmp/install/dev<br />
$ umount /tmp/install/proc<br />
$ umount /tmp/install/sys<br />
$ umount /tmp/install<br />
</pre><br />
If you are not starting from a fresh install and want to rsync from an existing system:<br />
<pre><br />
$ mkfs.ext4 /dev/sdb1 ## format lv partition<br />
$ mkdir /tmp/install<br />
$ mount /dev/sdb1 /tmp/install<br />
$ mkdir /tmp/install/{proc,sys}<br />
$ chmod 555 /tmp/install/proc<br />
$ rsync -avzH --delete --exclude=proc/ --exclude=sys/ old_ooga:/ /tmp/install/<br />
<br />
$ vi /etc/xen/dom01 ## create config file<br />
# -*- mode: python; -*-<br />
kernel = "/boot/vmlinuz26"<br />
ramdisk = "/boot/kernel26.img"<br />
memory = 1024<br />
name = "dom01"<br />
vif = [ 'mac=00:16:3e:00:01:01' ]<br />
disk = [ 'phy:/dev/sdb1,xvda,w' ]<br />
dhcp="dhcp"<br />
hostname = "ooga"<br />
root = "/dev/xvda ro"<br />
<br />
$ xm create -c dom01<br />
</pre><br />
<br />
== Alternative method ==<br />
<br />
To set up Xen by hand, see [[Xen_Install]]. Be careful: that page is very old and is marked for deletion. It details installing a custom Xen kernel and the Xen userland tools manually, rather than taking advantage of the AUR packages described above. Perhaps someone can write an up-to-date howto on this page instead, so that the other article can be deleted.<br />
<br />
==Arch as Xen guest (PV mode)==<br />
<br />
To get paravirtualization you need to install:<br />
* [http://aur.archlinux.org/packages.php?ID=16087 kernel26-xen]<br />
* (optional) [http://aur.archlinux.org/packages.php?ID=28591 xe-guest-utilities]<br />
<br />
Change the mode to PV with the following commands (on the dom0):<br />
xe vm-param-set uuid=<vm uuid> HVM-boot-policy=""<br />
xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub<br />
<br />
Edit /boot/grub/menu.lst and add kernel26-xen:<br />
# (1) Arch Linux (domU)<br />
title Arch Linux (domU)<br />
root (hd0,0)<br />
kernel /boot/vmlinuz26-xen root=/dev/xvda1 ro console=hvc0<br />
initrd /boot/kernel26-xen.img<br />
<br />
=== xe-guest-utilities ===<br />
To use xe-guest-utilities, add a xenfs mount point to /etc/fstab:<br />
xenfs /proc/xen xenfs defaults 0 0<br />
and add xe-linux-distribution to the DAEMONS array in /etc/rc.conf.<br />
<br />
===Notes===<br />
* pygrub does not show a boot menu.<br />
* To avoid hwclock error messages, set HARDWARECLOCK="xen" in /etc/rc.conf (actually, any value other than "UTC" and "localtime" works here).<br />
* If you want to return to a hardware VM (HVM), set HVM-boot-policy="BIOS order".<br />
<br />
==Xen management tools==<br />
You can use the following tools:<br />
virt-manager<br />
<br />
==Packages==<br />
Since quite a few packages are available in the AUR and it can be hard to figure out what is needed, here is a small collection of the most interesting Xen packages (last updated: 23.5.2010).<br />
<br />
===Hypervisor (dom0)===<br />
* Kernel: [http://aur.archlinux.org/packages.php?ID=29023 kernel26-xen-dom0]<br />
* Userspace tools of Xen4: the xen4 package<br />
** Needed to compile xen4: [http://aur.archlinux.org/packages.php?ID=36346 libxenctrl]<br />
* Userspace tools of Xen4 (conflicting with xen4): [http://aur.archlinux.org/packages.php?ID=28984 xen-hv-tools]<br />
* Userspace tools of Xen3: [http://aur.archlinux.org/packages.php?ID=14640 xen]<br />
* Some (debian) scripts for disk creation, etc: [http://aur.archlinux.org/packages.php?ID=37421 xen-tools]<br />
<br />
===Guest (domU)===<br />
* Kernel: [http://aur.archlinux.org/packages.php?ID=16087 kernel26-xen]<br />
* Guest utils: [http://aur.archlinux.org/packages.php?ID=28591 xe-guest-utils]<br />
<br />
===Monitoring===<br />
?? (is something similar to xsconsole for XenServer available?)<br />
<br />
===Unknown===<br />
* [http://aur.archlinux.org/packages.php?ID=36373 libxen4]<br />
* [http://aur.archlinux.org/packages.php?ID=36457 libxenserver]<br />
<br />
===Unrelated packages===<br />
(e.g. for XenServer/Xen Cloud Platform)<br />
* XenServer frontend: [http://aur.archlinux.org/packages.php?ID=34398 openxencenter]<br />
* XenServer frontend svn version: [http://aur.archlinux.org/packages.php?ID=36074 openxencenter-svn]<br />
* Xen Cloud Platform frontend: [http://aur.archlinux.org/packages.php?ID=36458 xvp]<br />
<br />
==Resources==<br />
<br />
* Xen's homepage: [http://www.xen.org/]<br />
* The Xen Wiki: [http://wiki.xensource.com/xenwiki/]<br />
* Xen kernel patches: [http://code.google.com/p/gentoo-xen-kernel/]</div>Friesofthttps://wiki.archlinux.org/index.php?title=Xen&diff=107057Xen2010-05-23T17:33:53Z<p>Friesoft: Updated kernel version for dom0</p>Friesofthttps://wiki.archlinux.org/index.php?title=Xen&diff=107056Xen2010-05-23T17:32:09Z<p>Friesoft: Added some more packages</p>
<hr />
<div>[[Category:Emulators (English)]]<br />
[[Category:HOWTOs (English)]]<br />
This document explains how to set up Xen on Arch Linux.<br />
<br />
==What is Xen?==<br />
According to the Xen development team: "The Xen hypervisor, the powerful open source industry standard for virtualization, offers a powerful, efficient, and secure feature set for virtualization of x86, x86_64, IA64, PowerPC, and other CPU architectures. It supports a wide range of guest operating systems including Windows®, Linux®, Solaris®, and various versions of the BSD operating systems."<br />
<br />
The Xen hypervisor is a thin layer of software which emulates a computer architecture. It is started by the boot loader and allows several operating systems to run simultaneously on top of it. Once the Xen hypervisor is loaded, it starts the "dom0" (for "domain 0"), or privileged domain, which in our case runs a modified Linux kernel (other possible dom0 operating systems are NetBSD and OpenSolaris). The dom0 kernel in the AUR is currently based on Linux kernel 2.6.31.5; hardware must be supported by this kernel to run Xen. Once the dom0 has started, one or more "domU" (for "user") or unprivileged domains can be started and controlled from the dom0.<br />
<br />
==Setting up Xen==<br />
<br />
===Installing the necessary packages===<br />
Before building xen4, first build the required lib32-glibc-devel package from the AUR.<br />
<pre>yaourt -S lib32-glibc-devel</pre><br />
<br />
After installing it, create a temporary symlink:<br />
<pre>ln -s /opt/lib32/usr/include/gnu/stubs-32.h /usr/include/gnu</pre><br />
<br />
The next step is to install Xen from the AUR. You can install either Xen version 3 or 4 by choosing the xen or xen4 package, respectively. To prevent confusion, this wiki focuses on xen4. :)<br />
<pre><br />
yaourt -S libxenctrl ## this is currently missing from the dependencies of xen4 (23.5.2010)<br />
yaourt -S xen4<br />
</pre><br />
<br />
If you need only the xen-tools, version 4 can be installed by using the xen-hv-tools package.<br />
<br />
The next step is to build and install the dom0 kernel. To do so, build the kernel26-xen-dom0 package from the AUR.<br />
<br />
<pre>yaourt -S kernel26-xen-dom0</pre><br />
<br />
'''Please note: At the time of this writing (23.5.2010) the current version uses a patched kernel version 2.6.31, which won't compile using gcc 4.5.''' <br />
<br />
The problem can be worked around by temporarily downgrading gcc and gcc-libs to version 4.4.3. You may also need to build the gmp4 package.<br />
You can possibly find gcc-4.4.3 and gcc-libs-4.4.3 in the Arch Rollback Machine (see [http://wiki.archlinux.org/index.php/Downgrading_Packages#Finding_Your_Older_Version Downgrading Packages]).<br />
<pre><br />
pacman -Ud gcc-*<br />
yaourt -S gmp4 # not sure if this is really needed<br />
</pre><br />
Note: this didn't work for me (friesoft, 23.5.2010)<br />
<br />
Alternative method (compiles fine, but installation fails because of conflicting firmware files):<br />
edit the PKGBUILD as described in the comment by friesoft (Sun, 23 May 2010 14:21:11): http://aur.archlinux.org/packages.php?ID=29023<br />
<br />
That completes the build. Now you can configure GRUB and boot into the newly built kernel.<br />
<br />
===Configuring GRUB===<br />
GRUB must be configured so that the Xen hypervisor is booted first, followed by the dom0 kernel. Add the following entry to /boot/grub/menu.lst:<br />
<br />
<pre><br />
title Xen with Arch Linux<br />
root (hd0,X)<br />
kernel /xen.gz dom0_mem=524288<br />
module /vmlinuz26-xen-dom0 root=/dev/sdaY ro console=tty0 vga=gfx-1024x768x8<br />
module /kernel26-xen-dom0.img<br />
</pre><br />
<br />
where X and Y are the appropriate numbers for your disk configuration; dom0_mem, console, and vga are optional, customizable parameters. You can also use LVM volumes: instead of /dev/sdaY, specify something like /dev/mapper/somelvm. Also note that the vga parameter works a bit differently from the usual kernel command line; this has been discussed on the xen-devel list: [http://lists.xensource.com/archives/html/xen-devel/2008-05/msg00576.html]<br />
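For example, the module line for an LVM-backed root could look like this (the volume name somelvm is only an illustration; substitute your own):<br />
<pre><br />
module /vmlinuz26-xen-dom0 root=/dev/mapper/somelvm ro console=tty0<br />
</pre><br />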
<br />
The standard Arch kernel can be used to boot the domUs. For this to work, add 'xen-blkfront' to the MODULES array in /etc/mkinitcpio.conf:<br />
<br />
<pre><br />
MODULES="... xen-blkfront ..."<br />
</pre><br />
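<br />
After changing /etc/mkinitcpio.conf, the initramfs must be regenerated for the new module to be included (this assumes the stock kernel26 preset; adjust the preset name to match your kernel):<br />
<pre><br />
# mkinitcpio -p kernel26<br />
</pre><br />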
<br />
Next, reboot into the Xen kernel, then start xend:<br />
<br />
<pre><br />
# /etc/rc.d/xend start<br />
</pre><br />
<br />
Allocating a fixed amount of memory is recommended when using Xen. If you are running I/O-intensive guests, it may also be a good idea to dedicate (pin) a CPU core for dom0 use only. See the section "Can I dedicate a cpu core (or cores) only for dom0?" on the external [http://wiki.xensource.com/xenwiki/XenCommonProblems XenCommonProblems] wiki page for more information. <br />
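<br />
As a sketch, limiting dom0 to one pinned core can be done from the Xen boot line in menu.lst (these hypervisor options are described on the Xen wiki; verify them against your Xen version):<br />
<pre><br />
kernel /xen.gz dom0_mem=524288 dom0_max_vcpus=1 dom0_vcpus_pin<br />
</pre><br />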
<br />
===Configuring GRUB2===<br />
<br />
This works much like GRUB, but uses the 'multiboot' command instead of 'kernel'. The entry becomes:<br />
<pre><br />
# (2) Arch Linux(XEN)<br />
menuentry "Arch Linux(XEN)" {<br />
set root=(hd0,X)<br />
multiboot /boot/xen.gz dom0_mem=2048M<br />
module /boot/vmlinuz26-xen-dom0 root=/dev/sdaY ro<br />
module /boot/kernel26-xen-dom0.img<br />
}<br />
</pre><br />
<br />
If you successfully boot into the dom0 kernel, you can continue.<br />
<br />
===Add domU instances===<br />
<br />
The basic idea behind adding a domU is as follows: get the domU kernel, allocate space for the virtual hard disk, create a configuration file for the domU, and finally start the domU with xm.<br />
<br />
<pre><br />
$ mkfs.ext4 /dev/sdb1 ## format partition<br />
$ mkdir /tmp/install<br />
$ mount /dev/sdb1 /tmp/install<br />
$ mkdir /tmp/install/var/lib/pacman<br />
$ mkdir /tmp/install/var/cache/pacman/pkg<br />
$ pacman -Sy base -r /tmp/install --cachedir /tmp/install/var/cache/pacman/pkg<br />
$ mount -o bind /dev /tmp/install/dev<br />
$ mount -t proc none /tmp/install/proc<br />
$ mount -o bind /sys /tmp/install/sys<br />
$ chroot /tmp/install /bin/bash<br />
$ vi /etc/resolv.conf<br />
$ vi /etc/fstab<br />
/dev/xvda / ext4 defaults 0 1<br />
<br />
$ vi /etc/inittab<br />
c1:2345:respawn:/sbin/agetty -8 38400 hvc0 linux<br />
#c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux<br />
#c2:2345:respawn:/sbin/agetty -8 38400 tty2 linux<br />
#c3:2345:respawn:/sbin/agetty -8 38400 tty3 linux<br />
#c4:2345:respawn:/sbin/agetty -8 38400 tty4 linux<br />
#c5:2345:respawn:/sbin/agetty -8 38400 tty5 linux<br />
#c6:2345:respawn:/sbin/agetty -8 38400 tty6 linux<br />
<br />
<br />
$ exit ## exit chroot<br />
$ umount /tmp/install/dev<br />
$ umount /tmp/install/proc<br />
$ umount /tmp/install/sys<br />
$ umount /tmp/install<br />
</pre><br />
If you are not starting from a fresh install and want to rsync from an existing system:<br />
<pre><br />
$ mkfs.ext4 /dev/sdb1 ## format lv partition<br />
$ mkdir /tmp/install<br />
$ mount /dev/sdb1 /tmp/install<br />
$ mkdir /tmp/install/{proc,sys}<br />
$ chmod 555 /tmp/install/proc<br />
$ rsync -avzH --delete --exclude=proc/ --exclude=sys/ old_ooga:/ /tmp/install/<br />
<br />
$ vi /etc/xen/dom01 ## create config file<br />
# -*- mode: python; -*-<br />
kernel = "/boot/vmlinuz26"<br />
ramdisk = "/boot/kernel26.img"<br />
memory = 1024<br />
name = "dom01"<br />
vif = [ 'mac=00:16:3e:00:01:01' ]<br />
disk = [ 'phy:/dev/sdb1,xvda,w' ]<br />
dhcp="dhcp"<br />
hostname = "ooga"<br />
root = "/dev/xvda ro"<br />
<br />
$ xm create -c dom01<br />
</pre><br />
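<br />
Once the domU is running, it can be managed from the dom0 with xm, for example:<br />
<pre><br />
xm list            # list running domains and their state<br />
xm console dom01   # attach to the domU console (detach with Ctrl-])<br />
xm shutdown dom01  # cleanly shut the domU down<br />
xm destroy dom01   # hard power-off, as a last resort<br />
</pre><br />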
<br />
== Alternative method ==<br />
<br />
To set up Xen by hand, see [[Xen_Install]]. That page is marked for deletion because it is very old, so be careful: it details installing a custom Xen kernel and the Xen userland tools by hand, rather than taking advantage of the AUR packages described above. Perhaps someone can write an up-to-date howto on this page instead, so that the other article can be deleted.<br />
<br />
==Arch as Xen guest (PV mode)==<br />
<br />
To get paravirtualization you need to install:<br />
* [http://aur.archlinux.org/packages.php?ID=16087 kernel26-xen]<br />
* (optional) [http://aur.archlinux.org/packages.php?ID=28591 xe-guest-utilities]<br />
<br />
Change the mode to PV with the following commands (on the dom0):<br />
xe vm-param-set uuid=<vm uuid> HVM-boot-policy=""<br />
xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub<br />
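<br />
The VM uuid can be looked up on the dom0 beforehand with, for example:<br />
 xe vm-list<br />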
<br />
Edit /boot/grub/menu.lst and add kernel26-xen:<br />
# (1) Arch Linux (domU)<br />
title Arch Linux (domU)<br />
root (hd0,0)<br />
kernel /boot/vmlinuz26-xen root=/dev/xvda1 ro console=hvc0<br />
initrd /boot/kernel26-xen.img<br />
<br />
=== xe-guest-utilities ===<br />
To use xe-guest-utilities, add a xenfs mount point to /etc/fstab:<br />
xenfs /proc/xen xenfs defaults 0 0<br />
and add xe-linux-distribution to the DAEMONS array in /etc/rc.conf.<br />
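For example, the DAEMONS array could then look like this (the other daemons shown are only an illustration of a typical setup):<br />
 DAEMONS=(syslog-ng network netfs crond xe-linux-distribution)<br />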
<br />
===Notes===<br />
* pygrub does not show a boot menu.<br />
* To avoid hwclock error messages, set HARDWARECLOCK="xen" in /etc/rc.conf (actually you can use any value here except "UTC" and "localtime")<br />
* If you want to return to hardware VM, set HVM-boot-policy="BIOS order"<br />
<br />
==Xen management tools==<br />
You can use the following tools:<br />
* virt-manager<br />
<br />
==Packages==<br />
As there are quite a few Xen packages available in the AUR and it can be hard to figure out which ones are needed, here is a small collection of the most interesting ones (last updated: 23.5.2010).<br />
<br />
===Hypervisor (dom0)===<br />
* Kernel: [http://aur.archlinux.org/packages.php?ID=29023 kernel26-xen-dom0]<br />
* Userspace tools of Xen4: [xen4]<br />
* Needed to compile xen4: [http://aur.archlinux.org/packages.php?ID=36346 libxenctrl]<br />
* Userspace tools of Xen4 (conflicts with xen4): [http://aur.archlinux.org/packages.php?ID=28984 xen-hv-tools]<br />
* Userspace tools of Xen3: [http://aur.archlinux.org/packages.php?ID=14640 xen]<br />
* Some (debian) scripts for disk creation, etc: [http://aur.archlinux.org/packages.php?ID=37421 xen-tools]<br />
<br />
===Guest (domU)===<br />
* Kernel: [http://aur.archlinux.org/packages.php?ID=16087 kernel26-xen]<br />
* Guest utils: [http://aur.archlinux.org/packages.php?ID=28591 xe-guest-utils]<br />
<br />
===Monitoring===<br />
Is something similar to xsconsole for XenServer available?<br />
<br />
===Unknown===<br />
* [http://aur.archlinux.org/packages.php?ID=36373 libxen4]<br />
* [http://aur.archlinux.org/packages.php?ID=36457 libxenserver]<br />
<br />
===Unrelated packages===<br />
(e.g. for XenServer/Xen Cloud Platform)<br />
* XenServer frontend: [http://aur.archlinux.org/packages.php?ID=34398 openxencenter]<br />
* XenServer frontend svn version: [http://aur.archlinux.org/packages.php?ID=36074 openxencenter-svn]<br />
* Xen Cloud Platform frontend: [http://aur.archlinux.org/packages.php?ID=36458 xvp]<br />
<br />
==Resources==<br />
<br />
* Xen's homepage: [http://www.xen.org/]<br />
* The Xen Wiki: [http://wiki.xensource.com/xenwiki/]<br />
* Xen kernel patches: [http://code.google.com/p/gentoo-xen-kernel/]</div>Friesofthttps://wiki.archlinux.org/index.php?title=Xen&diff=107055Xen2010-05-23T17:21:07Z<p>Friesoft: Added packages</p>
<hr />
<div></div>Friesofthttps://wiki.archlinux.org/index.php?title=Talk:Xen&diff=107054Talk:Xen2010-05-23T17:08:41Z<p>Friesoft: </p>
<hr />
<div>'''<big>For maximum benefit I suggest using the discussion page for page editing collaboration.</big>'''<br />
<br />
----<br />
<br />
'''Firstly''', some things on the current page aren't really clear, imho:<br />
<br />
* "The standard arch kernel can be use to boot the domUs."<br/><br />
So, domUs can use the standard kernel?<br />
<br />
* "In order for this to work one must add 'xen-blkfront' to the modules array in /etc/mkinitcpio.conf"<br/><br />
On the domUs?<br />
<br />
* "The basic idea behind adding a domU is as follows. We must get the domU kernels (...)"<br/><br />
BUT one can also use a standard archlinux kernel? Maybe there is an inconsistency.<br />
<br />
'''Secondly''':<br />
* Maybe someone more experienced can explain how to use encrypted (LUKS) filesystems with xen;<br />
* Maybe someone more experienced can add a section on how to use other OSs like Microsoft&reg; Windows&trade;.<br />
* I opt to delete the alternative installation method because experienced users can figure this stuff out for themselves and it might be horribly outdated;<br />
<br />
-- [[User:Voidzero|Voidzero]] 16:54, 7 May 2010 (EDT)<br />
<br />
'''Thirdly''':<br />
This is incorrect. It was previously stated to be a frontend for Xen, but it is not; it is a frontend for XenServer by Citrix:<br />
* [http://aur.archlinux.org/packages.php?ID=34398 openxencenter] or [http://aur.archlinux.org/packages.php?ID=36074 openxencenter-svn] is GUI similar to citrix xen console.<br />
<br />
This is also not for Xen directly; it is for the [http://www.xen.org/products/cloudxen.html Xen Cloud Platform], which is something different, afaik:<br />
* [http://aur.archlinux.org/packages.php?ID=36458 xvp] is web interface and vnc proxy.<br />
After installing xvp, you need to generate /etc/xvp.conf with xvpdiscover tool and adjust your web server for using /srv/http/xvpweb/.<br />
<br />
-- [[User:Friesoft|Friesoft]] 19:01, 23 May 2010 (EDT)</div>Friesofthttps://wiki.archlinux.org/index.php?title=Xen&diff=107053Xen2010-05-23T17:07:27Z<p>Friesoft: removed xvp - it's only for xen cloud platform (xcp)</p>
<hr />
<div></div>Friesofthttps://wiki.archlinux.org/index.php?title=Talk:Xen&diff=107051Talk:Xen2010-05-23T17:01:19Z<p>Friesoft: </p>
<hr />
<div>'''<big>For maximum benefit I suggest using the discussion page for page editing collaboration.</big>'''<br />
<br />
----<br />
<br />
'''Firstly''', some things on the current page aren't really clear, imho:<br />
<br />
* "The standard arch kernel can be use to boot the domUs."<br/><br />
So, domUs can use the standard kernel?<br />
<br />
* "In order for this to work one must add 'xen-blkfront' to the modules array in /etc/mkinitcpio.conf"<br/><br />
On the domUs?<br />
<br />
* "The basic idea behind adding a domU is as follows. We must get the domU kernels (...)"<br/><br />
BUT one can also use a standard archlinux kernel? Maybe there is an inconsistency.<br />
<br />
'''Secondly''':<br />
* Maybe someone more experienced can explain how to use encrypted (LUKS) filesystems with xen;<br />
* Maybe someone more experienced can add a section on how to use other OSes like Microsoft&reg; Windows&trade;.<br />
* I opt to delete the alternative installation method because experienced users can figure this stuff out for themselves and it might be horribly outdated;<br />
<br />
-- [[User:Voidzero|Voidzero]] 16:54, 7 May 2010 (EDT)<br />
<br />
'''Thirdly''':<br />
This is incorrect - it was stated before to be a frontend for Xen - it is NOT - it is a frontend for XenServer by Citrix:<br />
* [http://aur.archlinux.org/packages.php?ID=34398 openxencenter] or [http://aur.archlinux.org/packages.php?ID=36074 openxencenter-svn] is a GUI similar to the Citrix XenServer console.<br />
<br />
-- [[User:Friesoft|Friesoft]] 19:01, 23 May 2010 (EDT)</div>Friesofthttps://wiki.archlinux.org/index.php?title=Xen&diff=107049Xen2010-05-23T16:59:21Z<p>Friesoft: Removed openxencenter as it's only for citrix xenserver</p>
<hr />
<div>[[Category:Emulators (English)]]<br />
[[Category:HOWTOs (English)]]<br />
This document explains how to set up Xen on Arch Linux.<br />
<br />
==What is Xen?==<br />
According to the Xen development team: "The Xen hypervisor, the powerful open source industry standard for virtualization, offers a powerful, efficient, and secure feature set for virtualization of x86, x86_64, IA64, PowerPC, and other CPU architectures. It supports a wide range of guest operating systems including Windows®, Linux®, Solaris®, and various versions of the BSD operating systems."<br />
<br />
The Xen hypervisor is a thin layer of software which emulates a computer architecture. It is started by the boot loader and allows several operating systems to run simultaneously on top of it. Once the Xen hypervisor is loaded, it starts the "dom0" (for "domain 0"), or privileged domain, which in our case runs a modified Linux kernel (other possible dom0 operating systems are NetBSD and OpenSolaris). The dom0 kernel in the AUR is currently based on Linux kernel 2.6.31.5; hardware must be supported by this kernel to run Xen. Once the dom0 has started, one or more "domU" (for "user") or unprivileged domains can be started and controlled from the dom0.<br />
<br />
==Setting up Xen==<br />
<br />
===Installing the necessary packages===<br />
Before building xen4, first build the required package lib32-glibc-devel from the AUR.<br />
<pre>yaourt -S lib32-glibc-devel</pre><br />
<br />
After installing it, create a temporary symlink:<br />
<pre>ln -s /opt/lib32/usr/include/gnu/stubs-32.h /usr/include/gnu</pre><br />
<br />
The next step is to install Xen from the AUR. You can install either version 3 or 4 by choosing the xen or xen4 package respectively. This article focuses on xen4, to prevent confusion.<br />
<pre><br />
yaourt -S libxenctrl ## this is currently missing from the dependencies of xen4 (23.5.2010)<br />
yaourt -S xen4<br />
</pre><br />
<br />
If you need only the xen-tools, version 4 can be installed by using the xen-hv-tools package.<br />
<br />
The next step is to build and install the dom0 kernel. To do so, build the kernel26-xen-dom0 package from the AUR.<br />
<br />
<pre>yaourt -S kernel26-xen-dom0</pre><br />
<br />
'''Please note: At the time of this writing (23.5.2010) the current version uses a patched kernel version 2.6.31, which won't compile using gcc 4.5.''' <br />
<br />
The problem can be worked around by temporarily downgrading gcc and gcc-libs to version 4.4.3. You may also need to build the gmp4 package.<br />
You can find gcc-4.4.3 and gcc-libs-4.4.3 in the Arch Rollback Machine (see [http://wiki.archlinux.org/index.php/Downgrading_Packages#Finding_Your_Older_Version Downgrading Packages]).<br />
<pre><br />
pacman -Ud gcc-*<br />
yaourt -S gmp4 # not sure if this is really needed<br />
</pre><br />
Note: this didn't work for me (friesoft, 23.5.2010)<br />
<br />
Alternative method (compiles fine, but installation fails because of conflicting firmware files): edit the PKGBUILD as mentioned in my comment (friesoft, Sun, 23 May 2010 14:21:11): http://aur.archlinux.org/packages.php?ID=29023<br />
<br />
That completes the build. Now you can configure GRUB and boot into the newly built kernel.<br />
<br />
===Configuring GRUB===<br />
GRUB must be configured so that it boots the Xen hypervisor, which in turn boots the dom0 kernel. Add the following entry to /boot/grub/menu.lst:<br />
<br />
<pre><br />
title Xen with Arch Linux<br />
root (hd0,X)<br />
kernel /xen.gz dom0_mem=524288<br />
module /vmlinuz26-xen-dom0 root=/dev/sdaY ro console=tty0 vga=gfx-1024x768x8<br />
module /kernel26-xen-dom0.img<br />
</pre><br />
<br />
where X and Y are the appropriate numbers for your disk configuration, and dom0_mem, console, and vga are optional, customizable parameters. You can also use LVM volumes: instead of /dev/sdaY, specify e.g. /dev/mapper/somelvm. Note that the vga parameter works a bit differently from usual kernel configuration lines; it has been discussed on the xen-devel list: [http://lists.xensource.com/archives/html/xen-devel/2008-05/msg00576.html]<br />
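For example, a menu.lst entry using an LVM root could look like this (the volume group and logical volume names, vg0 and dom0root, are hypothetical placeholders):<br />

```
title Xen with Arch Linux (LVM root)
root (hd0,X)
kernel /xen.gz dom0_mem=524288
module /vmlinuz26-xen-dom0 root=/dev/mapper/vg0-dom0root ro console=tty0
module /kernel26-xen-dom0.img
```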
<br />
The standard arch kernel can be use to boot the domUs. In order for this to work one must add 'xen-blkfront' to the modules array in /etc/mkinitcpio.conf:<br />
<br />
<pre><br />
MODULES="... xen-blkfront ..."<br />
</pre><br />
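If you prefer to script this change, the module can be inserted non-interactively. This sketch operates on a local sample file by default (set MKCONF=/etc/mkinitcpio.conf, as root, to edit the real configuration), and it assumes you rebuild the initramfs afterwards, for example with {{Codeline|mkinitcpio -p kernel26}}:<br />

```shell
# Insert xen-blkfront at the front of the MODULES line, unless it is
# already listed. MKCONF defaults to a local sample file; set
# MKCONF=/etc/mkinitcpio.conf (as root) to edit the real configuration.
MKCONF="${MKCONF:-mkinitcpio.sample}"
[ -f "$MKCONF" ] || echo 'MODULES="pata_acpi"' > "$MKCONF"

if ! grep -q 'xen-blkfront' "$MKCONF"; then
    sed -i 's/^MODULES="/MODULES="xen-blkfront /' "$MKCONF"
fi
```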
<br />
The next step is to reboot into the Xen kernel and then start xend:<br />
<br />
<pre><br />
# /etc/rc.d/xend start<br />
</pre><br />
<br />
Allocating a fixed amount of memory to dom0 is recommended when using Xen. Also, if you are running I/O-intensive guests, it can be a good idea to dedicate (pin) a CPU core to dom0. See the external [http://wiki.xensource.com/xenwiki/XenCommonProblems XenCommonProblems] wiki page, section "Can I dedicate a cpu core (or cores) only for dom0?", for more information.<br />
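Both of these are hypervisor boot parameters, so they go on the kernel /xen.gz line in menu.lst. For example, the entry above could become (the dom0_max_vcpus and dom0_vcpus_pin options are from the Xen hypervisor documentation; the values are illustrative):<br />

```
kernel /xen.gz dom0_mem=524288 dom0_max_vcpus=1 dom0_vcpus_pin
```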
<br />
===Configuring GRUB2===<br />
<br />
This works just like with GRUB, except that the 'multiboot' command is used instead of 'kernel'. So the entry becomes:<br />
<pre><br />
# (2) Arch Linux(XEN)<br />
menuentry "Arch Linux(XEN)" {<br />
set root=(hd0,X)<br />
multiboot /boot/xen.gz dom0_mem=2048M<br />
module /boot/vmlinuz26-xen-dom0 root=/dev/sdaY ro<br />
module /boot/kernel26-xen-dom0.img<br />
}<br />
</pre><br />
<br />
If the dom0 kernel boots successfully, you can continue.<br />
<br />
===Add domU instances===<br />
<br />
The basic idea behind adding a domU is as follows. We must get the domU kernels, allocate space for the virtual hard disk, create a configuration file for the domU, and finally start the domU with xm.<br />
<br />
<pre><br />
$ mkfs.ext4 /dev/sdb1 ## format partition<br />
$ mkdir /tmp/install<br />
$ mount /dev/sdb1 /tmp/install<br />
$ mkdir -p /tmp/install/var/lib/pacman<br />
$ mkdir -p /tmp/install/var/cache/pacman/pkg<br />
$ pacman -Sy base -r /tmp/install --cachedir /tmp/install/var/cache/pacman/pkg<br />
$ mount -o bind /dev /tmp/install/dev<br />
$ mount -t proc none /tmp/install/proc<br />
$ mount -o bind /sys /tmp/install/sys<br />
$ chroot /tmp/install /bin/bash<br />
$ vi /etc/resolv.conf<br />
$ vi /etc/fstab<br />
/dev/xvda / ext4 defaults 0 1<br />
<br />
$ vi /etc/inittab<br />
c1:2345:respawn:/sbin/agetty -8 38400 hvc0 linux<br />
#c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux<br />
#c2:2345:respawn:/sbin/agetty -8 38400 tty2 linux<br />
#c3:2345:respawn:/sbin/agetty -8 38400 tty3 linux<br />
#c4:2345:respawn:/sbin/agetty -8 38400 tty4 linux<br />
#c5:2345:respawn:/sbin/agetty -8 38400 tty5 linux<br />
#c6:2345:respawn:/sbin/agetty -8 38400 tty6 linux<br />
<br />
<br />
$ exit ## exit chroot<br />
$ umount /tmp/install/dev<br />
$ umount /tmp/install/proc<br />
$ umount /tmp/install/sys<br />
$ umount /tmp/install<br />
</pre><br />
If you are not starting from a fresh install and want to rsync the filesystem from an existing system instead:<br />
<pre><br />
$ mkfs.ext4 /dev/sdb1 ## format lv partition<br />
$ mkdir /tmp/install<br />
$ mount /dev/sdb1 /tmp/install<br />
$ mkdir /tmp/install/{proc,sys}<br />
$ chmod 555 /tmp/install/proc<br />
$ rsync -avzH --delete --exclude=proc/ --exclude=sys/ old_ooga:/ /tmp/install/<br />
<br />
$ vi /etc/xen/dom01 ## create config file<br />
# -*- mode: python; -*-<br />
kernel = "/boot/vmlinuz26"<br />
ramdisk = "/boot/kernel26.img"<br />
memory = 1024<br />
name = "dom01"<br />
vif = [ 'mac=00:16:3e:00:01:01' ]<br />
disk = [ 'phy:/dev/sdb1,xvda,w' ]<br />
dhcp="dhcp"<br />
hostname = "ooga"<br />
root = "/dev/xvda ro"<br />
<br />
$ xm create -c dom01<br />
</pre><br />
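For reference, here is a variant of the domU configuration using a static network setup instead of DHCP. All addresses and names below are hypothetical placeholders; the remaining keys mirror the example above:<br />

```
# -*- mode: python; -*-
kernel   = "/boot/vmlinuz26"
ramdisk  = "/boot/kernel26.img"
memory   = 1024
name     = "dom02"
vif      = [ 'mac=00:16:3e:00:01:02' ]
disk     = [ 'phy:/dev/sdb2,xvda,w' ]
ip       = "192.168.1.52"
netmask  = "255.255.255.0"
gateway  = "192.168.1.1"
hostname = "booga"
root     = "/dev/xvda ro"
```

Start it the same way: xm create -c dom02<br />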
<br />
== Alternative method ==<br />
<br />
To set up Xen by hand, see [[Xen_Install]]. That page is marked for deletion because it is very old, so treat it with care. It details installing a custom Xen kernel and the Xen userland tools by hand, rather than taking advantage of the AUR packages described above. Perhaps someone can write an up-to-date howto on this page instead, so that the other article can be deleted.<br />
<br />
==Arch as Xen guest (PV mode)==<br />
<br />
To get paravirtualization you need to install:<br />
* [http://aur.archlinux.org/packages.php?ID=16087 kernel26-xen]<br />
* (optional) [http://aur.archlinux.org/packages.php?ID=28591 xe-guest-utilities]<br />
<br />
Switch the VM to PV mode with the following commands (run on the dom0):<br />
xe vm-param-set uuid=<vm uuid> HVM-boot-policy=""<br />
xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub<br />
<br />
Edit /boot/grub/menu.lst and add kernel26-xen:<br />
# (1) Arch Linux (domU)<br />
title Arch Linux (domU)<br />
root (hd0,0)<br />
kernel /boot/vmlinuz26-xen root=/dev/xvda1 ro console=hvc0<br />
initrd /boot/kernel26-xen.img<br />
<br />
=== xe-guest-utilities ===<br />
To use xe-guest-utilities, add a xenfs mount point to /etc/fstab:<br />
xenfs /proc/xen xenfs defaults 0 0<br />
and add xe-linux-distribution to the DAEMONS array in /etc/rc.conf.<br />
<br />
===Notes===<br />
* pygrub does not show a boot menu.<br />
* To avoid hwclock error messages, set HARDWARECLOCK="xen" in /etc/rc.conf (any value other than "UTC" and "localtime" will work).<br />
* If you want to return to a hardware VM (HVM), set HVM-boot-policy="BIOS order".<br />
<br />
==Xen management tools==<br />
You can use the following tools:<br />
* [http://aur.archlinux.org/packages.php?ID=36458 xvp] is a web interface and VNC proxy.<br />
After installing xvp, generate /etc/xvp.conf with the xvpdiscover tool and configure your web server to serve /srv/http/xvpweb/.<br />
<br />
==Resources==<br />
<br />
* Xen's homepage: [http://www.xen.org/]<br />
* The Xen Wiki: [http://wiki.xensource.com/xenwiki/]</div>Friesofthttps://wiki.archlinux.org/index.php?title=Xen&diff=107041Xen2010-05-23T14:27:09Z<p>Friesoft: Added alternative way of compiling a package</p>
<hr />
<div>[[Category:Emulators (English)]]<br />
[[Category:HOWTOs (English)]]<br />
This document explains how to setup Xen for Arch Linux.<br />
<br />
==What is Xen?==<br />
According to the Xen development team: "The Xen hypervisor, the powerful open source industry standard for virtualization, offers a powerful, efficient, and secure feature set for virtualization of x86, x86_64, IA64, PowerPC, and other CPU architectures. It supports a wide range of guest operating systems including Windows®, Linux®, Solaris®, and various versions of the BSD operating systems."<br />
<br />
The Xen hypervisor is a thin layer of software which emulates a computer architecture. It is started by the boot loader and allows several operating systems to run simultaneously on top of it. Once the Xen hypervisor is loaded, is starts the "dom0" (for "domain 0"), or privileged domain, which in our case runs a modified Linux kernel (other possible dom0 operating systems are NetBSD and OpenSolaris). The dom0 kernel in the AUR is currently based on Linux kernel 2.6.31.5; hardware must be supported by this kernel to run Xen. Once the dom0 has started, one or more "domU" (for "user") or unprivileged domains can be started and controlled from the dom0.<br />
<br />
==Setting up Xen==<br />
<br />
===Installing the necessary packages===<br />
Before building xen4, you need to build the required package lib32-glibc-devel from the AUR first. <br />
<pre>yaourt -S lib32-glibc-devel</pre><br />
<br />
After installing it, create a temporary symlink:<br />
<pre>ln -s /opt/lib32/usr/include/gnu/stubs-32.h /usr/include/gnu</pre><br />
<br />
The next step is to install xen from the AUR. You can either install xen version 3 or 4 by choosing either the xen or xen4 package. This wiki focuses on xen4, to prevent confusion :)<br />
<pre><br />
yaourt -S libxenctrl ## this is currently missing from the dependencies of xen4 (23.5.2010)<br />
yaourt -S xen4<br />
</pre><br />
<br />
If you need only the xen-tools, version 4 can be installed by using the xen-hv-tools package.<br />
<br />
The next step is to build and install the dom0 kernel. To do so, build the kernel26-xen-dom0 package from the AUR.<br />
<br />
<pre>yaourt -S kernel26-xen-dom0</pre><br />
<br />
'''Please note: At the time of this writing (23.5.2010) the current version uses a patched kernel version 2.6.31, which won't compile using gcc 4.5.''' <br />
<br />
The problem can be worked around by temporarily downgrading gcc and gcc-libs to version 4.4.3. It may also be needed to build the gmp4 package.<br />
You can possibly find gcc-4.4.3 and gcc-libs-4.4.3 on ARM (see [http://wiki.archlinux.org/index.php/Downgrading_Packages#Finding_Your_Older_Version Downgrading Packages])<br />
<pre><br />
pacman -Ud gcc-*<br />
yaourt -S gmp4 # not sure if this is really needed<br />
</pre><br />
Note: this didn't work for me (friesoft, 23.5.2010)<br />
<br />
Alternative method (compiles fine, problems with install because of conflicting firmware files)<br />
edit the pkgbuild like mentioned in my comment (friesoft, Sun, 23 May 2010 14:21:11): http://aur.archlinux.org/packages.php?ID=29023<br />
<br />
..and there you go: the building part has been finished. Now you can configure Grub and boot into the kernel that's just been built.<br />
<br />
===Configuring GRUB===<br />
Grub must be configured so that the Xen hypervisor is booted followed by the dom0 kernel. Add the following entry to /boot/grub/menu.lst:<br />
<br />
<pre><br />
title Xen with Arch Linux<br />
root (hd0,X)<br />
kernel /xen.gz dom0_mem=524288<br />
module /vmlinuz26-xen-dom0 root=/dev/sdaY ro console=tty0 vga=gfx-1024x768x8<br />
module /kernel26-xen-dom0.img<br />
</pre><br />
<br />
where X and Y are the appropriate numbers for your disk configuration; and dom0_mem, console, and vga are optional, customizable parameters. Nice little detail: you can use LVM volumes too. So instead of /dev/sdaY you can also fill in /dev/mapper/somelvm. Also, notice that the vga-parameter works a bit differently from usual kernel configuration lines. It has been discussed on the Xen-devel list: [http://lists.xensource.com/archives/html/xen-devel/2008-05/msg00576.html]<br />
<br />
The standard arch kernel can be use to boot the domUs. In order for this to work one must add 'xen-blkfront' to the modules array in /etc/mkinitcpio.conf:<br />
<br />
<pre><br />
MODULES="... xen-blkfront ..."<br />
</pre><br />
<br />
So, next step is to reboot into the xen kernel. <br />
<br />
Next step: start xend:<br />
<br />
<pre><br />
# /etc/rc.d/xend start<br />
</pre><br />
<br />
Allocating a fixed amount of memory is recommended when using xen. Also, if you're running IO intensive guests it might be a good idea to dedicate (pin) a CPU core only for dom0 use. Please see the external [http://wiki.xensource.com/xenwiki/XenCommonProblems XenCommonProblems] wiki page section "Can I dedicate a cpu core (or cores) only for dom0?" for more information. <br />
<br />
===Configuring GRUB2===<br />
<br />
This works just like with Grub, but here you need to use the command 'multiboot' instead of using 'kernel'. So it becomes:<br />
<pre><br />
# (2) Arch Linux(XEN)<br />
menuentry "Arch Linux(XEN)" {<br />
set root=(hd0,X)<br />
multiboot /boot/xen.gz dom0_mem=2048M<br />
module /boot/vmlinuz26-xen-dom0 root=/dev/sdaY ro<br />
module /boot/kernel26-xen-dom0.gz<br />
</pre><br />
<br />
If you had success when booting up into the dom0 kernel, we can continue.<br />
<br />
===Add domU instances===<br />
<br />
The basic idea behind adding a domU is as follows. We must get the domU kernels, allocate space for the virtual hard disk, create a configuration file for the domU, and finally start the domU with xm.<br />
<br />
<pre><br />
$ mkfs.ext4 /dev/sdb1 ## format partition<br />
$ mkdir /tmp/install<br />
$ mount /dev/sdb1 /tmp/install<br />
$ mkdir /tmp/install/var/lib/pacman<br />
$ mkdir /tmp/install/var/cache/pacman/pkg<br />
$ pacman -Sy base -r /tmp/install --cache-dir /tmp/install/var/cache/pacman/pkg<br />
$ mount -o bind /dev /tmp/install/dev<br />
$ mount -t proc none /tmp/install/proc<br />
$ mount -o bind /sys /tmp/install/sys<br />
$ chroot /tmp/install /bin/bash<br />
$ vi /etc/resolv.conf<br />
$ vi /etc/fstab<br />
/dev/xvda / ext4 defaults 0 1<br />
<br />
$ vi /etc/inittab<br />
c1:2345:respawn:/sbin/agetty -8 38400 hvc0 linux<br />
#c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux<br />
#c2:2345:respawn:/sbin/agetty -8 38400 tty2 linux<br />
#c3:2345:respawn:/sbin/agetty -8 38400 tty3 linux<br />
#c4:2345:respawn:/sbin/agetty -8 38400 tty4 linux<br />
#c5:2345:respawn:/sbin/agetty -8 38400 tty5 linux<br />
#c6:2345:respawn:/sbin/agetty -8 38400 tty6 linux<br />
<br />
<br />
$ exit ## exit chroot<br />
$ umount /tmp/install/dev<br />
$ umount /tmp/install/proc<br />
$ umount /tmp/install/sys<br />
$ umount /tmp/install<br />
</pre><br />
If not starting from a fresh install and one wants to rsync from an existing system:<br />
<pre><br />
$ mkfs.ext4 /dev/sdb1 ## format lv partition<br />
$ mkdir /tmp/install<br />
$ mount /dev/sdb1 /tmp/install<br />
$ mkdir /tmp/install/{proc,sys}<br />
$ chmod 555 /tmp/install/proc<br />
$ rsync -avzH --delete --exclude=proc/ --exclude=sys/ old_ooga:/ /tmp/install/<br />
<br />
$ vi /etc/xen/dom01 ## create config file<br />
# -*- mode: python; -*-<br />
kernel = "/boot/vmlinuz26"<br />
ramdisk = "/boot/kernel26.img"<br />
memory = 1024<br />
name = "dom01"<br />
vif = [ 'mac=00:16:3e:00:01:01' ]<br />
disk = [ 'phy:/dev/sdb1,xvda,w' ]<br />
dhcp="dhcp"<br />
hostname = "ooga"<br />
root = "/dev/xvda ro"<br />
<br />
$ xm create -c dom01<br />
</pre><br />
<br />
== Alternative method ==<br />
<br />
To set up Xen by hand, see [[Xen_Install]]. This page is marked for deletion because it's very old, so be careful. It details installing a custom xen kernel and the xen userland tools by hand, rather than by taking advantage of packages in the AUR, as described above. Perhaps someone can write an up to date howto in this page instead, so that we can delete the other article.<br />
<br />
==Arch as Xen guest (PV mode)==<br />
<br />
To get paravirtualization you need to install:<br />
* [http://aur.archlinux.org/packages.php?ID=16087 kernel26-xen]<br />
* (optional) [http://aur.archlinux.org/packages.php?ID=28591 xe-guest-utilities]<br />
<br />
Change mode to PV with commands (on dom0):<br />
xe vm-param-set uuid=<vm uuid> HVM-boot-policy=""<br />
xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub<br />
<br />
Edit /boot/grub/menu.lst and add kernel26-xen:<br />
# (1) Arch Linux (domU)<br />
title Arch Linux (domU)<br />
root (hd0,0)<br />
kernel /boot/vmlinuz26-xen root=/dev/xvda1 ro console=hvc0<br />
initrd /boot/kernel26-xen.img<br />
<br />
=== xe-guest-utilities ===<br />
To use xe-guest-utilities, add xenfs mount point into /etc/fstab:<br />
xenfs /proc/xen xenfs defaults 0 0<br />
and add xe-linux-distribution into /etc/rc.conf:DAEMONS array.<br />
<br />
===Notes===<br />
* pygrub does not show boot menu.<br />
* To avoid hwclock error messages, set HARDWARECLOCK="xen" in /etc/rc.conf (actually you can use any value here except "UTC" and "localtime")<br />
* If you want to return to hardware VM, set HVM-boot-policy="BIOS order"<br />
<br />
==Xen management tools==<br />
You can use following tools:<br />
* [http://aur.archlinux.org/packages.php?ID=36458 xvp] is web interface and vnc proxy.<br />
* [http://aur.archlinux.org/packages.php?ID=34398 openxencenter] or [http://aur.archlinux.org/packages.php?ID=36074 openxencenter-svn] is GUI similar to citrix xen console.<br />
<br />
After installing xvp, you need to generate /etc/xvp.conf with xvpdiscover tool and adjust your web server for using /srv/http/xvpweb/.<br />
<br />
==Resources==<br />
<br />
* Xen's homepage: [http://www.xen.org/]<br />
* The Xen Wiki: [http://wiki.xensource.com/xenwiki/]</div>Friesofthttps://wiki.archlinux.org/index.php?title=Xen&diff=107029Xen2010-05-23T12:29:01Z<p>Friesoft: Some rephrasing</p>
<hr />
<div>[[Category:Emulators (English)]]<br />
[[Category:HOWTOs (English)]]<br />
This document explains how to setup Xen for Arch Linux.<br />
<br />
==What is Xen?==<br />
According to the Xen development team: "The Xen hypervisor, the powerful open source industry standard for virtualization, offers a powerful, efficient, and secure feature set for virtualization of x86, x86_64, IA64, PowerPC, and other CPU architectures. It supports a wide range of guest operating systems including Windows®, Linux®, Solaris®, and various versions of the BSD operating systems."<br />
<br />
The Xen hypervisor is a thin layer of software which emulates a computer architecture. It is started by the boot loader and allows several operating systems to run simultaneously on top of it. Once the Xen hypervisor is loaded, is starts the "dom0" (for "domain 0"), or privileged domain, which in our case runs a modified Linux kernel (other possible dom0 operating systems are NetBSD and OpenSolaris). The dom0 kernel in the AUR is currently based on Linux kernel 2.6.31.5; hardware must be supported by this kernel to run Xen. Once the dom0 has started, one or more "domU" (for "user") or unprivileged domains can be started and controlled from the dom0.<br />
<br />
==Setting up Xen==<br />
<br />
===Installing the necessary packages===<br />
Before building xen4, you need to build the required package lib32-glibc-devel from the AUR first. <br />
<pre>yaourt -S lib32-glibc-devel</pre><br />
<br />
After installing it, create a temporary symlink:<br />
<pre>ln -s /opt/lib32/usr/include/gnu/stubs-32.h /usr/include/gnu</pre><br />
<br />
The next step is to install xen from the AUR. You can either install xen version 3 or 4 by choosing either the xen or xen4 package. This wiki focuses on xen4, to prevent confusion :)<br />
<pre><br />
yaourt -S libxenctrl ## this is currently missing from the dependencies of xen4 (23.5.2010)<br />
yaourt -S xen4<br />
</pre><br />
<br />
If you need only the xen-tools, version 4 can be installed by using the xen-hv-tools package.<br />
<br />
The next step is to build and install the dom0 kernel. To do so, build the kernel26-xen-dom0 package from the AUR.<br />
<br />
<pre>yaourt -S kernel26-xen-dom0</pre><br />
<br />
'''Please note: At the time of this writing (23.5.2010) the current version uses a patched kernel version 2.6.31, which won't compile using gcc 4.5.''' <br />
<br />
The problem can be worked around by temporarily downgrading gcc and gcc-libs to version 4.4.3. It may also be needed to build the gmp4 package.<br />
You can possibly find gcc-4.4.3 and gcc-libs-4.4.3 on ARM (see [http://wiki.archlinux.org/index.php/Downgrading_Packages#Finding_Your_Older_Version Downgrading Packages])<br />
<pre><br />
pacman -Ud gcc-*<br />
yaourt -S gmp4 # not sure if this is really needed<br />
</pre><br />
<br />
..and there you go: the building part has been finished. Now you can configure Grub and boot into the kernel that's just been built.<br />
<br />
===Configuring GRUB===<br />
Grub must be configured so that the Xen hypervisor is booted followed by the dom0 kernel. Add the following entry to /boot/grub/menu.lst:<br />
<br />
<pre><br />
title Xen with Arch Linux<br />
root (hd0,X)<br />
kernel /xen.gz dom0_mem=524288<br />
module /vmlinuz26-xen-dom0 root=/dev/sdaY ro console=tty0 vga=gfx-1024x768x8<br />
module /kernel26-xen-dom0.img<br />
</pre><br />
<br />
where X and Y are the appropriate numbers for your disk configuration; and dom0_mem, console, and vga are optional, customizable parameters. Nice little detail: you can use LVM volumes too. So instead of /dev/sdaY you can also fill in /dev/mapper/somelvm. Also, notice that the vga-parameter works a bit differently from usual kernel configuration lines. It has been discussed on the Xen-devel list: [http://lists.xensource.com/archives/html/xen-devel/2008-05/msg00576.html]<br />
<br />
The standard arch kernel can be use to boot the domUs. In order for this to work one must add 'xen-blkfront' to the modules array in /etc/mkinitcpio.conf:<br />
<br />
<pre><br />
MODULES="... xen-blkfront ..."<br />
</pre><br />
<br />
So, next step is to reboot into the xen kernel. <br />
<br />
Next step: start xend:<br />
<br />
<pre><br />
# /etc/rc.d/xend start<br />
</pre><br />
<br />
Allocating a fixed amount of memory is recommended when using xen. Also, if you're running IO intensive guests it might be a good idea to dedicate (pin) a CPU core only for dom0 use. Please see the external [http://wiki.xensource.com/xenwiki/XenCommonProblems XenCommonProblems] wiki page section "Can I dedicate a cpu core (or cores) only for dom0?" for more information. <br />
<br />
===Configuring GRUB2===<br />
<br />
This works just like with Grub, but here you need to use the command 'multiboot' instead of using 'kernel'. So it becomes:<br />
<pre><br />
# (2) Arch Linux(XEN)<br />
menuentry "Arch Linux(XEN)" {<br />
set root=(hd0,X)<br />
multiboot /boot/xen.gz dom0_mem=2048M<br />
module /boot/vmlinuz26-xen-dom0 root=/dev/sdaY ro<br />
module /boot/kernel26-xen-dom0.gz<br />
</pre><br />
<br />
If you had success when booting up into the dom0 kernel, we can continue.<br />
<br />
===Add domU instances===<br />
<br />
The basic idea behind adding a domU is as follows. We must get the domU kernels, allocate space for the virtual hard disk, create a configuration file for the domU, and finally start the domU with xm.<br />
<br />
<pre><br />
$ mkfs.ext4 /dev/sdb1 ## format partition<br />
$ mkdir /tmp/install<br />
$ mount /dev/sdb1 /tmp/install<br />
$ mkdir /tmp/install/var/lib/pacman<br />
$ mkdir /tmp/install/var/cache/pacman/pkg<br />
$ pacman -Sy base -r /tmp/install --cache-dir /tmp/install/var/cache/pacman/pkg<br />
$ mount -o bind /dev /tmp/install/dev<br />
$ mount -t proc none /tmp/install/proc<br />
$ mount -o bind /sys /tmp/install/sys<br />
$ chroot /tmp/install /bin/bash<br />
$ vi /etc/resolv.conf<br />
$ vi /etc/fstab<br />
/dev/xvda / ext4 defaults 0 1<br />
<br />
$ vi /etc/inittab<br />
c1:2345:respawn:/sbin/agetty -8 38400 hvc0 linux<br />
#c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux<br />
#c2:2345:respawn:/sbin/agetty -8 38400 tty2 linux<br />
#c3:2345:respawn:/sbin/agetty -8 38400 tty3 linux<br />
#c4:2345:respawn:/sbin/agetty -8 38400 tty4 linux<br />
#c5:2345:respawn:/sbin/agetty -8 38400 tty5 linux<br />
#c6:2345:respawn:/sbin/agetty -8 38400 tty6 linux<br />
<br />
<br />
$ exit ## exit chroot<br />
$ umount /tmp/install/dev<br />
$ umount /tmp/install/proc<br />
$ umount /tmp/install/sys<br />
$ umount /tmp/install<br />
</pre><br />
If not starting from a fresh install and one wants to rsync from an existing system:<br />
<pre><br />
$ mkfs.ext4 /dev/sdb1 ## format lv partition<br />
$ mkdir /tmp/install<br />
$ mount /dev/sdb1 /tmp/install<br />
$ mkdir /tmp/install/{proc,sys}<br />
$ chmod 555 /tmp/install/proc<br />
$ rsync -avzH --delete --exclude=proc/ --exclude=sys/ old_ooga:/ /tmp/install/<br />
<br />
$ vi /etc/xen/dom01 ## create config file<br />
# -*- mode: python; -*-<br />
kernel = "/boot/vmlinuz26"<br />
ramdisk = "/boot/kernel26.img"<br />
memory = 1024<br />
name = "dom01"<br />
vif = [ 'mac=00:16:3e:00:01:01' ]<br />
disk = [ 'phy:/dev/sdb1,xvda,w' ]<br />
dhcp="dhcp"<br />
hostname = "ooga"<br />
root = "/dev/xvda ro"<br />
<br />
$ xm create -c dom01<br />
</pre><br />
<br />
== Alternative method ==<br />
<br />
To set up Xen by hand, see [[Xen_Install]]. This page is marked for deletion because it's very old, so be careful. It details installing a custom xen kernel and the xen userland tools by hand, rather than by taking advantage of packages in the AUR, as described above. Perhaps someone can write an up to date howto in this page instead, so that we can delete the other article.<br />
<br />
==Arch as Xen guest (PV mode)==<br />
<br />
To get paravirtualization you need to install:<br />
* [http://aur.archlinux.org/packages.php?ID=16087 kernel26-xen]<br />
* (optional) [http://aur.archlinux.org/packages.php?ID=28591 xe-guest-utilities]<br />
<br />
Change mode to PV with commands (on dom0):<br />
xe vm-param-set uuid=<vm uuid> HVM-boot-policy=""<br />
xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub<br />
<br />
Edit /boot/grub/menu.lst and add kernel26-xen:<br />
# (1) Arch Linux (domU)<br />
title Arch Linux (domU)<br />
root (hd0,0)<br />
kernel /boot/vmlinuz26-xen root=/dev/xvda1 ro console=hvc0<br />
initrd /boot/kernel26-xen.img<br />
<br />
=== xe-guest-utilities ===<br />
To use xe-guest-utilities, add xenfs mount point into /etc/fstab:<br />
xenfs /proc/xen xenfs defaults 0 0<br />
and add xe-linux-distribution into /etc/rc.conf:DAEMONS array.<br />
<br />
===Notes===<br />
* pygrub does not show a boot menu.<br />
* To avoid hwclock error messages, set HARDWARECLOCK="xen" in /etc/rc.conf (any value other than "UTC" and "localtime" will do).<br />
* To switch back to a hardware VM (HVM), set HVM-boot-policy="BIOS order".<br />
<br />
==Xen management tools==<br />
You can use the following tools:<br />
* [http://aur.archlinux.org/packages.php?ID=36458 xvp] is a web interface and VNC proxy.<br />
* [http://aur.archlinux.org/packages.php?ID=34398 openxencenter] or [http://aur.archlinux.org/packages.php?ID=36074 openxencenter-svn] is a GUI similar to the Citrix XenCenter console.<br />
<br />
After installing xvp, generate /etc/xvp.conf with the xvpdiscover tool and configure your web server to serve /srv/http/xvpweb/.<br />
<br />
==Resources==<br />
<br />
* Xen's homepage: [http://www.xen.org/]<br />
* The Xen Wiki: [http://wiki.xensource.com/xenwiki/]</div>Friesofthttps://wiki.archlinux.org/index.php?title=Xen&diff=107023Xen2010-05-23T11:41:08Z<p>Friesoft: Moved gcc 4.4.3 to the kernel section - seems like I've mixed things a bit up :(</p>
<hr />
<div>[[Category:Emulators (English)]]<br />
[[Category:HOWTOs (English)]]<br />
This document explains how to set up Xen for Arch Linux.<br />
<br />
==What is Xen?==<br />
According to the Xen development team: "The Xen hypervisor, the powerful open source industry standard for virtualization, offers a powerful, efficient, and secure feature set for virtualization of x86, x86_64, IA64, PowerPC, and other CPU architectures. It supports a wide range of guest operating systems including Windows®, Linux®, Solaris®, and various versions of the BSD operating systems."<br />
<br />
The Xen hypervisor is a thin layer of software which emulates a computer architecture. It is started by the boot loader and allows several operating systems to run simultaneously on top of it. Once the Xen hypervisor is loaded, it starts the "dom0" (for "domain 0"), or privileged domain, which in our case runs a modified Linux kernel (other possible dom0 operating systems are NetBSD and OpenSolaris). The dom0 kernel in the AUR is currently based on Linux kernel 2.6.31.5; your hardware must be supported by this kernel to run Xen. Once the dom0 has started, one or more unprivileged domains ("domU") can be started and controlled from the dom0.<br />
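As a quick sanity check once Xen is running, the kernel you are in can tell you whether it is the privileged domain: /proc/xen/capabilities contains the string control_d only in the dom0. This is a general Xen fact, not something specific to the AUR packages; on a machine without Xen the sketch below simply reports that it is not a dom0.<br />

```shell
# dom0 exposes "control_d" in /proc/xen/capabilities; domUs (and
# non-Xen systems, where the file does not exist) do not.
if grep -q control_d /proc/xen/capabilities 2>/dev/null; then
    echo "running as dom0"
else
    echo "not a dom0 (domU, or Xen not loaded)"
fi
```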
<br />
==Setting up Xen==<br />
<br />
===Installing the necessary packages===<br />
Before building xen4, you need to build the required package lib32-glibc-devel from the AUR first. <br />
<pre>yaourt -S lib32-glibc-devel</pre><br />
<br />
After installing it, create a temporary symlink:<br />
<pre>ln -s /opt/lib32/usr/include/gnu/stubs-32.h /usr/include/gnu</pre><br />
<br />
The next step is to install xen from the AUR. You can either install xen version 3 or 4 by choosing either the xen or xen4 package. This wiki focuses on xen4, to prevent confusion :)<br />
<pre><br />
yaourt -S libxenctrl ## this is currently missing from the dependencies of xen4 (23.5.2010)<br />
yaourt -S xen4<br />
</pre><br />
<br />
If you need only the xen-tools, version 4 can be installed by using the xen-hv-tools package.<br />
<br />
The next step is to build and install the dom0 kernel. To do so, build the kernel26-xen-dom0 package from the AUR.<br />
<br />
<pre>yaourt -S kernel26-xen-dom0</pre><br />
<br />
'''Please note: At the time of this writing the current version uses a patched kernel version 2.6.31, and it won't compile when you use gcc 4.5.''' I worked around this problem by temporarily downgrading gcc and gcc-libs to version 4.4.3. When I found out that it wasn't enough because of a missing library, I consequently built the gmp4 package.<br />
You may find gcc-4.4.3 and gcc-libs-4.4.3 in the ARM (Arch Rollback Machine; see [http://wiki.archlinux.org/index.php/Downgrading_Packages#Finding_Your_Older_Version Downgrading Packages]).<br />
<pre><br />
pacman -Ud gcc-*<br />
yaourt -S gmp4 # not sure if this is really needed<br />
</pre><br />
<br />
...and there you go: the building part is finished. Now you can configure GRUB and boot into the kernel you just built.<br />
<br />
===Configuring GRUB===<br />
Grub must be configured so that the Xen hypervisor is booted followed by the dom0 kernel. Add the following entry to /boot/grub/menu.lst:<br />
<br />
<pre><br />
title Xen with Arch Linux<br />
root (hd0,X)<br />
kernel /xen.gz dom0_mem=524288<br />
module /vmlinuz26-xen-dom0 root=/dev/sdaY ro console=tty0 vga=gfx-1024x768x8<br />
module /kernel26-xen-dom0.img<br />
</pre><br />
<br />
where X and Y are the appropriate numbers for your disk configuration; dom0_mem, console, and vga are optional, customizable parameters. A nice detail: you can use LVM volumes too, so instead of /dev/sdaY you can specify something like /dev/mapper/somelvm. Also note that the vga parameter works a bit differently from the usual kernel command line; this has been discussed on the xen-devel list: [http://lists.xensource.com/archives/html/xen-devel/2008-05/msg00576.html]<br />
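For example, with the root filesystem on LVM the entry might look like this (the volume group and logical volume names are hypothetical):<br />
<pre><br />
title Xen with Arch Linux (LVM root)<br />
root (hd0,X)<br />
kernel /xen.gz dom0_mem=524288<br />
module /vmlinuz26-xen-dom0 root=/dev/mapper/vg0-root ro console=tty0<br />
module /kernel26-xen-dom0.img<br />
</pre><br />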
<br />
The standard Arch kernel can be used to boot the domUs. For this to work, add 'xen-blkfront' to the MODULES array in /etc/mkinitcpio.conf:<br />
<br />
<pre><br />
MODULES="... xen-blkfront ..."<br />
</pre><br />
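The edit can also be scripted. The following sketch works on a throwaway copy so it can be run anywhere; on a real system point it at /etc/mkinitcpio.conf and then regenerate the image with the kernel preset of the time, i.e. mkinitcpio -p kernel26.<br />

```shell
# Demonstrated on a stand-in file; the real target is /etc/mkinitcpio.conf.
conf=/tmp/mkinitcpio.conf.example
printf 'MODULES="ext4"\n' > "$conf"
# Prepend xen-blkfront to the MODULES array.
sed -i 's/^MODULES="/MODULES="xen-blkfront /' "$conf"
cat "$conf"    # -> MODULES="xen-blkfront ext4"
```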
<br />
Next, reboot into the Xen kernel.<br />
<br />
Then start xend:<br />
<br />
<pre><br />
# /etc/rc.d/xend start<br />
</pre><br />
<br />
Allocating a fixed amount of memory for the dom0 is recommended when using Xen. Also, if you are running I/O-intensive guests, it can be a good idea to dedicate (pin) a CPU core to the dom0. See the section "Can I dedicate a cpu core (or cores) only for dom0?" on the external [http://wiki.xensource.com/xenwiki/XenCommonProblems XenCommonProblems] wiki page for more information.<br />
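For reference, both hints translate into hypervisor parameters on the kernel line shown above. These are standard Xen boot options (dom0_max_vcpus and dom0_vcpus_pin); adjust the values to your hardware:<br />
<pre><br />
kernel /xen.gz dom0_mem=524288 dom0_max_vcpus=1 dom0_vcpus_pin<br />
</pre><br />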
<br />
===Configuring GRUB2===<br />
<br />
This works much like GRUB legacy, except that the hypervisor is loaded with the 'multiboot' command instead of 'kernel'. So it becomes:<br />
<pre><br />
# (2) Arch Linux(XEN)<br />
menuentry "Arch Linux(XEN)" {<br />
set root=(hd0,X)<br />
multiboot /boot/xen.gz dom0_mem=2048M<br />
module /boot/vmlinuz26-xen-dom0 root=/dev/sdaY ro<br />
module /boot/kernel26-xen-dom0.img<br />
}<br />
</pre><br />
<br />
If the dom0 kernel boots successfully, you can continue.<br />
<br />
===Add domU instances===<br />
<br />
The basic idea behind adding a domU is as follows. We must get the domU kernels, allocate space for the virtual hard disk, create a configuration file for the domU, and finally start the domU with xm.<br />
<br />
<pre><br />
$ mkfs.ext4 /dev/sdb1 ## format partition<br />
$ mkdir /tmp/install<br />
$ mount /dev/sdb1 /tmp/install<br />
$ mkdir /tmp/install/var/lib/pacman<br />
$ mkdir /tmp/install/var/cache/pacman/pkg<br />
$ pacman -Sy base -r /tmp/install --cachedir /tmp/install/var/cache/pacman/pkg<br />
$ mount -o bind /dev /tmp/install/dev<br />
$ mount -t proc none /tmp/install/proc<br />
$ mount -o bind /sys /tmp/install/sys<br />
$ chroot /tmp/install /bin/bash<br />
$ vi /etc/resolv.conf<br />
$ vi /etc/fstab<br />
/dev/xvda / ext4 defaults 0 1<br />
<br />
$ vi /etc/inittab<br />
c1:2345:respawn:/sbin/agetty -8 38400 hvc0 linux<br />
#c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux<br />
#c2:2345:respawn:/sbin/agetty -8 38400 tty2 linux<br />
#c3:2345:respawn:/sbin/agetty -8 38400 tty3 linux<br />
#c4:2345:respawn:/sbin/agetty -8 38400 tty4 linux<br />
#c5:2345:respawn:/sbin/agetty -8 38400 tty5 linux<br />
#c6:2345:respawn:/sbin/agetty -8 38400 tty6 linux<br />
<br />
<br />
$ exit ## exit chroot<br />
$ umount /tmp/install/dev<br />
$ umount /tmp/install/proc<br />
$ umount /tmp/install/sys<br />
$ umount /tmp/install<br />
</pre><br />
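Instead of dedicating a physical partition like /dev/sdb1 above, the domU disk can also live in an ordinary file. A sketch (the path and size are arbitrary examples): create a sparse image, format it with mkfs.ext4 as above, and reference it with a file: rather than phy: disk spec in the domU config.<br />

```shell
# Create a 4 GiB sparse image to back the domU's xvda.
dd if=/dev/zero of=/tmp/dom01.img bs=1M count=0 seek=4096
ls -l /tmp/dom01.img
# In /etc/xen/dom01 the disk line would then read:
#   disk = [ 'file:/tmp/dom01.img,xvda,w' ]
```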
If you are not starting from a fresh install and want to rsync from an existing system instead:<br />
<pre><br />
$ mkfs.ext4 /dev/sdb1 ## format lv partition<br />
$ mkdir /tmp/install<br />
$ mount /dev/sdb1 /tmp/install<br />
$ mkdir /tmp/install/{proc,sys}<br />
$ chmod 555 /tmp/install/proc<br />
$ rsync -avzH --delete --exclude=proc/ --exclude=sys/ old_ooga:/ /tmp/install/<br />
<br />
$ vi /etc/xen/dom01 ## create config file<br />
# -*- mode: python; -*-<br />
kernel = "/boot/vmlinuz26"<br />
ramdisk = "/boot/kernel26.img"<br />
memory = 1024<br />
name = "dom01"<br />
vif = [ 'mac=00:16:3e:00:01:01' ]<br />
disk = [ 'phy:/dev/sdb1,xvda,w' ]<br />
dhcp="dhcp"<br />
hostname = "ooga"<br />
root = "/dev/xvda ro"<br />
<br />
$ xm create -c dom01<br />
</pre><br />
<br />
== Alternative method ==<br />
<br />
To set up Xen by hand, see [[Xen_Install]]. Be careful: that page is marked for deletion because it is very old. It details installing a custom Xen kernel and the Xen userland tools manually, rather than using the AUR packages as described above. Perhaps someone can write an up-to-date howto here instead, so that the other article can be deleted.<br />
<br />
==Arch as Xen guest (PV mode)==<br />
<br />
To get paravirtualization you need to install:<br />
* [http://aur.archlinux.org/packages.php?ID=16087 kernel26-xen]<br />
* (optional) [http://aur.archlinux.org/packages.php?ID=28591 xe-guest-utilities]<br />
<br />
Switch the guest to PV mode with the following commands (run on the dom0):<br />
xe vm-param-set uuid=<vm uuid> HVM-boot-policy=""<br />
xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub<br />
<br />
Edit /boot/grub/menu.lst and add kernel26-xen:<br />
# (1) Arch Linux (domU)<br />
title Arch Linux (domU)<br />
root (hd0,0)<br />
kernel /boot/vmlinuz26-xen root=/dev/xvda1 ro console=hvc0<br />
initrd /boot/kernel26-xen.img<br />
<br />
=== xe-guest-utilities ===<br />
To use xe-guest-utilities, add a xenfs mount point to /etc/fstab:<br />
xenfs /proc/xen xenfs defaults 0 0<br />
and add xe-linux-distribution to the DAEMONS array in /etc/rc.conf.<br />
<br />
===Notes===<br />
* pygrub does not show a boot menu.<br />
* To avoid hwclock error messages, set HARDWARECLOCK="xen" in /etc/rc.conf (any value other than "UTC" and "localtime" will do).<br />
* To switch back to a hardware VM (HVM), set HVM-boot-policy="BIOS order".<br />
<br />
==Xen management tools==<br />
You can use the following tools:<br />
* [http://aur.archlinux.org/packages.php?ID=36458 xvp] is a web interface and VNC proxy.<br />
* [http://aur.archlinux.org/packages.php?ID=34398 openxencenter] or [http://aur.archlinux.org/packages.php?ID=36074 openxencenter-svn] is a GUI similar to the Citrix XenCenter console.<br />
<br />
After installing xvp, generate /etc/xvp.conf with the xvpdiscover tool and configure your web server to serve /srv/http/xvpweb/.<br />
<br />
==Resources==<br />
<br />
* Xen's homepage: [http://www.xen.org/]<br />
* The Xen Wiki: [http://wiki.xensource.com/xenwiki/]</div>Friesofthttps://wiki.archlinux.org/index.php?title=Xen&diff=107014Xen2010-05-23T10:53:53Z<p>Friesoft: Added commands and note about install</p>
<hr />
<div>[[Category:Emulators (English)]]<br />
[[Category:HOWTOs (English)]]<br />
This document explains how to set up Xen for Arch Linux.<br />
<br />
==What is Xen?==<br />
According to the Xen development team: "The Xen hypervisor, the powerful open source industry standard for virtualization, offers a powerful, efficient, and secure feature set for virtualization of x86, x86_64, IA64, PowerPC, and other CPU architectures. It supports a wide range of guest operating systems including Windows®, Linux®, Solaris®, and various versions of the BSD operating systems."<br />
<br />
The Xen hypervisor is a thin layer of software which emulates a computer architecture. It is started by the boot loader and allows several operating systems to run simultaneously on top of it. Once the Xen hypervisor is loaded, it starts the "dom0" (for "domain 0"), or privileged domain, which in our case runs a modified Linux kernel (other possible dom0 operating systems are NetBSD and OpenSolaris). The dom0 kernel in the AUR is currently based on Linux kernel 2.6.31.5; your hardware must be supported by this kernel to run Xen. Once the dom0 has started, one or more unprivileged domains ("domU") can be started and controlled from the dom0.<br />
<br />
==Setting up Xen==<br />
<br />
===Installing the necessary packages===<br />
Before building xen4, you need to build the required package lib32-glibc-devel from the AUR first. <br />
<pre>yaourt -S lib32-glibc-devel</pre><br />
<br />
After installing it, create a temporary symlink:<br />
<pre>ln -s /opt/lib32/usr/include/gnu/stubs-32.h /usr/include/gnu</pre><br />
<br />
The next step is to install Xen from the AUR. You can install either version 3 or version 4 by choosing the xen or the xen4 package, respectively. This article focuses on xen4, to prevent confusion.<br />
<pre>yaourt -S xen4</pre><br />
<br />
If you need only the xen-tools, version 4 can be installed by using the xen-hv-tools package.<br />
<br />
'''Please note: at the time of this writing, the current version uses a patched 2.6.31 kernel, which does not compile with gcc 4.5.''' I worked around this problem by temporarily downgrading gcc and gcc-libs to version 4.4.3. Since that alone was not enough because of a missing library, I also built the gmp4 package.<br />
You can possibly find gcc-4.4.3 and gcc-libs-4.4.3 in the Arch Rollback Machine (see [http://wiki.archlinux.org/index.php/Downgrading_Packages#Finding_Your_Older_Version Downgrading Packages])<br />
<pre><br />
pacman -Ud gcc-*<br />
yaourt -S gmp4<br />
</pre><br />
Note: this doesn't work for me as described (friesoft, 23.5.2010)<br />
<br />
The next step is to build and install the dom0 kernel. To do so, build the kernel26-xen-dom0 package from the AUR.<br />
<br />
<pre>yaourt -S kernel26-xen-dom0</pre><br />
<br />
...and there you go: the building part is finished. Now you can configure GRUB and boot into the kernel that has just been built.<br />
<br />
===Configuring GRUB===<br />
Grub must be configured so that the Xen hypervisor is booted followed by the dom0 kernel. Add the following entry to /boot/grub/menu.lst:<br />
<br />
<pre><br />
title Xen with Arch Linux<br />
root (hd0,X)<br />
kernel /xen.gz dom0_mem=524288<br />
module /vmlinuz26-xen-dom0 root=/dev/sdaY ro console=tty0 vga=gfx-1024x768x8<br />
module /kernel26-xen-dom0.img<br />
</pre><br />
<br />
where X and Y are the appropriate numbers for your disk configuration; dom0_mem, console, and vga are optional, customizable parameters. Note that you can use LVM volumes too: instead of /dev/sdaY you can also fill in /dev/mapper/somelvm. Also, the vga parameter works a bit differently from the usual kernel configuration lines; this has been discussed on the xen-devel list: [http://lists.xensource.com/archives/html/xen-devel/2008-05/msg00576.html]<br />
<br />
The standard Arch kernel can be used to boot the domUs. In order for this to work, one must add 'xen-blkfront' to the MODULES array in /etc/mkinitcpio.conf:<br />
<br />
<pre><br />
MODULES="... xen-blkfront ..."<br />
</pre><br />
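After changing /etc/mkinitcpio.conf, the initramfs has to be regenerated so the module is actually included. A quick sketch; this assumes the stock kernel26 preset name:<br />
<br />
<pre><br />
# mkinitcpio -p kernel26  ## rebuild the initramfs using the kernel26 preset<br />
</pre><br />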
<br />
So, the next step is to reboot into the Xen kernel. <br />
<br />
Then start xend:<br />
<br />
<pre><br />
# /etc/rc.d/xend start<br />
</pre><br />
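To verify that the hypervisor and xend are up, a quick sanity check (the exact output depends on your hardware):<br />
<br />
<pre><br />
# xm info   ## hypervisor details: version, total memory, number of CPUs<br />
# xm list   ## should list Domain-0 as running<br />
</pre><br />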
<br />
Allocating a fixed amount of memory is recommended when using Xen. Also, if you are running I/O-intensive guests, it might be a good idea to dedicate (pin) a CPU core for dom0 use only. Please see the external [http://wiki.xensource.com/xenwiki/XenCommonProblems XenCommonProblems] wiki page, section "Can I dedicate a cpu core (or cores) only for dom0?", for more information. <br />
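As a minimal sketch of such pinning (this assumes you want dom0 restricted to the first physical core; adjust to your setup):<br />
<br />
<pre><br />
# xm vcpu-list              ## show current vCPU-to-CPU assignments<br />
# xm vcpu-pin Domain-0 0 0  ## pin dom0's vCPU 0 to physical CPU 0<br />
</pre><br />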
<br />
===Configuring GRUB2===<br />
<br />
This works just like with GRUB, except that you use the command 'multiboot' instead of 'kernel'. So the entry becomes:<br />
<pre><br />
# (2) Arch Linux(XEN)<br />
menuentry "Arch Linux(XEN)" {<br />
set root=(hd0,X)<br />
multiboot /boot/xen.gz dom0_mem=2048M<br />
module /boot/vmlinuz26-xen-dom0 root=/dev/sdaY ro<br />
module /boot/kernel26-xen-dom0.img<br />
}<br />
</pre><br />
<br />
If booting into the dom0 kernel succeeded, we can continue.<br />
<br />
===Add domU instances===<br />
<br />
The basic idea behind adding a domU is as follows: get the domU kernel, allocate space for the virtual hard disk, create a configuration file for the domU, and finally start the domU with xm.<br />
<br />
<pre><br />
$ mkfs.ext4 /dev/sdb1 ## format partition<br />
$ mkdir /tmp/install<br />
$ mount /dev/sdb1 /tmp/install<br />
$ mkdir -p /tmp/install/var/lib/pacman<br />
$ mkdir -p /tmp/install/var/cache/pacman/pkg<br />
$ pacman -Sy base -r /tmp/install --cachedir /tmp/install/var/cache/pacman/pkg<br />
$ mount -o bind /dev /tmp/install/dev<br />
$ mount -t proc none /tmp/install/proc<br />
$ mount -o bind /sys /tmp/install/sys<br />
$ chroot /tmp/install /bin/bash<br />
$ vi /etc/resolv.conf<br />
$ vi /etc/fstab<br />
/dev/xvda / ext4 defaults 0 1<br />
<br />
$ vi /etc/inittab<br />
c1:2345:respawn:/sbin/agetty -8 38400 hvc0 linux<br />
#c1:2345:respawn:/sbin/agetty -8 38400 tty1 linux<br />
#c2:2345:respawn:/sbin/agetty -8 38400 tty2 linux<br />
#c3:2345:respawn:/sbin/agetty -8 38400 tty3 linux<br />
#c4:2345:respawn:/sbin/agetty -8 38400 tty4 linux<br />
#c5:2345:respawn:/sbin/agetty -8 38400 tty5 linux<br />
#c6:2345:respawn:/sbin/agetty -8 38400 tty6 linux<br />
<br />
<br />
$ exit ## exit chroot<br />
$ umount /tmp/install/dev<br />
$ umount /tmp/install/proc<br />
$ umount /tmp/install/sys<br />
$ umount /tmp/install<br />
</pre><br />
If not starting from a fresh install and one wants to rsync from an existing system:<br />
<pre><br />
$ mkfs.ext4 /dev/sdb1 ## format lv partition<br />
$ mkdir /tmp/install<br />
$ mount /dev/sdb1 /tmp/install<br />
$ mkdir /tmp/install/{proc,sys}<br />
$ chmod 555 /tmp/install/proc<br />
$ rsync -avzH --delete --exclude=proc/ --exclude=sys/ old_ooga:/ /tmp/install/<br />
<br />
$ vi /etc/xen/dom01 ## create config file<br />
# -*- mode: python; -*-<br />
kernel = "/boot/vmlinuz26"<br />
ramdisk = "/boot/kernel26.img"<br />
memory = 1024<br />
name = "dom01"<br />
vif = [ 'mac=00:16:3e:00:01:01' ]<br />
disk = [ 'phy:/dev/sdb1,xvda,w' ]<br />
dhcp="dhcp"<br />
hostname = "ooga"<br />
root = "/dev/xvda ro"<br />
<br />
$ xm create -c dom01<br />
</pre><br />
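Once the domU is running, it can be managed from the dom0 with the usual xm subcommands (dom01 is the name from the example configuration above):<br />
<br />
<pre><br />
$ xm list            ## list all running domains<br />
$ xm console dom01   ## attach to the domU console (detach with Ctrl-])<br />
$ xm shutdown dom01  ## cleanly shut the domU down<br />
$ xm destroy dom01   ## hard power-off, if the domU hangs<br />
</pre><br />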
<br />
== Alternative method ==<br />
<br />
To set up Xen by hand, see [[Xen_Install]]. That page is marked for deletion because it is very old, so be careful. It describes installing a custom Xen kernel and the Xen userland tools by hand, rather than taking advantage of the AUR packages as described above. Perhaps someone can write an up-to-date howto on this page instead, so that the other article can be deleted.<br />
<br />
==Arch as Xen guest (PV mode)==<br />
<br />
To get paravirtualization you need to install:<br />
* [http://aur.archlinux.org/packages.php?ID=16087 kernel26-xen]<br />
* (optional) [http://aur.archlinux.org/packages.php?ID=28591 xe-guest-utilities]<br />
<br />
Change the mode to PV with the following commands (on the dom0):<br />
xe vm-param-set uuid=<vm uuid> HVM-boot-policy=""<br />
xe vm-param-set uuid=<vm uuid> PV-bootloader=pygrub<br />
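The &lt;vm uuid&gt; placeholder can be looked up on the dom0. This assumes the xe toolstack that the commands above imply, and "myvm" is a hypothetical name-label:<br />
<br />
<pre><br />
xe vm-list                            ## list all VMs with their uuid and name-label<br />
xe vm-list name-label=myvm --minimal  ## print only the uuid of a given VM<br />
</pre><br />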
<br />
Edit /boot/grub/menu.lst and add kernel26-xen:<br />
# (1) Arch Linux (domU)<br />
title Arch Linux (domU)<br />
root (hd0,0)<br />
kernel /boot/vmlinuz26-xen root=/dev/xvda1 ro console=hvc0<br />
initrd /boot/kernel26-xen.img<br />
<br />
=== xe-guest-utilities ===<br />
To use xe-guest-utilities, add xenfs mount point into /etc/fstab:<br />
xenfs /proc/xen xenfs defaults 0 0<br />
and add xe-linux-distribution to the DAEMONS array in /etc/rc.conf.<br />
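To start everything without a reboot (assuming the package installs the usual rc script under /etc/rc.d/):<br />
<br />
<pre><br />
# mount /proc/xen                        ## mount xenfs as defined in fstab<br />
# /etc/rc.d/xe-linux-distribution start  ## start the guest utilities daemon<br />
</pre><br />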
<br />
===Notes===<br />
* pygrub does not show the boot menu.<br />
* To avoid hwclock error messages, set HARDWARECLOCK="xen" in /etc/rc.conf (actually you can use any value here except "UTC" and "localtime")<br />
* If you want to return to a hardware-assisted VM (HVM), set HVM-boot-policy="BIOS order"<br />
<br />
==Xen management tools==<br />
You can use the following tools:<br />
* [http://aur.archlinux.org/packages.php?ID=36458 xvp] is a web interface and VNC proxy.<br />
* [http://aur.archlinux.org/packages.php?ID=34398 openxencenter] or [http://aur.archlinux.org/packages.php?ID=36074 openxencenter-svn] is a GUI similar to the Citrix XenCenter console.<br />
<br />
After installing xvp, you need to generate /etc/xvp.conf with the xvpdiscover tool and configure your web server to serve /srv/http/xvpweb/.<br />
<br />
==Resources==<br />
<br />
* Xen's homepage: [http://www.xen.org/]<br />
* The Xen Wiki: [http://wiki.xensource.com/xenwiki/]</div>Friesoft