Arch Linux on a VPS<br />
<br />
[[Category:Getting and installing Arch]]<br />
[[Category:Virtualization]]<br />
{{Article summary start}}<br />
{{Article summary text|This article discusses the use of Arch Linux on Virtual Private Servers, and includes some fixes and installation instructions specific to VPSes.}}<br />
{{Article summary heading|Related}}<br />
{{Article summary wiki|Comprehensive Server Guide}}<br />
{{Article summary end}}<br />
From [[Wikipedia:Virtual private server]]:<br />
<br />
:''Virtual private server (VPS) is a term used by Internet hosting services to refer to a virtual machine. The term is used for emphasizing that the virtual machine, although running in software on the same physical computer as other customers' virtual machines, is in many respects functionally equivalent to a separate physical computer, is dedicated to the individual customer's needs, has the privacy of a separate physical computer, and can be configured to run server software.''<br />
<br />
==Providers that offer Arch Linux==<br />
<br />
{{Warning|We cannot vouch for the honesty or quality of any provider. Please conduct due diligence before ordering.}}<br />
{{Note|This list is for providers with a convenient Arch Linux image. Using Arch on other providers is probably possible, but would require loading custom ISOs or disk images or [[Installation Chroot|installing under chroot]].}}<br />
<br />
{| border="1"<br />
! Provider !! Arch Release !! Virtualization !! Locations !! Notes<br />
|-<br />
| [http://123systems.net 123 Systems] || 2010.05 i686/x86_64 || OpenVZ || Dallas, TX || Arch available as a selection upon reinstall. Very old (2.6.18-308) kernel - See [[Virtual_Private_Server#OpenVZ:_kernel_too_old_for_glibc|OpenVZ troubleshooting]].<br />
|-<br />
| [http://afterburst.com/ Afterburst] || 2010.05 i686/x86_64 || OpenVZ || Miami (USA), Falkenstein (Germany) || Formerly FanaticalVPS. The kernel version depends on which node your VPS is on: the Miami nodes are fine (2.6.32-042stab062.2), but some of the German nodes require a [[Virtual_Private_Server#OpenVZ:_kernel_too_old_for_glibc|custom glibc]].<br />
|-<br />
| [http://alienvps.com/ AlienVPS] || 2010.05 || Xen, KVM || Los Angeles, New York ||<br />
|-<br />
| [https://www.clodo.ru/ Clodo.ru] || 2011.?? || Xen || Moscow || Can pay per hour. Lists an invalid release version of the installer.<br />
|-<br />
| [http://en.edis.at/ Edis] || 2011.08 i686/x86_64 || vServer, KVM || Austria, Chile, Germany, France, Hong Kong, Italy, Iceland, Poland, Sweden, Switzerland, Spain, UK, USA ||<br />
|-<br />
| [http://eoreality.net/ EOReality] || (?) i686/x86_64 || OpenVZ || Chicago || Requires the special glibc-vps repository for this provider; see [[Virtual_Private_Server#OpenVZ:_kernel_too_old_for_glibc|OpenVZ troubleshooting]] for instructions. You will also need to remove heimdal.<br />
|-<br />
| [https://www.directvps.nl/ DirectVPS] || 2012.09 x86_64 || Xen || Amsterdam, Rotterdam || <br />
|-<br />
| [http://generation-host.com Generation-Host] || 2012.07 || Xen || Chicago IL, Clifton NJ and Toronto ON Canada ||<br />
|-<br />
| [https://www.gigatux.com/virtual.php GigaTux] || 2011.08 x86_64 || Xen || Chicago, Frankfurt, Israel, London, San Jose ||<br />
|-<br />
| [http://www.vr.org/ Host Virtual] || 2011.08 || Xen || Amsterdam, Chennai (Madras), Chicago, Dallas, Hong Kong, London, Los Angeles, New York, Paris, Reston, San Jose ||<br />
|-<br />
| [https://hostigation.com/ Hostigation] || 2010.05 i686 || OpenVZ, KVM || Charlotte, Los Angeles || You can [[Migrating Between Architectures Without Reinstalling|migrate to x86_64]].<br />
|-<br />
| [http://www.intovps.com IntoVPS] || 2012.05 i686/x86_64 || OpenVZ || Amsterdam, Bucharest, Dallas, Fremont, London ||<br />
|-<br />
| [https://www.linode.com Linode.com] || 2012.07 || Xen || Atlanta, Dallas, Fremont, London, Newark, Tokyo || Uses a custom kernel; do not install the {{pkg|linux}} package.<br />
|-<br />
| [http://lylix.net/home Lylix] || 2007.08 || ? || ? ||<br />
|-<br />
| [http://www.nodedeploy.com Node Deploy] || ? || OpenVZ, KVM || LA, Germany || Unmanaged; uses the SolusVM control panel.<br />
|-<br />
| [http://netcup.de Netcup] || 2011.10 x86_64 || vServer || Germany || ''Very'' poor customer service; subscriptions cannot be cancelled without posting a signed letter.<br />
|-<br />
| [http://onepoundwebhosting.co.uk OnePoundWebHosting] || 2012.09 x86_64 || Xen || UK ||<br />
|-<br />
| [http://openvz.ca/ OpenVZ.ca] || 2010.05 i686/x86_64 || OpenVZ || Canada ||<br />
|-<br />
| [https://www.proplay.biz/ proPlay.de] || 2011.10 i686/x86_64 || OpenVZ, KVM || Germany ||<br />
|-<br />
| [http://www.rackspace.com/cloud/cloud_hosting_products/servers/ Rackspace Cloud] || 2012.08 || Xen || Chicago, Dallas, London || Billed per hour.<br />
|-<br />
| [http://www.ramhost.us RamHost.us] || 2012.12 || OpenVZ, KVM || Atlanta, England, Germany, Los Angeles || You can request a newer ISO on IRC.<br />
|-<br />
| [http://www.tilaa.nl/ Tilaa] || 2012.12 i686/x86_64 || KVM || Amsterdam ||<br />
|-<br />
| [https://www.transip.nl/ TransIP] || 2011.08 || KVM || Amsterdam ||<br />
|-<br />
| [http://www.xenvz.co.uk/ XenVZ] || 2009.12 x86_64 || OpenVZ, Xen || UK? ||<br />
|-<br />
| [http://www.virpus.com/ Virpus] || 2010.05 x86_64 || OpenVZ, Xen || Kansas City ||<br />
|-<br />
| [http://www.vmline.pl/ Vmline] || 2012.08.04-dual.iso || Xen-HVM || Poland - Kraków || [http://www.s-net.pl/en/ S-Net] reseller. It is probably impossible to install i686 due to the lack of xen_netfront and xen_blkfront modules.<br />
|-<br />
| [https://vps6.net/ VPS6.NET] || 2010.05 i686/x86_64 OpenVZ, 2012.01 x86_64 Xen || OpenVZ, Xen || Germany, Romania, Turkey, USA ||<br />
|-<br />
| [http://www.uk2.net/ UK2.net] || 2010.05 i686/x86_64 || Xen || United Kingdom || Appears to use a custom kernel; do not install the {{pkg|linux}} package.<br />
|}<br />
<br />
==Installation==<br />
<br />
===KVM===<br />
{{Expansion|Are there instructions specific to VPSes?}}<br />
See [[KVM#Preparing an (Arch) Linux guest]].<br />
<br />
===OpenVZ===<br />
{{Expansion|Move some of the [[#Troubleshooting]] instructions here.}}<br />
<br />
====Getting a 2010.05 Image Up To Date====<br />
<br />
These instructions assume you have a 2010.05 image from your VPS provider and want to bring it up to date. Most of the work involves preparing /lib for the symlink upgrade (glibc 2.16, and later filesystem 2013.01). If you are on a kernel older than 2.6.32, refer further down the page to get the glibc-vps repository working first (just add the repo, then follow these steps).<br />
<br />
To start, grab the latest busybox from http://busybox.net/downloads/binaries/latest/. This allows you to force-install glibc (temporarily losing /lib) without losing your OS, since busybox provides statically linked versions of the common Unix tools.<br />
<br />
{{bc|wget http://busybox.net/downloads/binaries/latest/busybox-i686<br />
chmod +x busybox-i686}}<br />
<br />
First off you can get a list of packages that own files in /lib with the following command:<br />
{{bc|<nowiki><br />
pacman -Qo /lib/* | cut -d' ' -f 5 | egrep -v 'glibc' | uniq | xargs<br />
</nowiki>}}<br />
<br />
For the current 2010.05 image that comes straight off of ibiru's page, these are the packages that needed to be reinstalled (to get their files out of /lib) in my case:<br />
<br />
{{bc|pacman -S acl attr util-linux-ng bzip2 libcap e2fsprogs libgcrypt libgpg-error udev readline ncurses pam pcre popt procps readline shadow e2fsprogs sysfsutils udev util-linux-ng sysvinit coreutils}}<br />
<br />
You may have to remove /lib/udev/devices/loop0 (a simple rm works).<br />
<br />
After the upgrade finishes, you must remove any extra empty directories in /lib (/lib/modules is the common offender):<br />
{{bc|rm -rf /lib/modules}}<br />
<br />
Install tzdata to fix some dependencies and remove /etc/profile.d/locale.sh:<br />
{{bc|pacman -S tzdata<br />
rm /etc/profile.d/locale.sh}}<br />
<br />
Remove /var/run (you should have nothing running that matters):<br />
{{bc|rm -rf /var/run}}<br />
<br />
Force-install glibc (this will pull in the latest filesystem package, but will break everything other than busybox):<br />
{{bc|pacman -S --force glibc}}<br />
<br />
Now you will have a broken system, so the first thing to do is recreate /lib as a symlink to /usr/lib using busybox's ln:<br />
{{bc|./busybox-i686 ln -s /usr/lib /lib}}<br />
<br />
And you should have a fully functional system where you can now update pacman.<br />
<br />
{{bc|pacman -S pacman; pacman-key --init; pacman-key --populate archlinux; pacman-db-upgrade; pacman -Syy}}<br />
<br />
Now, update initscripts to get iproute2:<br />
<br />
{{bc|pacman -S initscripts}}<br />
<br />
Install makedev:<br />
{{bc|pacman -S makedev}}<br />
<br />
Add the following to your /etc/rc.local:<br />
{{bc|/usr/sbin/MAKEDEV tty<br />
/usr/sbin/MAKEDEV pty}}<br />
<br />
Comment the following lines in /etc/inittab:<br />
{{bc|#c1:2345:respawn:/sbin/agetty -8 -s 38400 tty1 linux<br />
#c2:2345:respawn:/sbin/agetty -8 -s 38400 tty2 linux<br />
#c3:2345:respawn:/sbin/agetty -8 -s 38400 tty3 linux<br />
#c4:2345:respawn:/sbin/agetty -8 -s 38400 tty4 linux<br />
#c5:2345:respawn:/sbin/agetty -8 -s 38400 tty5 linux<br />
#c6:2345:respawn:/sbin/agetty -8 -s 38400 tty6 linux}}<br />
<br />
Finally, you should be able to upgrade the whole system:<br />
<br />
{{bc|pacman -Su}}<br />
<br />
You may run into some issues with krb5 and heimdal, as krb5 no longer provides+conflicts+replaces heimdal (https://projects.archlinux.org/svntogit/packages.git/commit/trunk/PKGBUILD?h=packages/krb5&id=f5e6d77fd14ced15ebf5b6a78a7c76e0db0625f7). The old openssh depends on heimdal (and the new openssh depends on krb5), so force install krb5, then upgrade openssh, then remove heimdal and reinstall krb5.<br />
<br />
{{bc|pacman -S --force krb5<br />
pacman -S openssh openssl<br />
pacman -R heimdal<br />
pacman -S krb5}}<br />
<br />
Fix syslog-ng: in {{ic|/etc/syslog-ng/syslog-ng.conf}} set the log source to {{ic|unix-dgram("/dev/log")}}, and in {{ic|/etc/conf.d/syslog-ng}} add {{ic|--no-caps}} to both the check and run arguments.<br />
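<br />
For reference, a minimal sketch of the source change in {{ic|/etc/syslog-ng/syslog-ng.conf}} (only the {{ic|unix-dgram}} line matters here; the surrounding statement is illustrative):<br />
{{bc|<nowiki><br />
source src {<br />
    unix-dgram("/dev/log");<br />
    internal();<br />
};<br />
</nowiki>}}<br />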
<br />
Before rebooting, make sure your rc.conf is not messed up with broken network definitions, or else be sure serial/console access to your VPS works.<br />
<br />
===Xen===<br />
{{Expansion|Are there instructions specific to VPSes?}}<br />
See [[Xen#Arch as Xen guest (PVHVM mode)]] and/or [[Xen#Arch as Xen guest (PV mode)]].<br />
<br />
==Troubleshooting==<br />
===OpenVZ: kernel too old for glibc===<br />
Are you on a virtual private server (VPS) with an old kernel that broke your system? Are you using OpenVZ?<br />
<br />
Check your kernel version with:<br />
<br />
{{bc|uname -r}}<br />
<br />
If your kernel is older than 2.6.32, you will need a custom version of glibc ([https://www.archlinux.org/news/minimum-kernel-requirement-2632/ because of dependencies in glibc]).<br />
<br />
Arch Template Used: https://dev.archlinux.org/~ibiru/openvz/2010.05/arch-2010.05-i686-minimal.tar.gz<br />
<br />
{{Note|for installs that have not been updated to glibc-2.16, it will save you lots of time and prevent major breakage to do:<br />
pacman -U https://dev.archlinux.org/~ibiru/openvz/glibc-vps/i686/glibc-2.16.0-101-i686.pkg.tar.xz<br />
or<br />
pacman -U https://dev.archlinux.org/~ibiru/openvz/glibc-vps/x86_64/glibc-2.16.0-101-x86_64.pkg.tar.xz<br />
Add a single "-d" (to skip dependency checks) if needed. ''The instructions below assume that this has been done.''<br />
}}<br />
<br />
<br />
The following is based on similar instructions from [[DeveloperWiki:usrlib]].<br />
<br />
Try doing the following to fix it:<br />
<br />
1) Edit {{ic|/etc/pacman.conf}} and add the following repository '''ABOVE [core]''':<br />
<br />
for 32-bit:<br />
<br />
{{bc|<nowiki>[glibc-vps]<br />
Server = https://dev.archlinux.org/~ibiru/openvz/glibc-vps/i686</nowiki>}}<br />
<br />
for 64-bit:<br />
<br />
{{bc|<nowiki>[glibc-vps]<br />
Server = https://dev.archlinux.org/~ibiru/openvz/glibc-vps/x86_64</nowiki>}}<br />
<br />
2) Then run {{ic|pacman -Syy}} followed by {{ic|pacman -Syu}}. You will be notified to upgrade pacman first.<br />
<br />
3) Upgrade the [[pacman]] database by running {{ic|pacman-db-upgrade}} as root.<br />
<br />
4) Edit {{ic|/etc/pacman.conf.pacnew}} (new pacman config file) and add the following repository '''ABOVE [core]''':<br />
<br />
{{bc|<nowiki>[glibc-vps]<br />
Server = https://dev.archlinux.org/~ibiru/openvz/glibc-vps/$arch</nowiki>}}<br />
<br />
5) Replace {{ic|/etc/pacman.conf}} with {{ic|/etc/pacman.conf.pacnew}} (run as root):<br />
<br />
{{bc|mv /etc/pacman.conf.pacnew /etc/pacman.conf}}<br />
<br />
6) Upgrade your whole system again with {{ic|pacman -Syu}}.<br />
<br />
If you get the following or similar error:<br />
{{bc|initscripts: /etc/profile.d/locale.sh exists in filesystem}}<br />
<br />
Simply delete that file (e.g., {{ic|rm -f /etc/profile.d/locale.sh}}), then run {{ic|pacman -Syu}} again.<br />
<br />
<br />
If you get the following or similar error:<br />
{{bc|filesystem: /etc/mtab exists in filesystem}}<br />
<br />
Run {{ic|pacman -S filesystem --force}}<br />
<br />
<br />
If you get the following or similar error:<br />
{{bc|libusb-compat: /usr/bin/libusb-config exists in filesystem}}<br />
<br />
Run {{ic|pacman -S libusb}} and then {{ic|pacman -S libusb-compat}}<br />
<br />
7) Before rebooting, you need to [[pacman|install]] the {{Pkg|makedev}} package by running {{ic|pacman -S makedev}}.<br />
<br />
8) Add MAKEDEV to {{ic|/etc/rc.local}}:<br />
<br />
{{bc|/usr/sbin/MAKEDEV tty<br />
/usr/sbin/MAKEDEV pty}}<br />
<br />
9) Edit {{ic|/etc/inittab}}, comment out the following lines (otherwise you will see errors in {{ic|/var/log/errors.log}}):<br />
<br />
{{bc|#c1:2345:respawn:/sbin/agetty -8 -s 38400 tty1 linux<br />
#c2:2345:respawn:/sbin/agetty -8 -s 38400 tty2 linux<br />
#c3:2345:respawn:/sbin/agetty -8 -s 38400 tty3 linux<br />
#c4:2345:respawn:/sbin/agetty -8 -s 38400 tty4 linux<br />
#c5:2345:respawn:/sbin/agetty -8 -s 38400 tty5 linux<br />
#c6:2345:respawn:/sbin/agetty -8 -s 38400 tty6 linux}}<br />
<br />
10) To enable the use of the {{ic|hostname}} command, [[pacman|install]] the package {{Pkg|inetutils}} from the [[Official Repositories|official repositories]]. <br />
<br />
11) Remove the disabling of the SysRq key, since SysRq is blocked by OpenVZ and the setting causes errors.<br />
<br />
Edit {{ic|/etc/sysctl.conf}}, comment out the following line:<br />
{{bc|1=#kernel.sysrq = 0}}<br />
<br />
12) Save and reboot.<br />
<br />
Enjoy & thank ioni if you happen to be in #archlinux<br />
<br />
===Moving your VPS from network configuration in rc.conf to netcfg (tested with OpenVZ)===<br />
<br />
1) Install netcfg<br />
<br />
{{bc|pacman -S netcfg}}<br />
<br />
2) Create a netcfg configuration file {{ic|/etc/network.d/venet}}<br />
<br />
{{bc|1=CONNECTION='ethernet'<br />
DESCRIPTION='VPS venet connection'<br />
INTERFACE='venet0'<br />
IP='static'<br />
IPCFG=(<br />
#IPv4 address<br />
'addr add xxx.xxx.xxx.xxx/32 broadcast 0.0.0.0 dev venet0'<br />
#IPv4 route<br />
'route add default dev venet0'<br />
#IPv6 address<br />
'addr add xxxx:xx:xx::x/128 dev venet0'<br />
#IPv6 route<br />
'-6 route add default dev venet0'<br />
)<br />
DNS=('xxx.xxx.xxx.xxx' 'xxx.xxx.xxx.xxx')}}<br />
<br />
3) Edit your netcfg main conf file {{ic|/etc/conf.d/netcfg}}<br />
<br />
{{bc|1=NETWORKS=(venet)<br />
WIRED_INTERFACE="venet0"}}<br />
<br />
4) Try your new setup<br />
<br />
{{bc|rc.d stop network && ip addr flush venet0 && netcfg venet}}<br />
<br />
Your VPS should still be connected and have its IP addresses set correctly. (Check with {{ic|ip a}})<br />
<br />
DO NOT proceed to the next step if this is not the case.<br />
<br />
5) Make your new setup survive reboots<br />
<br />
In the {{ic|DAEMONS}} array in {{ic|/etc/rc.conf}}, replace {{ic|network}} with {{ic|net-profiles}}.<br />
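<br />
For example, the resulting line might look like this (the other daemons shown are only placeholders for whatever you already run):<br />
{{bc|1=DAEMONS=(syslog-ng net-profiles netfs crond sshd)}}<br />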
<br />
Remove all networking information that is in {{ic|/etc/rc.conf}}.<br />
{{bc|reboot}}<br />
<br />
===SSH fails: PTY allocation request failed on channel 0===<br />
<br />
Some VPSes have an outdated {{ic|rc.sysinit}}. You may be able to log in via the serial console or with<br />
<br />
{{bc|> ssh root@broken.server '/bin/bash -i'}}<br />
<br />
Then run the following:<br />
<br />
{{bc|# mv /etc/rc.sysinit.pacnew /etc/rc.sysinit<br />
# reboot}}<br />
<br />
Once it’s working, you should be able to comment out the {{ic|udevd_modprobe}} line in {{ic|rc.sysinit}} to save a bit of RAM the next time you reboot.<br />
<br />
If the above doesn’t work, take a look at<br />
http://fsk141.com/fix-pty-allocation-request-failed-on-channel-0.<br />
<br />
QEMU<br />
<br />
[[Category:Emulators]]<br />
[[Category:Virtualization]]<br />
[[de:Qemu]]<br />
[[fr:Qemu]]<br />
[[zh-CN:QEMU]]<br />
<br />
{{Out of date|[https://www.archlinux.org/news/deprecation-of-net-tools net-tools] is deprecated. [[QEMU#Networking|Networking]] section needs updating. |QEMU#Networking}}<br />
<br />
From the [http://wiki.qemu.org/Main_Page QEMU about page],<br />
<br />
QEMU is a generic and open source machine emulator and virtualizer.<br />
<br />
When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your own PC). By using dynamic translation, it achieves very good performance.<br />
<br />
When used as a virtualizer, QEMU achieves near native performance by executing the guest code directly on the host CPU. QEMU supports virtualization when executing under the Xen hypervisor or using the KVM kernel module in Linux. When using KVM, QEMU can virtualize x86, server and embedded PowerPC, and S390 guests.<br />
<br />
== Installing QEMU ==<br />
<br />
Depending on your needs, you can choose to install either {{Pkg|qemu}} or {{Pkg|qemu-kvm}} from the [extra] repository. {{Pkg|qemu}} includes support for emulating a wide variety of machine architectures, while {{Pkg|qemu-kvm}} only supports virtualizing your host architecture using [[KVM]]. It is strongly recommended to use KVM whenever possible. <br />
<br />
In the current version of QEMU (>= 0.15.0), you can use KVM with the {{Pkg|qemu}} package, if supported by your processor and kernel, provided that you start QEMU with the {{ic|-enable-kvm}} argument; this was not the case for older versions of QEMU (< 0.15.0), when not all KVM-related functions had been merged into upstream QEMU.<br />
<br />
== Creating a hard disk image==<br />
To run QEMU you will need a hard disk image, unless you are booting a live system from CD-ROM or the network (and not doing so to install an operating system to a hard disk image). A hard disk image is a file which stores the contents of the emulated hard disk. <br />
<br />
A hard disk image may simply contain the literal contents, byte for byte, of the hard disk. This is usually called ''raw'' format, and it provides the least I/O overhead, although the images may take up a large amount of space.<br />
<br />
Alternatively, the hard disk image can be in a format such as ''qcow2'' that can save enormous amounts of space by only allocating space to the image file when the guest operating system actually writes to those sectors on its virtual hard disk. The image appears as the full size to the guest operating system, even though it may take up only a very small amount of space on the host system.<br />
<br />
QEMU provides the {{ic|qemu-img}} command to create hard disk images. The following command creates a 4GB image named {{ic|myimage.qcow2}} in the qcow2 format:<br />
$ qemu-img create -f qcow2 myimage.qcow2 4G<br />
<br />
You may use {{ic|-f raw}} to create a raw disk instead, although you can also do so simply by creating a file of the needed size using {{ic|dd}} or {{ic|fallocate}}.<br />
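<br />
For example, either of the following creates a 4 GiB sparse raw image (the file name is arbitrary):<br />
 $ fallocate -l 4G myimage.raw<br />
 $ dd if=/dev/zero of=myimage.raw bs=1M count=0 seek=4096<br />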
<br />
== Preparing the installation media ==<br />
<br />
To install an operating system into your disk image, you need the installation media (e.g. CD-ROM, floppy, or ISO image) for the operating system.<br />
<br />
{{Tip|If you would like to run an Arch Linux virtual machine, you can install it using the [http://archlinux.org/download/ official installation media for Arch Linux]. It is also possible to set up an Arch Linux virtual machine without the installation media, provided that your host machine is running Arch Linux, although this is more difficult; it is detailed [[Creating Arch Linux disk image#Install Arch Linux in a disk image without the installation media|here]].}}<br />
<br />
The installation media should not be mounted because QEMU accesses the media directly. Also, if using physical media (e.g. CD-ROM or floppy), it is a good idea to first dump the media to a file because this both improves performance and does not require you to have direct access to the devices (that is, you can run QEMU as a regular user without having to change access permissions on the media's device file). For example, if the CD-ROM device node is named {{ic|/dev/cdrom}}, you can dump it to a file with the command:<br />
# dd if=/dev/cdrom of=mycdimg.iso<br />
<br />
Do the same for floppies:<br />
# dd if=/dev/fd0 of=myfloppy.img<br />
<br />
== Installing the operating system==<br />
<br />
To install the operating system on the disk image, you must attach both the disk image and the installation media to the virtual machine, and have it boot from the installation media.<br />
<br />
This is the first time you will need to start the emulator. By default, QEMU will show the virtual machine's video output in a window. <br />
One thing to keep in mind: when you click inside the QEMU window, the mouse pointer is grabbed. To release it press {{Keypress|Ctrl+Alt}}.<br />
<br />
{{Warning|QEMU should never be run as root. If you must launch it in a script as root, you should use the {{ic|-runas}} option to make QEMU drop root privileges.}}<br />
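<br />
For example, to drop privileges to an unprivileged user (the user name is only an example):<br />
 # qemu -runas nobody [disk_image]<br />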
<br />
=== Standard method (software emulation)===<br />
<br />
On i386 systems, to install from a bootable ISO file as CD-ROM, run QEMU with:<br />
$ qemu -cdrom <iso_image> -boot d <qemu_image><br />
<br />
On x86_64 systems:<br />
$ qemu-system-x86_64 -cdrom <iso_image> -boot d <qemu_image><br />
<br />
See the parameters in {{ic|qemu --help}} for loading other media types such as floppy or disk images, or physical drives.<br />
<br />
After the operating system has finished installing, the QEMU image can be booted directly, for example on i386:<br />
<br />
$ qemu <qemu_image><br />
<br />
{{Tip|By default only 128MB of memory is assigned to the machine, the amount of memory can be adjusted with the -m switch, for example {{ic|-m 512}}.}}<br />
<br />
=== KVM method (hardware virtualization) ===<br />
<br />
KVM, short for Kernel-based Virtual Machine, is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It relies on the kernel modules {{ic|kvm}} and either {{ic|kvm-intel}} or {{ic|kvm-amd}}. KVM interfaces via {{ic|/dev/kvm}}, which requires users to be part of the {{ic|kvm}} group. <br />
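<br />
For example, to add a user to the {{ic|kvm}} group (the user name is only an example):<br />
 # gpasswd -a archie kvm<br />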
<br />
When using the {{Pkg|qemu-kvm}} package the command for all architectures is:<br />
$ qemu<br />
<br />
The command to use with the standard {{Pkg|qemu}} package is:<br />
$ qemu -enable-kvm<br />
<br />
There is a dedicated [[KVM]] wiki page with more detailed information and instructions.<br />
<br />
{{Note|See [[#Windows-specific notes]] if you are installing Windows in your virtual machine.}}<br />
<br />
{{Note|If you need to replace floppies or CDs as part of the installation process, you can use the QEMU machine monitor (press {{Keypress|Ctrl-Alt-2}} in the virtual machine's window) to remove and attach storage devices to a virtual machine. Type {{ic|info block}} to see the block devices, and use the {{ic|change}} command to swap out a device. Press {{Keypress|Ctrl-Alt-1}} to go back to the virtual machine.}}<br />
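<br />
For example, in the monitor (the CD device name below is only an assumed example; use whatever name {{ic|info block}} reports):<br />
 (qemu) info block<br />
 (qemu) change ide1-cd0 /path/to/other.iso<br />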
<br />
== Overlay images ==<br />
<br />
A good idea is to use overlay images. This way you can a create hard disk image once and tell QEMU to store changes in an external file.<br />
This makes it easy to revert the virtual machine's disk to a previous state.<br />
<br />
To create an overlay image, type:<br />
$<nowiki> qemu-img create -b [[base_image]] -f qcow2 [[overlay_image]]</nowiki><br />
<br />
After that you can run qemu with:<br />
$ qemu [overlay_image]<br />
<br />
or if you are on a x86_64 system:<br />
$ qemu-system-x86_64 [overlay_image]<br />
<br />
and the original image will be left untouched. One hitch: the base image cannot be renamed or moved, because the overlay stores the base image's full path.<br />
<br />
== Moving data between host and guest OS ==<br />
<br />
=== Network ===<br />
<br />
Data can be shared between the host and guest OS using any network protocol that can transfer files, such as [[NFS]], [[Samba|SMB]], NBD, HTTP, [[Very Secure FTP Daemon|FTP]], or [[Secure Shell|SSH]], provided that you have set up the network appropriately and enabled the appropriate services.<br />
<br />
The default user-mode networking allows the guest to access the host OS at the IP address 10.0.2.2. Any servers that you are running on your host OS, such as a SSH server or SMB server, will be accessible at this IP address. So on the guests, you can mount directories exported on the host via [[Samba|SMB]] or [[NFS]], or you can access the host's HTTP server, etc. <br />
It will not be possible for the host OS to access servers running on the guest OS, but this can be done with other network configurations (see [[#Tap networking with QEMU]]).<br />
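<br />
For example, from inside the guest you could reach an SSH server running on the host with (the user name is only an example):<br />
 $ ssh archie@10.0.2.2<br />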
<br />
=== QEMU's built-in SMB server ===<br />
<br />
{{Note|QEMU's "built-in" SMB server is currently (as of qemu-1.0.1-1) broken because it does not specify the {{ic|state_directory}} option in the {{ic|smb.conf}} file it writes. This issue is fixed in upstream QEMU.}}<br />
<br />
QEMU's documentation says it has a "built-in" SMB server, but actually it just starts up [[Samba]] with an automatically generated configuration file and makes it accessible to the guest at a different IP address (10.0.2.4 by default). This only works for user networking, and this isn't necessarily very useful since the guest can also access the normal [[Samba]] service on the host if you have set up shares on it.<br />
<br />
To enable this feature, start QEMU with a command like:<br />
$ qemu [hd_image] -net nic -net user,smb=/path/to/shared/dir<br />
<br />
where {{ic|/path/to/shared/dir}} is a directory that you want to share between the guest and host.<br />
<br />
Then, in the guest, you will be able to access the shared directory on the host 10.0.2.4 with the share name "qemu". For example, in Windows Explorer you would go to {{ic|\\10.0.2.4\qemu}}.<br />
<br />
=== Mounting a partition inside a raw disk image ===<br />
<br />
When the virtual machine is not running, it is possible to mount partitions that are inside a raw disk image file by setting them up as loopback devices. This does not work with disk images in special formats, such as qcow2, although those can be mounted using {{ic|qemu-nbd}}.<br />
<br />
{{Warning|You must make sure to unmount the partitions before running the virtual machine again. Otherwise data corruption could occur, unless you had mounted the partitions read-only.}}<br />
<br />
==== With manually specifying byte offset ====<br />
<br />
One way to mount a disk image partition is to mount the disk image at a certain offset using a command like the following:<br />
# mount -o loop,offset=32256 [hd_image] [tmp_dir]<br />
<br />
The {{ic|<nowiki>offset=32256</nowiki>}} option is actually passed to the {{ic|losetup}} program to set up a loopback device that starts at byte offset 32256 of the file and continues to the end. This loopback device is then mounted. You may also use the {{ic|sizelimit}} option to specify the exact size of the partition, but this is usually unnecessary.<br />
<br />
Depending on your disk image, the needed partition may not start at offset 32256. Run {{ic|fdisk -l [hd_image]}} to see the partitions in the image. fdisk gives the start and end offsets in 512-byte sectors, so multiply by 512 to get the correct offset to pass to {{ic|mount}}.<br />
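<br />
For example, if fdisk reports that the partition starts at sector 2048, the byte offset is 2048 * 512 = 1048576:<br />
 # mount -o loop,offset=1048576 [hd_image] [tmp_dir]<br />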
<br />
==== With loop module autodetecting partitions ====<br />
<br />
The Linux loop driver actually supports partitions in loopback devices, but it is disabled by default. To enable it, do the following:<br />
<br />
* Get rid of all your loopback devices (unmount all mounted images, etc.).<br />
* Unload the loop [[Kernel modules|module]].<br />
# modprobe -r loop<br />
* Load the loop [[Kernel modules|module]] with the {{ic|max_part}} parameter set.<br />
# modprobe loop max_part=15<br />
<br />
{{Tip|You can put an entry in {{ic|/etc/modprobe.d}} to load the loop module with {{ic|<nowiki>max_part=15</nowiki>}} every time, or you can put {{ic|<nowiki>loop.max_part=15</nowiki>}} on the kernel command line, depending on whether you have the {{ic|loop.ko}} module built into your kernel or not.}}<br />
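<br />
For example, a file such as {{ic|/etc/modprobe.d/loop.conf}} (the file name is arbitrary) containing:<br />
 options loop max_part=15<br />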
<br />
Set up your image as a loopback device:<br />
# losetup -f [os_image]<br />
<br />
Then, if the device created was {{ic|/dev/loop0}}, additional devices {{ic|/dev/loop0pX}} will have been automatically created, where X is the number of the partition. These partition loopback devices can be mounted directly. For example:<br />
# mount /dev/loop0p1 [tmp_dir]<br />
<br />
==== With kpartx ====<br />
<br />
'''kpartx''' from the {{Pkg|multipath-tools}} package can read a partition table on a device and create a new device for each partition. For example:<br />
# kpartx -a /dev/loop0<br />
<br />
=== Mounting qcow2 image ===<br />
You may mount a qcow2 image using {{ic|qemu-nbd}}. See [http://en.wikibooks.org/wiki/QEMU/Images#Mounting_an_image_on_the_host Wikibooks].<br />
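<br />
A minimal sketch (device, partition and mount point names are examples):<br />
 # modprobe nbd max_part=8<br />
 # qemu-nbd -c /dev/nbd0 [qcow2_image]<br />
 # mount /dev/nbd0p1 [tmp_dir]<br />
When finished, unmount and disconnect the device:<br />
 # umount [tmp_dir]<br />
 # qemu-nbd -d /dev/nbd0<br />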
<br />
=== Using any real partition as the single primary partition of a hard disk image ===<br />
<br />
Sometimes, you may wish to use one of your system partitions from within QEMU. Using a raw partition for a virtual machine will improve performance, as the read and write operations do not go through the filesystem layer on the physical host. Such a partition also provides a way to share data between the host and guest.<br />
<br />
In Arch Linux, device files for raw partitions are, by default, owned by ''root'' and the ''disk'' group. If you would like to have a non-root user be able to read and write to a raw partition, you need to change the owner of the partition's device file to that user.<br />
<br />
{{Warning|Although it is possible, it is not recommended to allow virtual machines to alter critical data on the host system, such as the root partition.}}<br />
<br />
{{Warning|You must not mount a filesystem on a partition read-write on both the host and the guest at the same time. Otherwise, data corruption will result.}}<br />
<br />
After doing so, you can attach the partition to a QEMU virtual machine as a virtual disk.<br />
<br />
However, things are a little more complicated if you want to have the ''entire'' virtual machine contained in a partition. In that case, there would be no disk image file to actually boot the virtual machine since you cannot install a bootloader to a partition that is itself formatted as a filesystem and not as a partitioned device with a MBR. Such a virtual machine can be booted either by specifying the [[Kernels|kernel]] and [[initramfs|initrd]] manually, or by simulating a disk with a MBR by using linear [[RAID]].<br />
<br />
==== By specifying kernel and initrd manually ====<br />
<br />
QEMU supports loading [[Kernels|Linux kernels]] and [[initramfs|init ramdisks]] directly, thereby circumventing bootloaders such as [[GRUB]]. It then can be launched with the physical partition containing the root filesystem as the virtual disk, which will not appear to be partitioned. This is done by issuing a command similar to the following:<br />
$ qemu -kernel /boot/vmlinuz-linux -initrd /boot/initramfs-linux.img -append root=/dev/sda /dev/sda3<br />
<br />
In the above example, the physical partition being used for the guest's root filesystem is {{ic|/dev/sda3}} on the host, but it shows up as {{ic|/dev/sda}} on the guest.<br />
<br />
You may, of course, specify any kernel and initrd that you want, and not just the ones that come with Arch Linux.<br />
<br />
==== Simulate virtual disk with MBR using linear RAID ====<br />
<br />
A more complicated way to have a virtual machine use a physical partition, while keeping that partition formatted as a filesystem and not having the guest re-partition it as if it were a whole disk, is to simulate an MBR for it so that it can boot using a bootloader such as GRUB.<br />
<br />
You can do this using software [[RAID]] in linear mode (you need the linear.ko kernel driver) and a loopback device: the trick is to dynamically prepend a master boot record (MBR) to the real partition you wish to embed in a QEMU raw disk image.<br />
<br />
Suppose you have a plain, unmounted {{ic|/dev/hdaN}} partition with some filesystem on it you wish to make part of a QEMU disk image. First, you create some small file to hold the MBR:<br />
$ dd if=/dev/zero of=/path/to/mbr count=32<br />
<br />
Here, a 16 KB (32 * 512 bytes) file is created. It is important not to make it too small (even if the MBR only needs a single 512 bytes block), since the smaller it will be, the smaller the chunk size of the software RAID device will have to be, which could have an impact on performance. Then, you setup a loopback device to the MBR file:<br />
# losetup -f /path/to/mbr<br />
<br />
Let us assume the resulting device is {{ic|/dev/loop0}} (assuming no other loopback devices were already in use). The next step is to create the "merged" MBR + {{ic|/dev/hdaN}} disk image using software RAID:<br />
# modprobe linear<br />
# mdadm --build --verbose /dev/md0 --chunk=16 --level=linear --raid-devices=2 /dev/loop0 /dev/hdaN<br />
<br />
The resulting {{ic|/dev/md0}} is what you will use as a QEMU raw disk image (do not forget to set the permissions so that the emulator can access it). The last (and somewhat tricky) step is to set the disk configuration (disk geometry and partitions table) so that the primary partition start point in the MBR matches the one of {{ic|/dev/hdaN}} inside {{ic|/dev/md0}} (an offset of exactly 16 * 512 = 16384 bytes in this example). Do this using {{ic|fdisk}} on the host machine, not in the emulator: the default raw disk detection routine from QEMU often results in non kilobyte-roundable offsets (such as 31.5 KB, as in the previous section) that cannot be managed by the software RAID code. Hence, from the host:<br />
# fdisk /dev/md0<br />
<br />
Press {{Keypress|X}} to enter the expert menu. Set number of 's'ectors per track so that the size of one cylinder matches the size of your MBR file. For two heads and a sector size of 512, the number of sectors per track should be 16, so we get cylinders of size 2x16x512=16k.<br />
<br />
Now, press {{Keypress|R}} to return to the main menu. <br />
<br />
Press {{Keypress|P}} and check that the cylinder size is now 16k.<br />
<br />
Now, create a single primary partition corresponding to {{ic|/dev/hdaN}}. It should start at cylinder 2 and end at the end of the disk (note that the number of cylinders now differs from what it was when you entered fdisk).<br />
<br />
Finally, 'w'rite the result to the file: you are done. You now have a partition you can mount directly from your host, as well as part of a QEMU disk image: <br />
<br />
$ qemu -hdc /dev/md0 [...]<br />
<br />
You can of course safely set any bootloader on this disk image using QEMU, provided the original {{ic|/dev/hdaN}} partition contains the necessary tools.<br />
<br />
==Networking==<br />
===User-mode networking===<br />
<br />
By default, without any {{ic|-net}} arguments, QEMU will use user-mode networking with a built-in DHCP server. Your virtual machines will be assigned an IP address when they run their DHCP client, and they will be able to access the physical host's network through IP masquerading done by QEMU. This only works with the TCP and UDP protocols, so ICMP, including {{ic|ping}}, will not work. <br />
<br />
This default configuration allows your virtual machines to easily access the Internet, provided that the host is connected to it, but the virtual machines will not be directly visible on the external network, nor will virtual machines be able to talk to each other if you start up more than one concurrently.<br />
<br />
QEMU's user-mode networking can offer more capabilities such as built-in TFTP or SMB servers, or attaching guests to virtual LANs so that they can talk to each other. See the QEMU documentation on the {{ic|-net user}} flag for more details.<br />
<br />
However, user-mode networking has limitations in both utility and performance. More advanced network configurations require the use of tap devices or other methods.<br />
<br />
=== Tap networking with QEMU ===<br />
==== Basic idea ====<br />
<br />
[http://en.wikipedia.org/wiki/TUN/TAP Tap devices] are a Linux kernel feature that allows you to create virtual "tap" network interfaces that appear as real network interfaces. Packets sent to a "tap" interface are delivered to a userspace program, such as QEMU, that has bound itself to the interface.<br />
<br />
QEMU can use tap networking for a virtual machine so that packets sent to the tap interface will be sent to the virtual machine and appear as coming from a network interface (usually an Ethernet interface) in the virtual machine. Conversely, everything that the virtual machine sends through its network interface will appear on the tap interface.<br />
<br />
Tap devices are supported by the Linux bridge drivers, so it is possible to bridge together tap devices with each other and possibly with other host interfaces such as eth0. This is desirable if you want your virtual machines to be able to talk to each other, or if you want other machines on your LAN to be able to talk to the virtual machines.<br />
<br />
==== Bridge virtual machines to external network ====<br />
<br />
The following describes how to bridge a virtual machine to a host interface such as eth0, which is probably the most common configuration. This configuration makes it appear that the virtual machine is located directly on the external network, on the same Ethernet segment as the physical host machine.<br />
<br />
{{Warning|Beware that since your virtual machines will appear directly on the external network, this may expose them to attack. Depending on what resources your virtual machines have access to, you may need to take all the precautions you normally would take in securing a computer to secure your virtual machines.}}<br />
<br />
We will replace the normal Ethernet adapter with a bridge adapter and bind the normal Ethernet adapter to it. See http://en.gentoo-wiki.com/wiki/KVM#Networking_2.<br />
<br />
1. Make sure that the following packages are installed:<br />
*{{Pkg|bridge-utils}} (provides {{ic|brctl}}, to manipulate bridges)<br />
*{{Pkg|uml_utilities}} (provides {{ic|tunctl}}, to manipulate taps)<br />
<br />
2. Enable IPv4 forwarding by changing {{ic|<nowiki>net.ipv4.ip_forward = 0</nowiki>}} to {{ic|<nowiki>net.ipv4.ip_forward = 1</nowiki>}} in {{ic|<nowiki>/etc/sysctl.conf</nowiki>}}.<br />
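<br />
To apply the setting immediately without rebooting, you can also run:<br />
 # echo 1 > /proc/sys/net/ipv4/ip_forward<br />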
<br />
3. Add {{ic|bridge}} and {{ic|tun}} to your {{ic|MODULES}} array in {{ic|/etc/rc.conf}}:<br />
<br />
MODULES=( ... bridge tun)<br />
<br />
4. Configure your bridge {{ic|br0}} to have your real Ethernet adapter (assuming {{ic|eth0}} for the rest of this guide) in it, in {{ic|/etc/conf.d/bridges}}:<br />
bridge_br0="eth0"<br />
control_br0="setfd br0 0"<br />
BRIDGE_INTERFACES=(br0)<br />
<br />
{{Note|This is not described anywhere, but adding the {{ic|control_br0}} line is vital for the bridge to work! For more details look here: {{bug|16625}}.}}<br />
<br />
5. Change your networking configuration so that you just bring up your real Ethernet adapter without configuring it, allowing real configuration to happen on the bridge interface. In {{ic|/etc/rc.conf}}:<br />
eth0="eth0 up"<br />
br0="dhcp"<br />
INTERFACES=(eth0 br0)<br />
<br />
Remember, especially if you are doing DHCP, it is essential that the bridge comes up '''after''' the real adapter, otherwise the bridge will not be able to talk to anything to get a DHCP address!<br />
<br />
If you have been giving eth0 a static IP address rather than using DHCP, give br0 similar settings:<br />
<br />
{{ic|/etc/rc.conf}}:<br />
eth0="eth0 0.0.0.0"<br />
br0="br0 192.168.0.3 netmask 255.255.255.0 broadcast 192.168.0.255"<br />
INTERFACES=(eth0 br0)<br />
gateway="default gw 192.168.0.1"<br />
ROUTES=(gateway)<br />
<br />
and then in {{ic|/etc/resolv.conf}}:<br />
domain lan<br />
nameserver 192.168.0.1<br />
<br />
6. Install the script that QEMU uses to bring up the tap adapter in {{ic|/etc/qemu-ifup}} with root:kvm 750 permissions:<br />
#!/bin/sh<br />
<br />
echo "Executing /etc/qemu-ifup"<br />
echo "Bringing up $1 for bridged mode..."<br />
sudo /sbin/ip link set $1 up promisc on<br />
echo "Adding $1 to br0..."<br />
sudo /usr/sbin/brctl addif br0 $1<br />
sleep 2<br />
<br />
7. Install the script that QEMU uses to bring down the tap adapter in {{ic|/etc/qemu-ifdown}} with root:kvm 750 permissions:<br />
#!/bin/sh<br />
<br />
echo "Executing /etc/qemu-ifdown"<br />
sudo /sbin/ip link set $1 down<br />
sudo /usr/sbin/brctl delif br0 $1<br />
sudo /sbin/ip link delete dev $1<br />
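<br />
To set the ownership and permissions mentioned above on both scripts:<br />
 # chown root:kvm /etc/qemu-ifup /etc/qemu-ifdown<br />
 # chmod 750 /etc/qemu-ifup /etc/qemu-ifdown<br />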
<br />
8. Use {{ic|visudo}} to add the following to your {{ic|sudoers}} file:<br />
Cmnd_Alias QEMU=/sbin/ifconfig,/sbin/modprobe,/usr/sbin/brctl,/usr/bin/tunctl<br />
%kvm ALL=NOPASSWD: QEMU<br />
<br />
9. Make sure the user(s) wishing to use this new functionality are in the {{ic|kvm}} group. Exit and log in again if necessary.<br />
<br />
10. Launch QEMU using the following {{ic|run-qemu}} script:<br />
#!/bin/bash<br />
USERID=`whoami`<br />
IFACE=$(sudo tunctl -b -u $USERID)<br />
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" \<br />
$(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) \<br />
$(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))<br />
<br />
qemu-kvm -net nic,macaddr=$macaddr -net tap,ifname="$IFACE" $*<br />
<br />
sudo tunctl -d $IFACE &> /dev/null<br />
<br />
Then, to launch a VM, do something like this:<br />
$ run-qemu -hda myvm.img -m 512 -vga std<br />
<br />
11. If you cannot get a DHCP address on the host, it might be because bridged traffic is passed through [[Iptables|iptables]] by default. In that case (from http://www.linux-kvm.org/page/Networking):<br />
# cd /proc/sys/net/bridge<br />
# ls<br />
bridge-nf-call-arptables bridge-nf-call-iptables<br />
bridge-nf-call-ip6tables bridge-nf-filter-vlan-tagged<br />
# for f in bridge-nf-*; do echo 0 > $f; done<br />
<br />
And if you still cannot get networking to work, see: [[Linux_Containers#Bridge_device_setup]].<br />
<br />
==== Host-only networking ====<br />
<br />
If the bridge is given an IP address and traffic destined for it is allowed, but no "real" interface (e.g. eth0) is also connected to the bridge, then the virtual machines will be able to talk to each other and the physical host. However, they will not be able to talk to anything on the external network unless you set up IP masquerading on the physical host. This configuration is called "host-only" networking by other virtualization software such as [[VirtualBox]].<br />
<br />
You may want to have a DHCP server running on the bridge interface to service the virtual network. For example, to use the 172.20.0.1/16 subnet with [[Dnsmasq]] as the DHCP server:<br />
<br />
# ip addr add 172.20.0.1/16 dev br0<br />
# ip link set br0 up<br />
# dnsmasq --interface=br0 --bind-interfaces --dhcp-range=172.20.0.2,172.20.255.254<br />
<br />
==== Internal networking ====<br />
<br />
If you do not give the bridge an IP address and add an [[Iptables|iptables]] rule to drop all traffic to the bridge in the INPUT chain, then the virtual machines will be able to talk to each other, but not to the physical host or to the outside network. This configuration is called "internal" networking by other virtualization software such as [[VirtualBox]]. You will need to either assign static IP addresses to the virtual machines or run a DHCP server on one of them.<br />
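<br />
For example, assuming the bridge is named {{ic|br0}}, a rule like the following blocks traffic from the virtual machines to the host itself:<br />
 # iptables -A INPUT -i br0 -j DROP<br />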
<br />
==== Link-level address caveat ====<br />
<br />
By giving the {{ic|-net nic}} argument to QEMU, it will, by default, assign a virtual machine a network interface with the link-level address 52:54:00:12:34:56. However, when using bridged networking with multiple virtual machines, it is essential that each virtual machine has a unique link-level (MAC) address on the virtual machine side of the tap device. Otherwise, the bridge will not work correctly, because it will receive packets from multiple sources that have the same link-level address. This problem occurs even if the tap devices themselves have unique link-level addresses because the source link-level address is not rewritten as packets pass through the tap device.<br />
<br />
To solve this problem, the last 8 digits of the link-level address of the virtual NICs should be randomized, as in the script above, to make sure that each virtual machine has a unique link-level address.<br />
<br />
=== Networking with VDE2 ===<br />
==== What is VDE? ====<br />
VDE stands for Virtual Distributed Ethernet. It started as an enhancement of [[User-mode Linux|uml]]_switch. It is a toolbox to manage virtual networks.<br />
<br />
The idea is to create virtual switches, which are basically sockets, and to "plug" both physical and virtual machines into them. The configuration we show here is quite simple; however, VDE is much more powerful than this: it can plug virtual switches together, run them on different hosts and monitor the traffic in the switches. You are invited to read [http://wiki.virtualsquare.org/wiki/index.php/Main_Page the documentation of the project].<br />
<br />
The advantage of this method is you do not have to add sudo privileges to your users. Regular users should not be allowed to run modprobe.<br />
<br />
==== Basics ====<br />
VDE is in the [[Official Repositories|official repositories]], so:<br />
<br />
# pacman -S vde2<br />
<br />
In this configuration, we use tun/tap to create a virtual interface on the host. Load the {{ic|tun}} module (or add it to your {{ic|MODULES}} array in {{ic|[[rc.conf]]}}):<br />
<br />
# modprobe tun<br />
<br />
Now create the virtual switch:<br />
<br />
# vde_switch -tap tap0 -daemon -mod 660 -group kvm<br />
<br />
This line creates the switch, creates tap0, "plugs" it, and allows the users of the group {{ic|kvm}} to use it.<br />
<br />
The interface is plugged in but not configured yet. Just do it:<br />
<br />
# ifconfig tap0 192.168.100.254 netmask 255.255.255.0<br />
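<br />
With net-tools deprecated (see the note at the top of the page), the iproute2 equivalent would be:<br />
 # ip addr add 192.168.100.254/24 dev tap0<br />
 # ip link set tap0 up<br />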
<br />
That is all! Now, you just have to run KVM with these {{ic|-net}} options as a normal user:<br />
<br />
$ qemu-kvm -net nic -net vde -hda ...<br />
<br />
Configure your guest as you would do in a physical network. We gave them static addresses and let them access the WAN using IP forwarding and masquerading on our host:<br />
<br />
# echo "1" > /proc/sys/net/ipv4/ip_forward<br />
# iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE<br />
<br />
==== Putting it together ====<br />
I added this init script to run all this at start-up:<br />
<br />
#!/bin/bash <br />
<br />
. /etc/rc.conf<br />
. /etc/rc.d/functions<br />
PIDFILE=/var/run/vde_switch.pid  # pid file for vde_switch; the path is an example<br />
case "$1" in<br />
start)<br />
stat_busy "Starting VDE Switch"<br />
vde_switch -tap tap0 -daemon -mod 660 -pidfile $PIDFILE -group kvm<br />
if [ $? -gt 0 ]; then<br />
stat_fail<br />
else<br />
echo "1" > /proc/sys/net/ipv4/ip_forward && \<br />
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE && \<br />
ifconfig tap0 192.168.100.254 netmask 255.255.255.0 && \<br />
stat_done || stat_fail<br />
fi<br />
;;<br />
stop)<br />
stat_busy "Stopping VDE Switch"<br />
# err.. well, i should remove the switch here...<br />
stat_done<br />
;;<br />
restart)<br />
$0 stop<br />
sleep 1<br />
# Aem.. As long as stop) is not implemented, this just fails<br />
$0 start<br />
;;<br />
*)<br />
echo "usage: $0 {start|stop|restart}" <br />
esac<br />
exit 0<br />
<br />
Well, I know it is dirty and could be more configurable. Feel free to improve it. VDE has an rc script too, but I had to make one anyway for the IP forwarding stuff.<br />
<br />
====Alternative method====<br />
If the above method does not work or you do not want to mess with kernel configs, TUN, dnsmasq and iptables, you can do the following for the same result.<br />
<br />
# vde_switch -daemon -mod 660 -group kvm<br />
<br />
# slirpvde --dhcp --daemon<br />
<br />
Then to start the vm with a connection to the network of the host:<br />
<br />
$ kvm -net nic,macaddr=52:54:00:00:EE:03 -net vde whatever.qcow<br />
<br />
=== Improving networking performance ===<br />
<br />
The performance of virtual networking should be better with tap devices and bridges than with user-mode networking or vde, since tap devices and bridges are implemented in-kernel.<br />
<br />
In addition, networking performance can be improved by assigning virtual machines a [http://wiki.libvirt.org/page/Virtio virtio] network device rather than the default emulation of an e1000 NIC. To do this, add a {{ic|<nowiki>model=virtio</nowiki>}} flag to the {{ic|-net nic}} option:<br />
<br />
-net nic,model=virtio<br />
<br />
This will only work if the guest machine has a driver for virtio network devices. Linux does, and the required driver ('''virtio_net''') is included with Arch Linux, but there is no guarantee that virtio networking will work with arbitrary operating systems. There do exist [[#Virtio drivers for Windows|virtio drivers for Windows]], but you need to install them manually.<br />
<br />
== Graphics ==<br />
QEMU can use the following different graphic outputs: std, cirrus, vmware, qxl, xenfb and vnc.<br />
With the {{ic|vnc}} option you can run your guest standalone and connect to it via VNC. Other options are using {{ic|std}}, {{ic|vmware}}, {{ic|cirrus}}.<br />
<br />
===std===<br />
With {{ic|-vga std}} you can get a resolution of up to 2560 x 1600 pixels.<br />
<br />
===vmware===<br />
Although it is a bit buggy, it performs better than std and cirrus. On the guest, install the VMware drivers. For Arch Linux guests:<br />
# pacman -S xf86-video-vmware xf86-input-vmmouse<br />
<br />
===none===<br />
<br />
If you do not want to see the graphical output from your virtual machine because you will be accessing it entirely through the network or serial port, you can run QEMU with the {{ic|-nographic}} option.<br />
<br />
== Graphical front-ends for QEMU ==<br />
<br />
Unlike other virtualization programs such as [[VirtualBox]] and [[VMware]], QEMU does not provide a GUI to manage virtual machines (other than the window that appears when running a virtual machine), nor does it provide a way to create persistent virtual machines with saved settings. All parameters to run a virtual machine must be specified on the command line at every launch, unless you have created a custom script to start your virtual machine(s). However, there are several GUI front-ends for QEMU:<br />
<br />
* virt-manager (part of [[libvirt]])<br />
* {{Pkg|qemu-launcher}}<br />
* qemulator (AUR)<br />
* {{Pkg|qtemu}}<br />
<br />
== Windows-specific notes ==<br />
=== Choosing a Windows version ===<br />
<br />
QEMU can run any version of Windows. However, 98, Me and XP will run at quite a low speed. You should choose either Windows 95 or Windows 2000. Surprisingly, 2000 seems to run faster than 98. The fastest one is 95, which can from time to time make you forget that you are running an emulator :)<br />
<br />
If you own both Win95 and Win98/WinME, then 98lite (from http://www.litepc.com) might be worth trying. It decouples Internet Explorer from the operating system and replaces it with the original Windows 95 Explorer. It also enables you to do a minimal Windows installation, without all the bloat you normally cannot disable. This might be the best option, because you get the smallest, fastest and most stable Windows this way.<br />
<br />
It is possible to run [[Windows PE]] in QEMU.<br />
<br />
=== Windows 95 boot floppy ===<br />
<br />
If you are using the Windows 95 boot floppy, choosing SAMSUNG as the type of CD-ROM seems to work.<br />
<br />
=== Windows 2000 installation bug ===<br />
<br />
There are problems when installing Windows 2000. Windows setup will generate a lot of edb*.log files, one after the other containing nothing but blank spaces in {{ic|C:\WINNT\SECURITY}} which quickly fill the virtual hard disk. A workaround is to open a Windows command prompt as early as possible during setup (by pressing {{Keypress|Shift+F10}}) which will allow you to remove these log files as they appear by typing:<br />
del %windir%\security\*.log<br />
<br />
{{Note|According to the official QEMU website, "Windows 2000 has a bug which gives a disk full problem during its installation. When installing it, use the {{ic|-win2k-hack}} QEMU option to enable a specific workaround. After Windows 2000 is installed, you no longer need this option (this option slows down the IDE transfers)."}}<br />
<br />
=== Optimizing Windows 9X CPU usage ===<br />
<br />
Windows 9X uses an idle loop instead of the HLT (halt) instruction. Consequently, the emulator will consume all CPU resources when running Windows 9X guests &mdash; even if no work is being done. This only applies to DOS and DOS-based Windows versions (3.X, 95/98/ME) &mdash; NT-based and later Windows versions are not affected.<br />
<br />
To resolve this issue, install [http://www.benchtest.com/rain.html Rain], [http://www.benchtest.com/wfp.html Waterfall] or [http://www.benchtest.com/cpuidle.html CpuIdle] in the Windows 9X guest. (Rain might be preferred because it does only what is needed &mdash; replacing the idle loop with the HLT instruction &mdash; and nothing more.)<br />
<br />
See [https://forums.virtualbox.org/viewtopic.php?f=28&t=9918 Tutorial: Windows 95/98 guest OSes] for more information.<br />
<br />
===Remote Desktop Protocol===<br />
If you use a MS Windows guest, you might want to use RDP to connect to your guest VM. If you are using a VLAN or are not in the same network as the guest, use:<br />
$ qemu -nographic -net user,hostfwd=tcp::5555-:3389<br />
Then connect with either rdesktop or freerdp to the guest, for example:<br />
$ xfreerdp -g 2048x1152 localhost:5555 -z -x lan<br />
<br />
=== Windows virtio drivers ===<br />
<br />
You can use [http://wiki.libvirt.org/page/Virtio virtio] devices with Windows if you install the [http://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio guest drivers] for Windows.<br />
<br />
== General problems ==<br />
<br />
=== Keyboard seems broken or the arrow keys do not work ===<br />
Should you find that some of your keys do not work or "press" the wrong key (in particular, the arrow keys), you likely need to specify your keyboard layout as an option. The keyboard layouts can be found in {{ic|/usr/share/qemu/keymaps}}.<br />
{{bc|<br />
qemu -k [keymap] [disk_image]<br />
}}<br />
<br />
=== Virtual machine runs too slowly ===<br />
<br />
There are a number of techniques that you can use to improve the performance of your virtual machine. For example:<br />
<br />
* Use KVM if possible (see [[#KVM method (hardware virtualization)]]).<br />
* Make sure you have assigned the virtual machine enough memory. By default, QEMU only assigns 128MiB of memory to each virtual machine. Use the {{ic|-m}} option to assign more memory. For example, {{ic|-m 1024}} runs a virtual machine with 1024MiB of memory.<br />
* If the host machine has multiple CPUs, assign the guest more CPUs using the {{ic|-smp}} option.<br />
* Use the {{ic|-cpu host}} option to make QEMU emulate the host's exact CPU. If you don't do this, it may be trying to emulate a more generic CPU.<br />
* If supported by drivers in the guest operating system, use [http://wiki.libvirt.org/page/Virtio virtio] for network and/or block devices. For example:<br />
$ qemu -net nic,model=virtio -net tap,if=tap0,script=no -drive file=mydisk.raw,media=disk,if=virtio<br />
* [[#Tap networking with QEMU|Use TAP devices]] instead of user-mode networking.<br />
* If the guest OS is doing heavy writing to its disk, you may benefit from certain mount options on the host's filesystem. For example, you can mount an [[Ext4|ext4 filesystem]] with the option {{ic|<nowiki>barrier=0</nowiki>}}. You should read the documentation for any options that you change, since sometimes performance-enhancing options for filesystems come at the cost of data integrity.<br />
* If you are running multiple virtual machines concurrently that all have the same operating system installed, you can save memory by enabling [http://en.wikipedia.org/wiki/Kernel_SamePage_Merging_(KSM) kernel same-page merging]:<br />
# echo 1 > /sys/kernel/mm/ksm/run<br />
* In some cases, memory can be reclaimed from running virtual machines by running a memory ballooning driver in the guest operating system and launching QEMU with the {{ic|-balloon virtio}} option.<br />
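A command line combining several of these techniques might look like the following (a sketch, assuming KVM support, a preconfigured TAP device named {{ic|tap0}} and a raw disk image {{ic|mydisk.raw}}; adjust the values to your hardware):<br />
 $ qemu -enable-kvm -cpu host -smp 2 -m 1024 \<br />
     -net nic,model=virtio -net tap,ifname=tap0,script=no \<br />
     -drive file=mydisk.raw,media=disk,if=virtio -balloon virtio<br />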
<br />
==Starting QEMU virtual machines on boot==<br />
<br />
===With libvirt===<br />
<br />
If a virtual machine is set up with [[libvirt]], it can be configured through the virt-manager GUI to start at host boot by going to the Boot Options for the virtual machine and selecting "Start virtual machine on host boot up".<br />
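The same can also be configured from the command line with {{ic|virsh}} (assuming the libvirt domain is named {{ic|vm1}}):<br />
 # virsh autostart vm1<br />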
<br />
===Custom script===<br />
To run QEMU VMs on boot, you can use the following rc-script and configuration file.<br />
<br />
{| border="1"<br />
|+ Config file options<br />
|-<br />
| QEMU_MACHINES || List of VMs to start<br />
|-<br />
| qemu_${vm}_type || QEMU binary to call. If specified, it will be prefixed with {{ic|/usr/bin/qemu-}} and the resulting binary will be used to start the VM. For example, you can boot qemu-system-arm images by setting qemu_my_arm_vm_type="system-arm". If not specified, {{ic|/usr/bin/qemu}} will be used.<br />
|-<br />
| qemu_${vm} || QEMU command line to start the VM with. It will always be prepended with {{ic|-name ${vm} -pidfile /var/run/qemu/${vm}.pid -daemonize -nographic}}.<br />
|-<br />
| qemu_${vm}_haltcmd || Command to shut down the VM safely. For example, you can use {{ic|-monitor telnet:..}} and power off the VM via ACPI by sending {{ic|system_powerdown}} to the monitor, or use SSH or some other method.<br />
|-<br />
| qemu_${vm}_haltcmd_wait || How long to wait for a safe VM shutdown. The default is 30 seconds; the rc-script will kill the QEMU process after this timeout.<br />
|}<br />
<br />
Config file example:<br />
{{hc|/etc/conf.d/qemu.conf|<nowiki><br />
# VMs that should be started on boot<br />
# use the ! prefix to disable starting/stopping a VM<br />
QEMU_MACHINES=(vm1 vm2)<br />
<br />
# NOTE: following options will be prepended to qemu_${vm}<br />
# -name ${vm} -pidfile /var/run/qemu/${vm}.pid -daemonize -nographic<br />
<br />
qemu_vm1_type="system-x86_64"<br />
<br />
qemu_vm1="-enable-kvm -m 512 -hda /dev/mapper/vg0-vm1 -net nic,macaddr=DE:AD:BE:EF:E0:00 \<br />
-net tap,ifname=tap0 -serial telnet:localhost:7000,server,nowait,nodelay \<br />
-monitor telnet:localhost:7100,server,nowait,nodelay -vnc :0"<br />
<br />
qemu_vm1_haltcmd="echo 'system_powerdown' | nc.openbsd localhost 7100" # or netcat/ncat<br />
<br />
# You can use other ways to shutdown your VM correctly<br />
#qemu_vm1_haltcmd="ssh powermanager@vm1 sudo poweroff"<br />
<br />
# By default rc-script will wait 30 seconds before killing VM. Here you can change this timeout.<br />
#qemu_vm1_haltcmd_wait="30"<br />
<br />
qemu_vm2="-enable-kvm -m 512 -hda /srv/kvm/vm2.img -net nic,macaddr=DE:AD:BE:EF:E0:01 \<br />
-net tap,ifname=tap1 -serial telnet:localhost:7001,server,nowait,nodelay \<br />
-monitor telnet:localhost:7101,server,nowait,nodelay -vnc :1"<br />
<br />
qemu_vm2_haltcmd="echo 'system_powerdown' | nc.openbsd localhost 7101"<br />
</nowiki>}}<br />
<br />
rc-script:<br />
{{hc|/etc/rc.d/qemu|<nowiki><br />
#!/bin/bash<br />
. /etc/rc.conf<br />
. /etc/rc.d/functions<br />
<br />
[ -f /etc/conf.d/qemu.conf ] && source /etc/conf.d/qemu.conf<br />
<br />
PIDDIR=/var/run/qemu<br />
QEMU_DEFAULT_FLAGS='-name ${vm} -pidfile ${PIDDIR}/${vm}.pid -daemonize -nographic'<br />
QEMU_HALTCMD_WAIT=30<br />
<br />
case "$1" in<br />
start)<br />
[ -d "${PIDDIR}" ] || mkdir -p "${PIDDIR}"<br />
for vm in "${QEMU_MACHINES[@]}"; do<br />
if [ "${vm}" = "${vm#!}" ]; then<br />
stat_busy "Starting QEMU VM: ${vm}"<br />
eval vm_cmdline="\$qemu_${vm}"<br />
eval vm_type="\$qemu_${vm}_type"<br />
<br />
if [ -n "${vm_type}" ]; then<br />
vm_cmd="/usr/bin/qemu-${vm_type}"<br />
else<br />
vm_cmd='/usr/bin/qemu'<br />
fi<br />
<br />
eval "qemu_flags=\"${QEMU_DEFAULT_FLAGS}\""<br />
<br />
${vm_cmd} ${qemu_flags} ${vm_cmdline} >/dev/null<br />
if [ $? -gt 0 ]; then<br />
stat_fail<br />
else<br />
stat_done<br />
fi<br />
fi<br />
done<br />
add_daemon qemu<br />
;;<br />
<br />
stop)<br />
for vm in "${QEMU_MACHINES[@]}"; do<br />
if [ "${vm}" = "${vm#!}" ]; then<br />
# check pidfile presence and permissions<br />
if [ ! -r "${PIDDIR}/${vm}.pid" ]; then<br />
continue<br />
fi<br />
<br />
stat_busy "Stopping QEMU VM: ${vm}"<br />
<br />
eval vm_haltcmd="\$qemu_${vm}_haltcmd"<br />
eval vm_haltcmd_wait="\$qemu_${vm}_haltcmd_wait"<br />
vm_haltcmd_wait=${vm_haltcmd_wait:-${QEMU_HALTCMD_WAIT}}<br />
vm_pid=$(cat ${PIDDIR}/${vm}.pid)<br />
<br />
# check process existence<br />
if ! kill -0 ${vm_pid} 2>/dev/null; then<br />
stat_done<br />
rm -f "${PIDDIR}/${vm}.pid"<br />
continue<br />
fi<br />
<br />
# Try to shutdown VM safely<br />
_vm_running='yes'<br />
if [ -n "${vm_haltcmd}" ]; then<br />
eval ${vm_haltcmd} >/dev/null<br />
<br />
_w=0<br />
while [ "${_w}" -lt "${vm_haltcmd_wait}" ]; do<br />
sleep 1<br />
if ! kill -0 ${vm_pid} 2>/dev/null; then<br />
# no such process<br />
_vm_running=''<br />
break<br />
fi<br />
_w=$((_w + 1))<br />
done<br />
<br />
else<br />
# No haltcmd - kill VM unsafely<br />
_vm_running='yes'<br />
fi<br />
<br />
if [ -n "${_vm_running}" ]; then<br />
# kill VM unsafely<br />
kill ${vm_pid} 2>/dev/null<br />
sleep 1<br />
fi<br />
<br />
# report status<br />
if kill -0 ${vm_pid} 2>/dev/null; then<br />
# VM is still alive<br />
#kill -9 ${vm_pid}<br />
stat_fail<br />
else<br />
stat_done<br />
fi<br />
<br />
# remove pidfile<br />
rm -f "${PIDDIR}/${vm}.pid"<br />
fi<br />
done<br />
rm_daemon qemu<br />
;;<br />
<br />
restart)<br />
$0 stop<br />
sleep 1<br />
$0 start<br />
;;<br />
<br />
*)<br />
echo "usage: $0 {start|stop|restart}"<br />
<br />
esac<br />
</nowiki>}}<br />
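With the configuration and rc-script in place, make the script executable and start it manually to test (this assumes the traditional initscripts setup and the paths used above):<br />
 # chmod +x /etc/rc.d/qemu<br />
 # /etc/rc.d/qemu start<br />
To have the VMs started automatically at boot, add {{ic|qemu}} to the {{ic|DAEMONS}} array in {{ic|/etc/rc.conf}}.<br />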
<br />
== Spice support ==<br />
<br />
The Spice project aims to provide a complete open source solution for interaction with virtualized desktop devices. Its main focus is to provide high-quality remote access to QEMU virtual machines. See the [http://spice-space.org/ Spice project homepage] for details.<br />
<br />
The official QEMU package is built without Spice support. To build a version with Spice enabled, you need the [[Arch Build System]] installed on your system.<br />
<br />
Install {{aur|spice}} from the [[Arch User Repository|AUR]] first.<br />
<br />
Then update ABS on your system to the latest version and copy {{ic|/var/abs/extra/qemu}} (for QEMU users) or {{ic|/var/abs/extra/qemu-kvm}} (for QEMU-KVM users) to a directory of your choice (here {{ic|~/temp/}} is used as an example):<br />
$ sudo abs<br />
$ cp -r /var/abs/extra/qemu ~/temp<br />
<br />
Go to your copy of the package folder (here {{ic|~/temp/qemu}} or {{ic|~/temp/qemu-kvm}}) and add {{ic|--enable-spice}} after {{ic|./configure}} in the build() function of the [[PKGBUILD]]:<br />
$ cd ~/temp/qemu<br />
 $ sed -i "s/\.\/configure/& --enable-spice/g" PKGBUILD<br />
<br />
Then build and install the package:<br />
$ makepkg -i<br />
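Once the rebuilt package is installed, QEMU can be started with a Spice display (a sketch; the port number and {{ic|disk_image}} are placeholders, and these options require the Spice-enabled build):<br />
 $ qemu -vga qxl -spice port=5930,disable-ticketing disk_image<br />
A Spice client such as {{ic|spicec}} can then connect to the guest, for example:<br />
 $ spicec -h 127.0.0.1 -p 5930<br />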
<br />
==See also==<br />
*[http://qemu.org Official QEMU website]<br />
*[http://www.linux-kvm.org Official KVM website]<br />
*[http://en.wikibooks.org/wiki/QEMU QEMU Wikibook]<br />
*''[http://alien.slackbook.org/dokuwiki/doku.php?id=slackware:qemu Hardware virtualization with QEMU]'' by AlienBOB<br />
*''[http://blog.falconindy.com/articles/build-a-virtual-army.html Building a Virtual Army]'' by Falconindy</div>Vostok4