Difference between revisions of "QEMU"

[[Category:Emulators]]
[[Category:Virtualization]]
[[Category:Hypervisors]]
 
 
[[de:Qemu]]
[[es:QEMU]]
 
 
[[fr:Qemu]]
[[ja:QEMU]]
 
 
[[zh-CN:QEMU]]
{{Related articles start}}
{{Related|:Category:Hypervisors}}
{{Related|Libvirt}}
{{Related articles end}}
  
{{Out of date|[https://www.archlinux.org/news/deprecation-of-net-tools net-tools] is deprecated. [[QEMU#Networking|Networking]] section needs updating. |QEMU#Networking}}
According to the [http://wiki.qemu.org/Main_Page QEMU about page], "QEMU is a generic and open source machine emulator and virtualizer."

When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your x86 PC). By using dynamic translation, it achieves very good performance.

QEMU can use other hypervisors like [[Xen]] or [[KVM]] to use CPU extensions ([[Wikipedia:Hardware-assisted virtualization|HVM]]) for virtualization. When used as a virtualizer, QEMU achieves near native performance by executing the guest code directly on the host CPU.

== Installation ==
[[Install]] the {{Pkg|qemu}} package, as well as any of the below optional packages, depending on your needs:
  
* {{Pkg|qemu-arch-extra}} - extra architectures support
* {{Pkg|qemu-block-gluster}} - glusterfs block support
* {{Pkg|qemu-block-iscsi}} - iSCSI block support
* {{Pkg|qemu-block-rbd}} - RBD block support
* {{Pkg|samba}} - SMB/CIFS server support
== Graphical front-ends for QEMU ==
  
Unlike other virtualization programs such as [[VirtualBox]] and [[VMware]], QEMU does not provide a GUI to manage virtual machines (other than the window that appears when running a virtual machine), nor does it provide a way to create persistent virtual machines with saved settings. All parameters to run a virtual machine must be specified on the command line at every launch, unless you have created a custom script to start your virtual machine(s). However, there are several GUI front-ends for QEMU:

* {{Pkg|qemu-launcher}}
* {{Pkg|qtemu}}
* {{AUR|aqemu}}

Additional front-ends with QEMU support are available for [[libvirt]].
== Creating new virtualized system ==

=== Creating a hard disk image ===

{{Tip|See the [https://en.wikibooks.org/wiki/QEMU/Images QEMU Wikibook] for more information on QEMU images.}}
  
To run QEMU you will need a hard disk image, unless you are booting a live system from CD-ROM or the network (and not doing so to install an operating system to a hard disk image). A hard disk image is a file which stores the contents of the emulated hard disk.
  
A hard disk image can be ''raw'', so that it is literally byte-by-byte the same as what the guest sees, and will always use the full capacity of the guest hard drive on the host. This method provides the least I/O overhead, but can waste a lot of space, as not-used space on the guest cannot be used on the host.
  
Alternatively, the hard disk image can be in a format such as ''qcow2'' which only allocates space to the image file when the guest operating system actually writes to those sectors on its virtual hard disk. The image appears as the full size to the guest operating system, even though it may take up only a very small amount of space on the host system. Using this format instead of ''raw'' will likely affect performance.
  
QEMU provides the {{ic|qemu-img}} command to create hard disk images. For example to create a 4 GB image in the ''raw'' format:
  
  $ qemu-img create -f raw ''image_file'' 4G
  
You may use {{ic|-f qcow2}} to create a ''qcow2'' disk instead.
  
{{Note|You can also simply create a ''raw'' image by creating a file of the needed size using {{ic|dd}} or {{ic|fallocate}}.}}
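Since a ''raw'' image is just a file, the tools mentioned in the note can create one directly. A minimal sketch, assuming GNU coreutils ({{ic|myimage.raw}} is a placeholder name):

```shell
# Create a 4 GiB raw image as a sparse file; no disk space is
# allocated until the guest actually writes to it.
truncate -s 4G myimage.raw

# Apparent size: 4294967296 bytes ...
stat -c '%s' myimage.raw

# ... but actual allocation on disk is (close to) zero:
du -k myimage.raw
```

{{ic|qemu-img create -f raw}} likewise produces a sparse file on file systems that support it.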
  
{{Warning|If you store the hard disk images on a [[Btrfs]] file system, you should consider disabling [[Btrfs#Copy-On-Write_.28CoW.29|Copy-on-Write]] for the directory before creating any images.}}
  
==== Overlay storage images ====
  
You can create a storage image once (the 'backing' image) and have QEMU keep mutations to this image in an overlay image. This allows you to revert to a previous state of this storage image. You could revert by creating a new overlay image at the time you wish to revert, based on the original backing image.
  
To create an overlay image, issue a command like:
  
  $ qemu-img create -o backing_file=''img1.raw'',backing_fmt=''raw'' -f ''qcow2'' ''img1.cow''
  
After that you can run your QEMU VM as usual (see [[#Running virtualized system]]):
  
$ qemu-system-i386 ''img1.cow''
  
The backing image will then be left intact and mutations to this storage will be recorded in the overlay image file.
  
When the path to the backing image changes, repair is required.
  
{{Warning|The backing image's absolute filesystem path is stored in the (binary) overlay image file. Changing the backing image's path requires some effort.}}
  
Make sure that the original backing image's path still leads to this image. If necessary, make a symbolic link at the original path to the new path. Then issue a command like:
  
  $ qemu-img rebase -b ''/new/img1.raw'' ''/new/img1.cow''
  
At your discretion, you may alternatively perform an 'unsafe' rebase where the old path to the backing image is not checked:
  
$ qemu-img rebase -u -b ''/new/img1.raw'' ''/new/img1.cow''
  
==== Resizing an image ====
  
{{Warning|Resizing an image containing an NTFS boot file system could make the operating system installed on it unbootable. For full explanation and workaround see [http://tjworld.net/wiki/Howto/ResizeQemuDiskImages].}}
  
The {{ic|qemu-img}} executable has the {{ic|resize}} option, which enables easy resizing of a hard drive image. It works for both ''raw'' and ''qcow2''. For example, to increase image space by 10 GB, run:
  
$ qemu-img resize ''disk_image'' +10G
  
After enlarging the disk image, you must use file system and partitioning tools inside the virtual machine to actually begin using the new space. When shrinking a disk image, you must '''first reduce the allocated file systems and partition sizes''' using the file system and partitioning tools inside the virtual machine and then shrink the disk image accordingly, otherwise shrinking the disk image will result in data loss!
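To estimate a safe minimum size before shrinking, take the end sector of the last partition from {{ic|fdisk -l}} and convert it to bytes. A sketch; the sector value here is hypothetical:

```shell
# fdisk -l reports partition boundaries in 512-byte sectors.
# The image must cover the last partition's end sector inclusively,
# hence the +1 before converting to bytes.
end_sector=8388607   # hypothetical value read from 'fdisk -l disk_image'
min_bytes=$(( (end_sector + 1) * 512 ))
echo "$min_bytes"    # prints 4294967296, i.e. 4 GiB
```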
  
=== Preparing the installation media ===
  
To install an operating system into your disk image, you need the installation medium (e.g. optical disk, USB-drive, or ISO image) for the operating system. The installation medium should not be mounted because QEMU accesses the media directly.
  
{{Tip|If using an optical disk, it is a good idea to first dump the media to a file because this both improves performance and does not require you to have direct access to the devices (that is, you can run QEMU as a regular user without having to change access permissions on the media's device file). For example, if the CD-ROM device node is named {{ic|/dev/cdrom}}, you can dump it to a file with the command: {{bc|1=$ dd if=/dev/cdrom of=''cd_image.iso''}}}}
=== Installing the operating system ===

This is the first time you will need to start the emulator. To install the operating system on the disk image, you must attach both the disk image and the installation media to the virtual machine, and have it boot from the installation media.

For example on i386 guests, to install from a bootable ISO file as CD-ROM and a raw disk image:

 $ qemu-system-i386 -cdrom ''iso_image'' -boot order=d -drive file=''disk_image'',format=raw

See {{ic|qemu(1)}} for more information about loading other media types (such as floppy, disk images or physical drives) and [[#Running virtualized system]] for other useful options.

After the operating system has finished installing, the QEMU image can be booted directly (see [[#Running virtualized system]]).

{{Warning|By default only 128 MB of memory is assigned to the machine. The amount of memory can be adjusted with the {{ic|-m}} switch, for example {{ic|-m 512M}} or {{ic|-m 2G}}.}}

{{Tip|
* Instead of specifying {{ic|1=-boot order=x}}, some users may feel more comfortable using a boot menu: {{ic|1=-boot menu=on}}, at least during configuration and experimentation.
* If you need to replace floppies or CDs as part of the installation process, you can use the QEMU machine monitor (press {{ic|Ctrl+Alt+2}} in the virtual machine's window) to remove and attach storage devices to a virtual machine. Type {{ic|info block}} to see the block devices, and use the {{ic|change}} command to swap out a device. Press {{ic|Ctrl+Alt+1}} to go back to the virtual machine.}}
== Running virtualized system ==

{{ic|qemu-system-*}} binaries (for example {{ic|qemu-system-i386}} or {{ic|qemu-system-x86_64}}, depending on guest's architecture) are used to run the virtualized guest. The usage is:

 $ qemu-system-i386 ''options'' ''disk_image''

Options are the same for all {{ic|qemu-system-*}} binaries, see {{ic|qemu(1)}} for documentation of all options.

By default, QEMU will show the virtual machine's video output in a window. One thing to keep in mind: when you click inside the QEMU window, the mouse pointer is grabbed. To release it, press {{ic|Ctrl+Alt}}.

{{Warning|QEMU should never be run as root. If you must launch it in a script as root, you should use the {{ic|-runas}} option to make QEMU drop root privileges.}}
=== Enabling KVM ===

KVM must be supported by your processor and kernel, and necessary [[kernel modules]] must be loaded. See [[KVM]] for more information.

To start QEMU in KVM mode, append {{ic|-enable-kvm}} to the additional start options. To check if KVM is enabled for a running VM, enter the QEMU [https://en.wikibooks.org/wiki/QEMU/Monitor Monitor] using {{ic|Ctrl+Alt+Shift+2}}, and type {{ic|info kvm}}.
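From a host shell, a quick sanity check of the KVM prerequisites can look like this (a sketch only; see [[KVM]] for the full checks):

```shell
# Does the CPU advertise hardware virtualization?
# (Intel VT-x shows up as the 'vmx' flag, AMD-V as 'svm'.)
grep -qE 'vmx|svm' /proc/cpuinfo && echo "CPU supports hardware virtualization"

# Is /dev/kvm present, i.e. are the kvm kernel modules loaded?
if [ -e /dev/kvm ]; then
    echo "/dev/kvm exists, KVM is available"
else
    echo "/dev/kvm missing, check BIOS settings and kernel modules"
fi
```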
{{Note|
* If you start your VM with a GUI tool and experience very bad performance, you should check for proper KVM support, as QEMU may be falling back to software emulation.
* KVM needs to be enabled in order to start Windows 7 and Windows 8 properly without a ''blue screen''.
}}
=== Enabling IOMMU (Intel VT-d/AMD-Vi) support ===

Using IOMMU enables features such as PCI passthrough and memory protection from faulty or malicious devices; see [[wikipedia:Input-output memory management unit#Advantages]] and [https://www.quora.com/Memory-Management-computer-programming/Could-you-explain-IOMMU-in-plain-English Memory Management (computer programming): Could you explain IOMMU in plain English?].

To enable IOMMU:
#Ensure that AMD-Vi/Intel VT-d is supported by the CPU and is enabled in the BIOS settings.
#Add {{ic|1=intel_iommu=on}} if you have an Intel CPU or {{ic|1=amd_iommu=on}} if you have an AMD CPU, to the [[kernel parameters]].
#Reboot and ensure IOMMU is enabled by checking {{ic|dmesg}} for {{ic|DMAR}}: {{ic|[0.000000] DMAR: IOMMU enabled}}
#Depending on the {{ic|-machine}} type, add {{ic|1=iommu=on}} or {{ic|1=q35,iommu=on}} as an option.
  
 
== Moving data between host and guest OS ==
 
 
=== Network ===
  
Data can be shared between the host and guest OS using any network protocol that can transfer files, such as [[NFS]], [[SMB]], [[Wikipedia:Network Block Device|NBD]], HTTP, [[Very Secure FTP Daemon|FTP]], or [[SSH]], provided that you have set up the network appropriately and enabled the appropriate services.
  
The default user-mode networking allows the guest to access the host OS at the IP address 10.0.2.2. Any servers that you are running on your host OS, such as a SSH server or SMB server, will be accessible at this IP address. So on the guests, you can mount directories exported on the host via [[SMB]] or [[NFS]], or you can access the host's HTTP server, etc.
 
It will not be possible for the host OS to access servers running on the guest OS, but this can be done with other network configurations (see [[#Tap networking with QEMU]]).
  
 
=== QEMU's built-in SMB server ===
  
QEMU's documentation says it has a "built-in" SMB server, but actually it just starts up [[Samba]] with an automatically generated {{ic|smb.conf}} file located at {{ic|/tmp/qemu-smb.''pid''-0/smb.conf}} and makes it accessible to the guest at a different IP address (10.0.2.4 by default). This only works for user networking, and this is not necessarily very useful since the guest can also access the normal [[Samba]] service on the host if you have set up shares on it.
 
  
 
To enable this feature, start QEMU with a command like:
 
  
$ qemu-system-i386 ''disk_image'' -net nic -net user,smb=''shared_dir_path''
  
where {{ic|''shared_dir_path''}} is a directory that you want to share between the guest and host.
Then, in the guest, you will be able to access the shared directory on the host 10.0.2.4 with the share name "qemu". For example, in Windows Explorer you would go to {{ic|\\10.0.2.4\qemu}}.

{{Note|
* If you specify the {{ic|smb}} sharing option multiple times, like {{ic|1=-net user,smb=''shared_dir_path1'' -net user,smb=''shared_dir_path2''}} or {{ic|1=-net user,smb=''shared_dir_path1'',smb=''shared_dir_path2''}}, only the last defined share will be available.
* If you cannot access the shared folder and the guest system is Windows, check that the [http://ecross.mvps.org/howto/enable-netbios-over-tcp-ip-with-windows.htm NetBIOS protocol is enabled] and that a firewall does not block [http://technet.microsoft.com/en-us/library/cc940063.aspx ports] used by the NetBIOS protocol.
}}
  
 
=== Mounting a partition inside a raw disk image ===
  
When the virtual machine is not running, it is possible to mount partitions that are inside a raw disk image file by setting them up as loopback devices. This does not work with disk images in special formats, such as qcow2, although those can be mounted using {{ic|qemu-nbd}}.
  
{{Warning|You must make sure to unmount the partitions before running the virtual machine again. Otherwise, data corruption is very likely to occur.}}
  
 
==== With manually specifying byte offset ====
  
 
One way to mount a disk image partition is to mount the disk image at a certain offset using a command like the following:
 
  
# mount -o loop,offset=32256 ''disk_image'' ''mountpoint''
  
The {{ic|1=offset=32256}} option is actually passed to the {{ic|losetup}} program to set up a loopback device that starts at byte offset 32256 of the file and continues to the end. This loopback device is then mounted. You may also use the {{ic|sizelimit}} option to specify the exact size of the partition, but this is usually unnecessary.
Depending on your disk image, the needed partition may not start at offset 32256. Run {{ic|fdisk -l ''disk_image''}} to see the partitions in the image. fdisk gives the start and end offsets in 512-byte sectors, so multiply by 512 to get the correct offset to pass to {{ic|mount}}.
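For example, if {{ic|fdisk -l}} reports a partition starting at sector 2048 (a common alignment), the byte offset to pass to {{ic|mount}} works out as follows (the sector value is hypothetical):

```shell
# Convert a start sector reported by fdisk -l into a byte offset
# suitable for mount's offset= option (sectors are 512 bytes).
start_sector=2048    # hypothetical value from 'fdisk -l disk_image'
echo $(( start_sector * 512 ))   # prints 1048576
```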
  
 
==== With loop module autodetecting partitions ====
  
The Linux loop driver actually supports partitions in loopback devices, but it is disabled by default. To enable it, do the following:
  
 
* Get rid of all your loopback devices (unmount all mounted images, etc.).
* [[Kernel_modules#Manual_module_handling|Unload]] the {{ic|loop}} kernel module, and load it with the {{ic|1=max_part=15}} parameter set. Additionally, the maximum number of loop devices can be controlled with the {{ic|max_loop}} parameter:
 # modprobe -r loop
 # modprobe loop max_part=15
  
{{Tip|You can put an entry in {{ic|/etc/modprobe.d}} to load the loop module with {{ic|1=max_part=15}} every time, or you can put {{ic|1=loop.max_part=15}} on the kernel command-line, depending on whether you have the {{ic|loop.ko}} module built into your kernel or not.}}
  
 
Set up your image as a loopback device:
 
  
# losetup -f -P ''disk_image''
Then, if the device created was {{ic|/dev/loop0}}, additional devices {{ic|/dev/loop0pX}} will have been automatically created, where X is the number of the partition. These partition loopback devices can be mounted directly. For example:
  # mount /dev/loop0p1 ''mountpoint''
  
 
==== With kpartx ====
  
'''kpartx''' from the {{AUR|multipath-tools}} package can read a partition table on a device and create a new device for each partition. For example:
  # kpartx -a ''disk_image''
  
This will set up the loopback device and create the necessary partition devices in {{ic|/dev/mapper/}}.
  
=== Mounting a partition inside a qcow2 image ===
  
You may mount a partition inside a qcow2 image using {{ic|qemu-nbd}}. See [http://en.wikibooks.org/wiki/QEMU/Images#Mounting_an_image_on_the_host Wikibooks].
  
=== Using any real partition as the single primary partition of a hard disk image ===
  
Sometimes, you may wish to use one of your system partitions from within QEMU. Using a raw partition for a virtual machine will improve performance, as the read and write operations do not go through the file system layer on the physical host. Such a partition also provides a way to share data between the host and guest.
  
In Arch Linux, device files for raw partitions are, by default, owned by ''root'' and the ''disk'' group. If you would like to have a non-root user be able to read and write to a raw partition, you need to change the owner of the partition's device file to that user.
{{Warning|
* Although it is possible, it is not recommended to allow virtual machines to alter critical data on the host system, such as the root partition.
* You must not mount a file system on a partition read-write on both the host and the guest at the same time. Otherwise, data corruption will result.
}}
  
 
After doing so, you can attach the partition to a QEMU virtual machine as a virtual disk.
  
However, things are a little more complicated if you want to have the ''entire'' virtual machine contained in a partition. In that case, there would be no disk image file to actually boot the virtual machine since you cannot install a bootloader to a partition that is itself formatted as a file system and not as a partitioned device with a MBR. Such a virtual machine can be booted either by specifying the [[kernel]] and [[initramfs|initrd]] manually, or by simulating a disk with a MBR by using linear [[RAID]].
  
 
==== By specifying kernel and initrd manually ====
 
==== By specifying kernel and initrd manually ====
  
QEMU supports loading [[Kernels|Linux kernels]] and [[initramfs|init ramdisks]] directly, thereby circumventing bootloaders such as [[GRUB]]. It then can be launched with the physical partition containing the root file system as the virtual disk, which will not appear to be partitioned. This is done by issuing a command similar to the following:

{{Note|In this example, it is the '''host's''' images that are being used, not the guest's. If you wish to use the guest's images, either mount {{ic|/dev/sda3}} read-only (to protect the file system from the host) and specify the {{ic|/full/path/to/images}}, or use some kexec hackery in the guest to reload the guest's kernel (extends boot time).}}

 $ qemu-system-i386 -kernel /boot/vmlinuz-linux -initrd /boot/initramfs-linux.img -append root=/dev/sda /dev/sda3

In the above example, the physical partition being used for the guest's root file system is {{ic|/dev/sda3}} on the host, but it shows up as {{ic|/dev/sda}} on the guest.

You may, of course, specify any kernel and initrd that you want, and not just the ones that come with Arch Linux.

When there are multiple [[kernel parameters]] to be passed to the {{ic|-append}} option, they need to be quoted using single or double quotes. For example:

 ... -append 'root=/dev/sda1 console=ttyS0'

==== Simulate virtual disk with MBR using linear RAID ====

A more complicated way to have a virtual machine use a physical partition, while keeping that partition formatted as a file system and not just having the guest partition the partition as if it were a disk, is to simulate a MBR for it so that it can boot using a bootloader such as GRUB.

You can do this using software [[RAID]] in linear mode (you need the {{ic|linear.ko}} kernel driver) and a loopback device: the trick is to dynamically prepend a master boot record (MBR) to the real partition you wish to embed in a QEMU raw disk image.
Suppose you have a plain, unmounted {{ic|/dev/hdaN}} partition with some file system on it you wish to make part of a QEMU disk image. First, you create some small file to hold the MBR:
  
 $ dd if=/dev/zero of=''/path/to/mbr'' count=32

Here, a 16 KB (32 * 512 bytes) file is created. It is important not to make it too small (even if the MBR only needs a single 512-byte block), since the smaller it is, the smaller the chunk size of the software RAID device will have to be, which could have an impact on performance.
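The size arithmetic can be sanity-checked on a throwaway file (the {{ic|mktemp}} path below is a stand-in, not your real MBR file):

```shell
# dd defaults to bs=512, so count=32 writes 32 * 512 = 16384 bytes (16 KiB).
mbr=$(mktemp)
dd if=/dev/zero of="$mbr" count=32 status=none
size=$(stat -c %s "$mbr")
echo "$size"   # 16384
# 16384 is also the byte offset at which the embedded partition will
# start inside the merged RAID device later on.
rm -f "$mbr"
```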
Then, you set up a loopback device to the MBR file:

 # losetup -f ''/path/to/mbr''

Let us assume the resulting device is {{ic|/dev/loop0}}, because we would not already have been using other loopbacks. The next step is to create the "merged" MBR + {{ic|/dev/hda''N''}} disk image using software RAID:

 # modprobe linear
 # mdadm --build --verbose /dev/md0 --chunk=16 --level=linear --raid-devices=2 /dev/loop0 /dev/hda''N''

The resulting {{ic|/dev/md0}} is what you will use as a QEMU raw disk image (do not forget to set the permissions so that the emulator can access it). The last (and somewhat tricky) step is to set the disk configuration (disk geometry and partition table) so that the primary partition start point in the MBR matches the one of {{ic|/dev/hda''N''}} inside {{ic|/dev/md0}} (an offset of exactly 16 * 512 = 16384 bytes in this example). Do this using {{ic|fdisk}} on the host machine, not in the emulator: the default raw disk detection routine from QEMU often results in non-kilobyte-roundable offsets (such as 31.5 KB, as in the previous section) that cannot be managed by the software RAID code. Hence, from the host:

 # fdisk /dev/md0
  
Press {{ic|X}} to enter the expert menu. Set the number of 's'ectors per track so that the size of one cylinder matches the size of your MBR file. For two heads and a sector size of 512, the number of sectors per track should be 16, so we get cylinders of size 2 * 16 * 512 = 16 KB.

Now, press {{ic|R}} to return to the main menu.

Press {{ic|P}} and check that the cylinder size is now 16 KB.

Now, create a single primary partition corresponding to {{ic|/dev/hda''N''}}. It should start at cylinder 2 and end at the end of the disk (note that the number of cylinders now differs from what it was when you entered fdisk).

Finally, 'w'rite the result to the file: you are done. You now have a partition you can mount directly from your host, as well as part of a QEMU disk image:
  
 $ qemu-system-i386 -hdc /dev/md0 ''[...]''

You can, of course, safely set any bootloader on this disk image using QEMU, provided the original {{ic|/dev/hda''N''}} partition contains the necessary tools.
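Before handing the device to a bootloader installer, you can double-check the geometry from the host; the single partition should start at sector 32, i.e. exactly where the embedded partition begins inside the merged device:

```shell
# Print the partition table of the assembled RAID device (run as root).
# The single primary partition should start at sector 32 (32 * 512 = 16384 bytes).
fdisk -l /dev/md0
```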
  
== Networking ==

{{Poor writing|Network topologies (sections [[#Host-only networking]], [[#Internal networking]] and info spread out across other sections) should not be described alongside the various virtual interface implementations, such as [[#User-mode networking]], [[#Tap networking with QEMU]], [[#Networking with VDE2]].}}

The performance of virtual networking should be better with tap devices and bridges than with user-mode networking or vde because tap devices and bridges are implemented in-kernel.

In addition, networking performance can be improved by assigning virtual machines a [http://wiki.libvirt.org/page/Virtio virtio] network device rather than the default emulation of an e1000 NIC. See [[#Installing virtio drivers]] for more information.

=== Link-level address caveat ===

By giving the {{ic|-net nic}} argument to QEMU, it will, by default, assign a virtual machine a network interface with the link-level address {{ic|52:54:00:12:34:56}}. However, when using bridged networking with multiple virtual machines, it is essential that each virtual machine has a unique link-level (MAC) address on the virtual machine side of the tap device. Otherwise, the bridge will not work correctly, because it will receive packets from multiple sources that have the same link-level address. This problem occurs even if the tap devices themselves have unique link-level addresses, because the source link-level address is not rewritten as packets pass through the tap device.

Make sure that each virtual machine has a unique link-level address, which should always start with {{ic|52:54:}}. Use the following option, replacing ''X'' with an arbitrary hexadecimal digit:

 $ qemu-system-i386 -net nic,macaddr=52:54:''XX:XX:XX:XX'' -net vde ''disk_image''

Generating unique link-level addresses can be done in several ways:

<ol>
<li>Manually specify a unique link-level address for each NIC. The benefit is that the DHCP server will assign the same IP address each time the virtual machine is run, but this is unusable for a large number of virtual machines.
</li>
<li>Generate a random link-level address each time the virtual machine is run. There is practically zero probability of collisions, but the downside is that the DHCP server will assign a different IP address each time. You can use the following command in a script to generate a random link-level address in a {{ic|macaddr}} variable:

{{bc|1=
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))
qemu-system-i386 -net nic,macaddr="$macaddr" -net vde ''disk_image''
}}

</li>
<li>Use the following script {{ic|qemu-mac-hasher.py}} to generate the link-level address from the virtual machine name using a hashing function. Given that the names of virtual machines are unique, this method combines the benefits of the aforementioned methods: it generates the same link-level address each time the script is run, yet it preserves the practically zero probability of collisions.

{{hc|qemu-mac-hasher.py|<nowiki>
#!/usr/bin/env python

import sys
import zlib

if len(sys.argv) != 2:
    print("usage: %s <VM Name>" % sys.argv[0])
    sys.exit(1)

crc = zlib.crc32(sys.argv[1].encode("utf-8")) & 0xffffffff
crc = "%08x" % crc
print("52:54:%s%s:%s%s:%s%s:%s%s" % tuple(crc))
</nowiki>}}

In a script, you can use for example:

 vm_name="''VM Name''"
 qemu-system-i386 -name "$vm_name" -net nic,macaddr=$(qemu-mac-hasher.py "$vm_name") -net vde ''disk_image''
</li>
</ol>
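The random method can be sanity-checked in a bash shell; the {{ic|printf}} pattern always produces a well-formed address in the {{ic|52:54:}} range:

```shell
# Generate a random MAC in the 52:54:xx:xx:xx:xx range (bash).
# Each RANDOM & 0xff term yields one byte, printed as two hex digits.
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" \
    $(( RANDOM & 0xff )) $(( RANDOM & 0xff )) $(( RANDOM & 0xff )) $(( RANDOM & 0xff ))
echo "$macaddr"
```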
=== User-mode networking ===
By default, without any {{ic|-netdev}} arguments, QEMU will use user-mode networking with a built-in DHCP server. Your virtual machines will be assigned an IP address when they run their DHCP client, and they will be able to access the physical host's network through IP masquerading done by QEMU.

{{Warning|This only works with the TCP and UDP protocols, so ICMP, including {{ic|ping}}, will not work. Do not use {{ic|ping}} to test network connectivity.}}

This default configuration allows your virtual machines to easily access the Internet, provided that the host is connected to it, but the virtual machines will not be directly visible on the external network, nor will virtual machines be able to talk to each other if you start up more than one concurrently.

QEMU's user-mode networking can offer more capabilities such as built-in TFTP or SMB servers, redirecting host ports to the guest (for example to allow SSH connections to the guest) or attaching guests to VLANs so that they can talk to each other. See the QEMU documentation on the {{ic|-net user}} flag for more details.

However, user-mode networking has limitations in both utility and performance. More advanced network configurations require the use of tap devices or other methods.
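For example, the port redirection mentioned above is often enough for SSH access without any tap setup. A sketch (host port 5555 and the guest account name are arbitrary placeholder choices, and sshd must be running in the guest):

```shell
# Forward host TCP port 5555 to the guest's port 22 (user-mode networking).
# disk_image stands in for your actual disk image file.
qemu-system-i386 -net nic -net user,hostfwd=tcp::5555-:22 disk_image
# Then, while the guest is running, connect from the host:
ssh -p 5555 guest_user@localhost
```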
  
 
=== Tap networking with QEMU ===

[[wikipedia:TUN/TAP|Tap devices]] are a Linux kernel feature that allows you to create virtual network interfaces that appear as real network interfaces. Packets sent to a tap interface are delivered to a userspace program, such as QEMU, that has bound itself to the interface.

QEMU can use tap networking for a virtual machine so that packets sent to the tap interface will be sent to the virtual machine and appear as coming from a network interface (usually an Ethernet interface) in the virtual machine. Conversely, everything that the virtual machine sends through its network interface will appear on the tap interface.

Tap devices are supported by the Linux bridge drivers, so it is possible to bridge together tap devices with each other and possibly with other host interfaces such as {{ic|eth0}}. This is desirable if you want your virtual machines to be able to talk to each other, or if you want other machines on your LAN to be able to talk to the virtual machines.

{{Warning|If you bridge together a tap device and some host interface, such as {{ic|eth0}}, your virtual machines will appear directly on the external network, which will expose them to possible attack. Depending on what resources your virtual machines have access to, you may need to take all the [[Firewalls|precautions]] you normally would take in securing a computer to secure your virtual machines. If the risk is too great, if the virtual machines have few resources, or if you set up multiple virtual machines, a better solution might be to use [[#Host-only networking|host-only networking]] and set up NAT. In this case you only need one firewall on the host instead of multiple firewalls for each guest.}}

As indicated in the user-mode networking section, tap devices offer higher networking performance than user-mode networking. If the guest OS supports the virtio network driver, then the networking performance will be increased considerably as well. Supposing the use of the {{ic|tap0}} device, that the virtio driver is used on the guest, and that no scripts are used to help start/stop networking, part of the qemu command one should see is:

 -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no

But if already using a tap device with the virtio networking driver, one can even boost the networking performance by enabling vhost, like:

 -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no,vhost=on

See http://www.linux-kvm.com/content/how-maximize-virtio-net-performance-vhost-net for more information.

==== Host-only networking ====

If the bridge is given an IP address and traffic destined for it is allowed, but no real interface (e.g. {{ic|eth0}}) is connected to the bridge, then the virtual machines will be able to talk to each other and the host system. However, they will not be able to talk to anything on the external network, provided that you do not set up IP masquerading on the physical host. This configuration is called ''host-only networking'' by other virtualization software such as [[VirtualBox]].

{{Tip|
* If you want to set up IP masquerading, e.g. NAT for virtual machines, see [[Internet sharing#Enable NAT]].
* You may want to have a DHCP server running on the bridge interface to service the virtual network. For example, to use the {{ic|172.20.0.1/16}} subnet with [[dnsmasq]] as the DHCP server:
 # ip addr add 172.20.0.1/16 dev br0
 # ip link set br0 up
 # dnsmasq --interface&#61;br0 --bind-interfaces --dhcp-range&#61;172.20.0.2,172.20.255.254
}}

==== Internal networking ====

If you do not give the bridge an IP address and add an [[iptables]] rule to drop all traffic to the bridge in the INPUT chain, then the virtual machines will be able to talk to each other, but not to the physical host or to the outside network. This configuration is called ''internal networking'' by other virtualization software such as [[VirtualBox]]. You will need to either assign static IP addresses to the virtual machines or run a DHCP server on one of them.

By default, iptables would drop packets in the bridge network. You may need to use an iptables rule like the following to allow packets in a bridged network:

 # iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT

==== Bridged networking using qemu-bridge-helper ====

{{Out of date|The /etc files are missing as of April 2016, see {{Bug|46791}}.|section=Qemu-bridge-helper broken QENU 2.5.0-1}}

{{Note|This method is available since QEMU 1.1, see http://wiki.qemu.org/Features/HelperNetworking.}}

This method does not require a start-up script and readily accommodates multiple taps and multiple bridges. It uses the {{ic|/usr/lib/qemu/qemu-bridge-helper}} binary, which allows creating tap devices on an existing bridge.

{{Tip|See [[Network bridge]] for information on creating a bridge.}}

First, copy {{ic|/etc/qemu/bridge.conf.sample}} to {{ic|/etc/qemu/bridge.conf}}. Now modify {{ic|/etc/qemu/bridge.conf}} to contain the names of all bridges to be used by QEMU:

{{hc|/etc/qemu/bridge.conf|
allow ''bridge0''
allow ''bridge1''
...}}

Now start the VM. The most basic usage would be:

 $ qemu-system-i386 -net nic -net bridge,br=''bridge0'' ''[...]''

With multiple taps, the most basic usage requires specifying the VLAN for all additional NICs:

 $ qemu-system-i386 -net nic -net bridge,br=''bridge0'' -net nic,vlan=1 -net bridge,vlan=1,br=''bridge1'' ''[...]''

==== Creating bridge manually ====

{{Poor writing|This section needs serious cleanup and may contain out-of-date information.}}

{{Tip|Since QEMU 1.1, the [http://wiki.qemu.org/Features/HelperNetworking network bridge helper] can set tun/tap up for you without the need for additional scripting. See [[#Bridged networking using qemu-bridge-helper]].}}

The following describes how to bridge a virtual machine to a host interface such as {{ic|eth0}}, which is probably the most common configuration. This configuration makes it appear that the virtual machine is located directly on the external network, on the same Ethernet segment as the physical host machine.

We will replace the normal Ethernet adapter with a bridge adapter and bind the normal Ethernet adapter to it.

* Install {{Pkg|bridge-utils}}, which provides {{ic|brctl}} to manipulate bridges.
  
 
* Enable IPv4 forwarding:

 # sysctl net.ipv4.ip_forward=1

To make the change permanent, change {{ic|1=net.ipv4.ip_forward = 0}} to {{ic|1=net.ipv4.ip_forward = 1}} in {{ic|/etc/sysctl.d/99-sysctl.conf}}.
  
 
* Load the {{ic|tun}} module and configure it to be loaded on boot. See [[Kernel modules]] for details.
 
  
* Now create the bridge. See [[Bridge with netctl]] for details. Remember to name your bridge {{ic|br0}}, or change the scripts below to your bridge's name.

* Create the script that QEMU uses to bring up the tap adapter with {{ic|root:kvm}} 750 permissions:
 
{{hc|/etc/qemu-ifup|<nowiki>
#!/bin/sh

echo "Executing /etc/qemu-ifup"
echo "Bringing up $1 for bridged mode..."
sudo /usr/bin/ip link set $1 up promisc on
echo "Adding $1 to br0..."
sudo /usr/bin/brctl addif br0 $1
sleep 2
</nowiki>}}
  
* Create the script that QEMU uses to bring down the tap adapter in {{ic|/etc/qemu-ifdown}} with {{ic|root:kvm}} 750 permissions:
 
{{hc|/etc/qemu-ifdown|<nowiki>
#!/bin/sh

echo "Executing /etc/qemu-ifdown"
sudo /usr/bin/ip link set $1 down
sudo /usr/bin/brctl delif br0 $1
sudo /usr/bin/ip link delete dev $1
</nowiki>}}
 
* Use {{ic|visudo}} to add the following to your {{ic|sudoers}} file:

{{bc|<nowiki>
Cmnd_Alias      QEMU=/usr/bin/ip,/usr/bin/modprobe,/usr/bin/brctl
%kvm    ALL=NOPASSWD: QEMU
</nowiki>}}
* Make sure the user(s) wishing to use this new functionality are in the {{ic|kvm}} group. Exit and log in again if necessary.
 
  
 
* You launch QEMU using the following {{ic|run-qemu}} script:
 
 
{{hc|run-qemu|<nowiki>
#!/bin/bash
USERID=$(whoami)

# Get the name of the newly created TAP device; see https://bbs.archlinux.org/viewtopic.php?pid=1285079#p1285079
precreation=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)
sudo /usr/bin/ip tuntap add user $USERID mode tap
postcreation=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)
IFACE=$(comm -13 <(echo "$precreation") <(echo "$postcreation"))

# This line creates a random MAC address. The downside is that the DHCP server will assign a different IP address each time.
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))
# Instead, uncomment and edit this line to set a static MAC address. The benefit is that the DHCP server will assign the same IP address.
# macaddr='52:54:be:36:42:a9'

qemu-system-i386 -net nic,macaddr=$macaddr -net tap,ifname="$IFACE" $*

sudo ip link set dev $IFACE down &> /dev/null
sudo ip tuntap del $IFACE mode tap &> /dev/null
</nowiki>}}
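The {{ic|comm -13}} idiom in the script above (comparing sorted listings taken before and after creating the tap) can be seen in isolation:

```shell
# comm -13 prints lines unique to the second input, i.e. the entry that
# appeared between the two sorted snapshots -- here, the new interface.
before=$(printf 'tap0\ntap1\n')
after=$(printf 'tap0\ntap1\ntap2\n')
new_iface=$(comm -13 <(echo "$before") <(echo "$after"))
echo "$new_iface"   # tap2
```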
 
Then to launch a VM, do something like this:

 $ run-qemu -hda ''myvm.img'' -m 512 -vga std
* It is recommended for performance and security reasons to disable the [http://ebtables.netfilter.org/documentation/bridge-nf.html firewall on the bridge]:

{{hc|/etc/sysctl.d/10-disable-firewall-on-bridge.conf|<nowiki>
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
</nowiki>}}

Run {{ic|sysctl -p /etc/sysctl.d/10-disable-firewall-on-bridge.conf}} to apply the changes immediately.
  
See the [http://wiki.libvirt.org/page/Networking#Creating_network_initscripts libvirt wiki] and [https://bugzilla.redhat.com/show_bug.cgi?id=512206 Fedora bug 512206]. If sysctl reports errors about non-existent files during boot, make the {{ic|bridge}} module load at boot. See [[Kernel modules#Loading]].
  
Alternatively, you can configure [[iptables]] to allow all traffic to be forwarded across the bridge by adding a rule like this:

 -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
  
If the bridge is given an IP address and traffic destined for it is allowed, but no "real" interface (e.g. eth0) is also connected to the bridge, then the virtual machines will be able to talk to each other and the physical host.  However, they will not be able to talk to anything on the external network, provided that you do not set up IP masquerading on the physical host.  This configuration is called "host-only" networking by other virtualization software such as [[VirtualBox]].
+
==== Network sharing between physical device and a Tap device through iptables ====
  
You may want to have a DHCP server running on the bridge interface to service the virtual network.  For example, to use the 172.20.0.1/16 subnet with [[Dnsmasq]] as the DHCP server:
+
{{Merge|Internet_sharing|Duplication, not specific to QEMU.}}
  
# ip addr add 172.20.0.1/16 dev br0
+
Bridged networking works fine between a wired interface (Eg. eth0), and it's easy to setupHowever if the host gets connected to the network through a wireless device, then bridging is not possible.
  # ip link set br0 up
+
# dnsmasq --interface=br0 --bind-interfaces --dhcp-range=172.20.0.2,172.20.255.254
+
  
==== Internal networking ====
+
See [[Network bridge#Wireless interface on a bridge]] as a reference.
  
If you do not give the bridge an IP address and add an [[Iptables|iptables]] rule to drop all traffic to the bridge in the INPUT chain, then the virtual machines will be able to talk to each other, but not to the physical host or to the outside network.  This configuration is called "internal" networking by other virtualization software such as [[VirtualBox]].  You will need to either assign static IP addresses to the virtual machines or run a DHCP server on one of them.
+
One way to overcome that is to setup a tap device with a static IP, making linux automatically handle the routing for it, and then forward traffic between the tap interface and the device connected to the network through iptables rules.
  
==== Link-level address caveat ====
See [[Internet sharing]] as a reference: there you can find what is needed to share the network between devices, including tap and tun ones. The following only hints further at some of the required host configuration. As indicated in the reference, the client needs to be configured with a static IP, using the IP assigned to the tap interface as the gateway. The caveat is that the DNS servers on the client might need to be manually edited if they change when switching from one host device connected to the network to another.

By giving the {{ic|-net nic}} argument to QEMU, it will, by default, assign the virtual machine a network interface with the link-level address {{ic|52:54:00:12:34:56}}. However, when using bridged networking with multiple virtual machines, it is essential that each virtual machine has a unique link-level (MAC) address on the virtual machine side of the tap device. Otherwise, the bridge will not work correctly, because it will receive packets from multiple sources that have the same link-level address. This problem occurs even if the tap devices themselves have unique link-level addresses, because the source link-level address is not rewritten as packets pass through the tap device.
  
To solve this problem, the last 8 digits of the link-level address of the virtual NICs should be randomized, as in the script above, to make sure that each virtual machine has a unique link-level address.
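As an illustration, one way to generate such a unique address is to keep QEMU's {{ic|52:54}} prefix and randomize the remaining four octets (a sketch using {{ic|od}} and {{ic|awk}}, not the exact script referenced above):

```shell
# Build a MAC address with QEMU's 52:54 prefix and four random octets,
# so each virtual machine gets a unique link-level address on the bridge.
macaddr=$(od -An -N4 -tx1 /dev/urandom |
    awk '{printf "52:54:%s:%s:%s:%s", $1, $2, $3, $4}')
echo "$macaddr"
# It could then be passed to QEMU, e.g.:
#   qemu-system-i386 -net nic,macaddr="$macaddr" -net tap ...
```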
To allow IP forwarding on every boot, one needs to add the following lines to a sysctl configuration file inside {{ic|/etc/sysctl.d}}:
 net.ipv4.ip_forward = 1
 net.ipv6.conf.default.forwarding = 1
 net.ipv6.conf.all.forwarding = 1
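These settings take effect at the next boot; to apply them immediately, {{ic|sysctl --system}} can be run as root to reload all files under {{ic|/etc/sysctl.d}}. The current state of the switch can be read back through procfs at any time, for example:

```shell
# Read the IPv4 forwarding switch (no root needed);
# prints 1 when forwarding is enabled, 0 otherwise.
cat /proc/sys/net/ipv4/ip_forward
```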
 
The iptables rules can look like:

 # Forwarding from/to outside
 iptables -A FORWARD -i ${INT} -o ${EXT_0} -j ACCEPT
 iptables -A FORWARD -i ${INT} -o ${EXT_1} -j ACCEPT
 iptables -A FORWARD -i ${INT} -o ${EXT_2} -j ACCEPT
 iptables -A FORWARD -i ${EXT_0} -o ${INT} -j ACCEPT
 iptables -A FORWARD -i ${EXT_1} -o ${INT} -j ACCEPT
 iptables -A FORWARD -i ${EXT_2} -o ${INT} -j ACCEPT

 # NAT/Masquerade (network address translation)
 iptables -t nat -A POSTROUTING -o ${EXT_0} -j MASQUERADE
 iptables -t nat -A POSTROUTING -o ${EXT_1} -j MASQUERADE
 iptables -t nat -A POSTROUTING -o ${EXT_2} -j MASQUERADE
The above supposes there are 3 devices connected to the network sharing traffic with one internal device, where for example:

 INT=tap0
 EXT_0=eth0
 EXT_1=wlan0
 EXT_2=tun0

This forwarding allows sharing wired and wireless connections with the tap device.

The forwarding rules shown are stateless and meant for pure forwarding. One could think of restricting specific traffic, putting a firewall in place to protect the guest and others. However, a firewall would decrease networking performance, while a simple bridge does not include any of that.

Bonus: whether the connection is wired or wireless, if one gets connected through a VPN to a remote site with a tun device, then, supposing the tun device opened for that connection is {{ic|tun0}} and the prior iptables rules are applied, the remote connection is also shared with the guest. This avoids the need for the guest to also open a VPN connection. Again, as the guest networking needs to be static, if connecting the host remotely this way one will most probably need to edit the DNS servers on the guest.
  
 
=== Networking with VDE2 ===

{{Poor writing|This section needs serious cleanup and may contain out-of-date information.}}

==== What is VDE? ====

VDE stands for Virtual Distributed Ethernet. It started as an enhancement of [[User-mode Linux|uml]]_switch. It is a toolbox to manage virtual networks.
  
The idea is to create virtual switches, which are basically sockets, and to "plug" both physical and virtual machines into them. The configuration we show here is quite simple; however, VDE is much more powerful than this: it can plug virtual switches together, run them on different hosts and monitor the traffic in the switches. You are invited to read [http://wiki.virtualsquare.org/wiki/index.php/Main_Page the documentation of the project].
  
 
The advantage of this method is that you do not have to add sudo privileges to your users. Regular users should not be allowed to run modprobe.
  
 
==== Basics ====
VDE support can be [[pacman|installed]] via the {{Pkg|vde2}} package in the [[official repositories]].
  
In our config, we use tun/tap to create a virtual interface on the host. Load the {{ic|tun}} module (see [[Kernel modules]] for details):
  
 
 # modprobe tun
 
Now create the virtual switch:
  
 # vde_switch -tap tap0 -daemon -mod 660 -group users
  
This line creates the switch, creates {{ic|tap0}}, "plugs" it, and allows the users of the group {{ic|users}} to use it.
  
The interface is plugged in but not configured yet. To configure it, run this command:
  
 
 # ip addr add 192.168.100.254/24 dev tap0
  
Now, you just have to run KVM with these {{ic|-net}} options as a normal user:
  
 $ qemu-system-i386 -net nic -net vde -hda ''[...]''
  
Configure networking for your guest as you would do in a physical network.
  
{{Tip|You might want to set up NAT on the tap device to access the internet from the virtual machine. See [[Internet sharing#Enable NAT]] for more information.}}
  
 
==== Startup scripts ====
  
Example of main script starting VDE:
  
{{hc|/etc/systemd/scripts/qemu-network-env|<nowiki>
#!/bin/sh
# QEMU/VDE network environment preparation script
 
TAP_MASK=24
TAP_NETWORK=192.168.100.0

# Host interface
 
        echo -n "Starting VDE network for QEMU: "
  
        # If you want tun kernel module to be loaded by script uncomment here
 
#modprobe tun 2>/dev/null
## Wait for the module to be loaded
#while ! lsmod | grep -q "^tun"; do echo "Waiting for tun device"; sleep 1; done
  
 
        # Start tap switch
        vde_switch -tap "$TAP_DEV" -daemon -mod 660 -group users
  
 
        # Bring tap interface up
  
 
        # Kill VDE switch
        pgrep -f vde_switch | xargs kill -TERM
 
        ;;
  restart|reload)
 
</nowiki>}}
  
Example of systemd service using the above script:
  
{{hc|/etc/systemd/system/qemu-network-env.service|<nowiki>
 
[Unit]
Description=Manage VDE Switch
 
[Service]
Type=oneshot
ExecStart=/etc/systemd/scripts/qemu-network-env start
ExecStop=/etc/systemd/scripts/qemu-network-env stop
RemainAfterExit=yes
  
</nowiki>}}
  
Change permissions for {{ic|qemu-network-env}} to be executable:

 # chmod u+x /etc/systemd/scripts/qemu-network-env

You can [[start]] {{ic|qemu-network-env.service}} as usual.
  
====Alternative method====

If the above method does not work or you do not want to mess with kernel configs, TUN, dnsmasq, and iptables, you can do the following for the same result.

 # vde_switch -daemon -mod 660 -group users
 # slirpvde --dhcp --daemon

Then, to start the VM with a connection to the network of the host:

 $ qemu-system-i386 -net nic,macaddr=52:54:00:00:EE:03 -net vde ''disk_image''

=== VDE2 Bridge ===

Based on [http://selamatpagicikgu.wordpress.com/2011/06/08/quickhowto-qemu-networking-using-vde-tuntap-and-bridge/ quickhowto: qemu networking using vde, tun/tap, and bridge] graphic. Any virtual machine connected to vde is externally exposed. For example, each virtual machine can receive DHCP configuration directly from your ADSL router.

==== Basics ====

Remember that you need the {{ic|tun}} module and the {{Pkg|bridge-utils}} package.

Create the vde2/tap device:

 # vde_switch -tap tap0 -daemon -mod 660 -group users
 # ip link set tap0 up

Create the bridge:

 # brctl addbr br0

Add devices:

 # brctl addif br0 eth0
 # brctl addif br0 tap0

And configure the bridge interface:

 # dhcpcd br0

==== Startup scripts ====

All devices must be set up, and only the bridge needs an IP address. For physical devices on the bridge (e.g. {{ic|eth0}}), this can be done with [[netctl]] using a custom Ethernet profile with:

{{hc|/etc/netctl/ethernet-noip|<nowiki>
Description='A more versatile static Ethernet connection'
Interface=eth0
Connection=ethernet
IP=no
</nowiki>}}

The following custom systemd service can be used to create and activate a VDE2 tap interface for use by the {{ic|users}} group.

{{hc|/etc/systemd/system/vde2@.service|<nowiki>
[Unit]
Description=Network Connectivity for %i
Wants=network.target
Before=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/vde_switch -tap %i -daemon -mod 660 -group users
ExecStart=/usr/bin/ip link set dev %i up
ExecStop=/usr/bin/ip addr flush dev %i
ExecStop=/usr/bin/ip link set dev %i down

[Install]
WantedBy=multi-user.target
</nowiki>}}

And finally, you can create the [[Bridge with netctl|bridge interface with netctl]].
  
 
== Graphics ==
 
QEMU can use the following different graphic outputs: {{ic|std}}, {{ic|qxl}}, {{ic|vmware}}, {{ic|virtio}}, {{ic|cirrus}} and {{ic|none}}.

=== std ===

With {{ic|-vga std}} you can get a resolution of up to 2560 x 1600 pixels without requiring guest drivers. This is the default since QEMU 2.2.

=== qxl ===

QXL is a paravirtual graphics driver with 2D support. To use it, pass the {{ic|-vga qxl}} option and install drivers in the guest. You may want to use SPICE for improved graphical performance when using QXL.

On Linux guests, the {{ic|qxl}} and {{ic|bochs_drm}} kernel modules must be loaded in order to gain decent performance.

==== SPICE ====

The [http://spice-space.org/ SPICE project] aims to provide a complete open source solution for remote access to virtual machines in a seamless way.

SPICE can only be used when using QXL as the graphical output.

The following is an example of booting with SPICE as the remote desktop protocol:

 $ qemu-system-i386 -vga qxl -spice port=5930,disable-ticketing -chardev spicevm

Connect to the guest by using a SPICE client. At the moment {{Pkg|spice-gtk3}} is recommended; however, other [http://www.spice-space.org/download.html clients], including ones for other platforms, are available:

 $ spicy -h 127.0.0.1 -p 5930

For improved support for multiple monitors, clipboard sharing, etc., the following packages should be installed on the guest:

* {{AUR|spice-vdagent}}: Spice agent xorg client that enables copy and paste between client and X-session and more.
* {{AUR|xf86-video-qxl}}, {{AUR|xf86-video-qxl-git}}: Xorg X11 qxl video driver.
* For other operating systems, see the Guest section on the [http://www.spice-space.org/download.html SPICE-Space download] page.

=== vmware ===

Although it is a bit buggy, it performs better than std and cirrus. Install the VMware drivers ({{Pkg|xf86-video-vmware}} and {{Pkg|xf86-input-vmmouse}} for Arch Linux guests).

=== virtio ===

{{ic|virtio-vga}} / {{ic|virtio-gpu}} is a paravirtual 3D graphics driver based on [https://virgil3d.github.io/ virgl]. It is currently a work in progress, supporting only very recent (>= 4.4) Linux guests.

=== cirrus ===

The cirrus graphical adapter was the default [http://wiki.qemu.org/ChangeLog/2.2#VGA before 2.2]. It [https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/ should not] be used on modern systems.

=== none ===

This is like a PC that has no VGA card at all. You would not even be able to access it with the {{ic|-vnc}} option. Also, this is different from the {{ic|-nographic}} option, which lets QEMU emulate a VGA card but disables the SDL display.

=== vnc ===

Given that you used the {{ic|-nographic}} option, you can add the {{ic|-vnc display}} option to have QEMU listen on {{ic|display}} and redirect the VGA display to the VNC session. There is an example of this in the [[#Starting QEMU virtual machines on boot]] section's example configs.

 $ qemu-system-i386 -vga std -nographic -vnc :0
 $ gvncviewer :0

When using VNC, you might experience keyboard problems described (in gory details) [https://www.berrange.com/posts/2010/07/04/more-than-you-or-i-ever-wanted-to-know-about-virtual-keyboard-handling/ here]. The solution is ''not'' to use the {{ic|-k}} option on QEMU, and to use {{ic|gvncviewer}} from {{Pkg|gtk-vnc}}. See also [http://www.mail-archive.com/libvir-list@redhat.com/msg13340.html this] message posted on libvirt's mailing list.
+
  
=== Virtual machine runs too slowly ===
+
== Installing virtio drivers ==
  
There are a number of techniques that you can use to improve the performance if your virtual machine. For example:
+
QEMU offers guests the ability to use paravirtualized block and network devices using the [http://wiki.libvirt.org/page/Virtio virtio] drivers, which provide better performance and lower overhead.
  
* Use KVM if possible : add -machine=pc,accel=kvm to the qemu start command you use.
+
* A virtio block device requires the option {{Ic|-drive}} instead of the simple {{Ic|-hd*}} plus {{Ic|1=if=virtio}}:
* Make sure you have assigned the virtual machine enough memory. By default, QEMU only assigns 128MiB of memory to each virtual machine. Use the {{ic|-m}} option to assign more memory. For example, {{ic|-m 1024}} runs a virtual machine with 1024MiB of memory.
+
$ qemu-system-i386 -boot order=c -drive file=''disk_image'',if=virtio
* If the host machine has multiple CPUs, assign the guest more CPUs using the {{ic|-smp}} option.
+
 
* Use the {{ic|-cpu host}} option to make QEMU emulate the host's exact CPUIf you don't do this, it may be trying to emulate a more generic CPU.
+
{{Note|{{Ic|1=-boot order=c}} is absolutely necessary when you want to boot from it. There is no auto-detection as with {{Ic|-hd*}}.}}
* If supported by drivers in the guest operating system, use [http://wiki.libvirt.org/page/Virtio virtio] for network and/or block devices. For example:
+
 
  $ qemu -net nic,model=virtio -net tap,if=tap0,script=no -drive file=mydisk.raw,media=disk,if=virtio
+
* Almost the same goes for the network:
* [[#Tap networking with QEMU|Use TAP devices]] instead of user-mode networking.
+
$ qemu-system-i386 -net nic,model=virtio
* If the guest OS is doing heavy writing to its disk, you may benefit from certain mount options on the host's filesystem. For example, you can mount an [[Ext4|ext4 filesystem]] with the option {{ic|<nowiki>barrier=0</nowiki>}}.  You should read the documentation for any options that you change, since sometimes performance-enhancing options for filesystems come at the cost of data integrity.
+
 
* If you are running multiple virtual machines concurrently that all have the same operating system installed, you can save memory by enabling [[wikipedia:Kernel_SamePage_Merging_(KSM)|kernel same-page merging]]:
+
{{Note|This will only work if the guest machine has drivers for virtio devices. Linux does, and the required drivers are included in Arch Linux, but there is no guarantee that virtio devices will work with other operating systems.}}
# echo 1 > /sys/kernel/mm/ksm/run
+
 
* In some cases, memory can be reclaimed from running virtual machines by running a memory ballooning driver in the guest operating system and launching QEMU with the {{ic|-balloon virtio}} option.
+
=== Preparing an (Arch) Linux guest ===
 +
 
 +
To use virtio devices after an Arch Linux guest has been installed, the following modules must be loaded in the guest: {{Ic|virtio}}, {{Ic|virtio_pci}}, {{Ic|virtio_blk}}, {{Ic|virtio_net}}, and {{Ic|virtio_ring}}. For 32-bit guests, the specific "virtio" module is not necessary.
 +
 
 +
If you want to boot from a virtio disk, the initial ramdisk must contain the necessary modules. By default, this is handled by [[mkinitcpio]]'s {{ic|autodetect}} hook. Otherwise use the {{ic|MODULES}} array in {{ic|/etc/mkinitcpio.conf}} to include the necessary modules and rebuild the initial ramdisk.
 +
 
 +
{{hc|/etc/mkinitcpio.conf|2=
 +
MODULES="virtio virtio_blk virtio_pci virtio_net"}}
 +
 
 +
Virtio disks are recognized with the prefix {{ic|'''v'''}} (e.g. {{ic|'''v'''da}}, {{ic|'''v'''db}}, etc.); therefore, changes must be made in at least {{ic|/etc/fstab}} and {{ic|/boot/grub/grub.cfg}} when booting from a virtio disk.
 +
 
{{Tip|When referencing disks by [[UUID]] in both {{ic|/etc/fstab}} and bootloader, nothing has to be done.}}
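If devices are referenced by name instead, the {{ic|sd*}} to {{ic|vd*}} rename can be done with a quick {{ic|sed}} pass. A minimal, illustrative sketch on a sample file (the entries are made up; review the result before editing the real {{ic|/etc/fstab}}):

```shell
# Create a sample fstab and rewrite /dev/sdX device names to /dev/vdX.
printf '/dev/sda1 / ext4 defaults 0 1\n/dev/sda2 none swap defaults 0 0\n' > /tmp/fstab.sample
sed -i 's|/dev/sd|/dev/vd|g' /tmp/fstab.sample
cat /tmp/fstab.sample
```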
 +
 
 +
Further information on paravirtualization with KVM can be found [http://www.linux-kvm.org/page/Boot_from_virtio_block_device here].
 +
 
 +
You might also want to install {{Pkg|qemu-guest-agent}} to implement support for QMP commands that will enhance the hypervisor management capabilities. After installing the package you can enable and start the {{ic|qemu-ga.service}}.
 +
 
 +
=== Preparing a Windows guest ===
 +
 
 +
{{Note|1=The only (reliable) way to upgrade a Windows 8.1 guest to Windows 10 seems to be to temporarily choose cpu core2duo,nx for the install [http://ubuntuforums.org/showthread.php?t=2289210]. After the install, you may revert to other cpu settings (8/8/2015).}}
 +
 
 +
==== Block device drivers ====
 +
 
 +
===== New Install of Windows =====
 +
 
 +
Windows does not come with the virtio drivers. Therefore, you will need to load them during installation. There are basically two ways to do this: via Floppy Disk or via ISO files. Both images can be downloaded from the [https://fedoraproject.org/wiki/Windows_Virtio_Drivers Fedora repository].
 +
 
 +
The floppy disk option is difficult because you will need to press F6 (Shift-F6 on newer Windows) at the very beginning of powering on the QEMU. This is difficult since you need time to connect your VNC console window. You can attempt to add a delay to the boot sequence. See {{ic|man qemu-system}} for more details about applying a delay at boot.
 +
 
 +
The ISO option to load drivers is the preferred way, but it is available only on Windows Vista and Windows Server 2008 and later. The procedure is to load the image with virtio drivers in an additional cdrom device along with the primary disk device and Windows installer:
 +
 
 +
$ qemu-system-i386 ... \
 +
-drive file=''/path/to/primary/disk.img'',index=0,media=disk,if=virtio \
 +
  -drive file=''/path/to/installer.iso'',index=2,media=cdrom \
 +
-drive file=''/path/to/virtio.iso'',index=3,media=cdrom \
 +
...
 +
 
 +
During the installation, the Windows installer will ask you for your Product key and perform some additional checks. When it gets to the "Where do you want to install Windows?" screen, it will give a warning that no disks are found. Follow the example instructions below (based on Windows Server 2012 R2 with Update).
 +
 
 +
* Select the option {{ic|Load Drivers}}.
 +
* Uncheck the box for "Hide drivers that aren't compatible with this computer's hardware".
 +
* Click the Browse button and open the CDROM for the virtio iso, usually named "virtio-win-XX".
 +
* Now browse to {{ic|E:\viostor\[your-os]\amd64}}, select it, and press OK.
 +
* Click Next
 +
 
 +
You should now see your virtio disk(s) listed here, ready to be selected, formatted and installed to.
 +
 
 +
===== Change Existing Windows VM to use virtio =====
 +
Modifying an existing Windows guest for booting from virtio disk is a bit tricky.
 +
 
 +
You can download the virtio disk driver from the [https://fedoraproject.org/wiki/Windows_Virtio_Drivers Fedora repository].
 +
 
 +
Now you need to create a new disk image, which fill force Windows to search for the driver. For example:
 +
 
 +
  $ qemu-img create -f qcow2 ''fake.qcow2'' 1G
 +
 
 +
Run the original Windows guest (with the boot disk still in IDE mode) with the fake disk (in virtio mode) and a CD-ROM with the driver.
 +
 
 +
$ qemu-system-i386 -m 512 -vga std -drive file=''windows_disk_image'',if=ide -drive file=''fake.qcow2'',if=virtio -cdrom virtio-win-0.1-81.iso
 +
 
 +
Windows will detect the fake disk and try to find a driver for it. If it fails, go to the ''Device Manager'', locate the SCSI drive with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not forget to select the checkbox which says to search for directories recursively.
 +
 
 +
When the installation is successful, you can turn off the virtual machine and launch it again, now with the boot disk attached in virtio mode:
 +
 
 +
  $ qemu-system-i386 -m 512 -vga std -drive file=''windows_disk_image'',if=virtio
 +
 
 +
{{Note|If you encounter the Blue Screen of Death, make sure you did not forget the {{ic|-m}} parameter, and that you do not boot with virtio instead of ide for the system drive before drivers are installed.}}
 +
 
 +
==== Network drivers ====
 +
 
 +
Installing virtio network drivers is a bit easier, simply add the {{ic|-net}} argument as explained above.
 +
 
 +
  $ qemu-system-i386 -m 512 -vga std -drive file=''windows_disk_image'',if=virtio -net nic,model=virtio -cdrom virtio-win-0.1-74.iso
 +
 
 +
Windows will detect the network adapter and try to find a driver for it. If it fails, go to the ''Device Manager'', locate the network adapter with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not forget to select the checkbox which says to search for directories recursively.
 +
 
 +
=== Preparing a FreeBSD guest ===

Install the {{ic|emulators/virtio-kmod}} port if you are using FreeBSD 8.3 or later, up until 10.0-CURRENT where the drivers are included in the kernel. After installation, add the following to your {{ic|/boot/loader.conf}} file:

{{bc|<nowiki>
virtio_load="YES"
virtio_pci_load="YES"
virtio_blk_load="YES"
if_vtnet_load="YES"
virtio_balloon_load="YES"
</nowiki>}}

Then modify your {{ic|/etc/fstab}} by doing the following:

{{bc|<nowiki>
sed -i .bak "s/ada/vtbd/g" /etc/fstab
</nowiki>}}

And verify that {{ic|/etc/fstab}} is consistent. If anything goes wrong, just boot into a rescue CD and copy {{ic|/etc/fstab.bak}} back to {{ic|/etc/fstab}}.
== Tips and tricks ==

=== Starting QEMU virtual machines on boot ===

==== With libvirt ====

If a virtual machine is set up with [[libvirt]], it can be configured through the virt-manager GUI to start at host boot by going to the Boot Options for the virtual machine and selecting "Start virtual machine on host boot up".

==== Custom script ====

To run QEMU VMs on boot, you can use the following systemd unit and configs.

{{hc|/etc/systemd/system/qemu@.service|<nowiki>
[Unit]
Description=QEMU virtual machine

[Service]
Environment="type=system-x86_64" "haltcmd=kill -INT $MAINPID"
EnvironmentFile=/etc/conf.d/qemu.d/%i
ExecStart=/usr/bin/env qemu-${type} -name %i -nographic $args
ExecStop=/bin/sh -c ${haltcmd}
TimeoutStopSec=30
KillMode=none

[Install]
WantedBy=multi-user.target
</nowiki>}}

{{Note|According to the {{ic|systemd.service(5)}} and {{ic|systemd.kill(5)}} man pages it is necessary to use the {{ic|1=KillMode=none}} option. Otherwise the main qemu process will be killed immediately after the {{ic|ExecStop}} command quits (it simply echoes one string) and your guest system will not be able to shut down correctly.}}

Then create per-VM configuration files, named {{ic|/etc/conf.d/qemu.d/''vm_name''}}, with the following variables set:

; type
: QEMU binary to call. If specified, it will be prepended with {{ic|/usr/bin/qemu-}} and that binary will be used to start the VM. E.g. you can boot {{ic|qemu-system-arm}} images with {{ic|1=type="system-arm"}}.
; args
: QEMU command line to start with. It will always be prepended with {{ic|-name ${vm} -nographic}}.
; haltcmd
: Command to shut down a VM safely. For example, use {{ic|-monitor telnet:..}} and power off the VM via ACPI by sending {{ic|system_powerdown}} to the monitor. SSH or other methods work as well.

Example configs:

{{hc|/etc/conf.d/qemu.d/one|<nowiki>
type="system-x86_64"

args="-enable-kvm -m 512 -hda /dev/mapper/vg0-vm1 -net nic,macaddr=DE:AD:BE:EF:E0:00 \
 -net tap,ifname=tap0 -serial telnet:localhost:7000,server,nowait,nodelay \
 -monitor telnet:localhost:7100,server,nowait,nodelay -vnc :0"

haltcmd="echo 'system_powerdown' | nc localhost 7100" # or netcat/ncat

# You can use other ways to shut down your VM correctly
#haltcmd="ssh powermanager@vm1 sudo poweroff"
</nowiki>}}

{{hc|/etc/conf.d/qemu.d/two|<nowiki>
args="-enable-kvm -m 512 -hda /srv/kvm/vm2.img -net nic,macaddr=DE:AD:BE:EF:E0:01 \
 -net tap,ifname=tap1 -serial telnet:localhost:7001,server,nowait,nodelay \
 -monitor telnet:localhost:7101,server,nowait,nodelay -vnc :1"

haltcmd="echo 'system_powerdown' | nc localhost 7101"
</nowiki>}}

To set which virtual machines will start on boot-up, [[enable]] the {{ic|qemu@''vm_name''.service}} systemd unit.
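The example configs use hand-picked MAC addresses; every VM on the same network segment needs a unique one. A minimal sketch for generating one, assuming a bash-like shell with {{ic|RANDOM}} (the {{ic|52:54:00}} prefix is the conventional QEMU/KVM range, so clashes with physical NICs are avoided; {{ic|gen_macaddr}} is just an illustrative helper name):

```shell
# Sketch: print a random MAC address in the conventional QEMU/KVM
# 52:54:00 prefix, suitable for -net nic,macaddr=...
gen_macaddr() {
    printf '52:54:00:%02x:%02x:%02x\n' \
        $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))
}
```

Usage: {{ic|1=macaddr=$(gen_macaddr)}}, then substitute it into the {{ic|1=-net nic,macaddr=...}} part of {{ic|args}}.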
=== Mouse integration ===

To prevent the mouse from being grabbed when clicking on the guest operating system's window, add the option {{ic|-usbdevice tablet}}. This means QEMU is able to report the mouse position without having to grab the mouse. This also overrides PS/2 mouse emulation when activated. For example:

 $ qemu-system-i386 -hda ''disk_image'' -m 512 -vga std -usbdevice tablet

If that does not work, try the tip at [[#Mouse cursor is jittery or erratic]].
=== Pass-through host USB device ===

To access a physical USB device connected to the host from the VM, you can start QEMU with the following option:

 $ qemu-system-i386 -usbdevice host:''vendor_id'':''product_id'' ''disk_image''

You can find the {{ic|vendor_id}} and {{ic|product_id}} of your device with the {{ic|lsusb}} command.

{{Note|If you encounter permission errors when running QEMU, see [[Udev#Writing udev rules]] for information on how to set permissions of the device.}}
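Picking the IDs out of {{ic|lsusb}}'s standard output format ({{ic|Bus 001 Device 002: ID 1d6b:0002 ...}}) can be scripted; a sketch ({{ic|usb_ids}} is just an illustrative helper name):

```shell
# Sketch: print the vendor_id:product_id pair from each line of
# lsusb output, i.e. the field following the literal "ID" token.
usb_ids() {
    awk '{ for (i = 1; i < NF; i++) if ($i == "ID") print $(i + 1) }'
}

# Usage: lsusb | usb_ids
```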
=== Enabling KSM ===

Kernel Samepage Merging (KSM) is a feature of the Linux kernel that allows for an application to register with the kernel to have its pages merged with other processes that also register to have their pages merged. The KSM mechanism allows for guest virtual machines to share pages with each other. In an environment where many of the guest operating systems are similar, this can result in significant memory savings.

To enable KSM, simply run

 # echo 1 > /sys/kernel/mm/ksm/run

To make it permanent, you can use [[systemd#Temporary files|systemd's temporary files]]:

{{hc|/etc/tmpfiles.d/ksm.conf|
w /sys/kernel/mm/ksm/run - - - - 1
}}

If KSM is running, and there are pages to be merged (i.e. at least two similar VMs are running), then {{ic|/sys/kernel/mm/ksm/pages_shared}} should be non-zero. See https://www.kernel.org/doc/Documentation/vm/ksm.txt for more information.

{{Tip|An easy way to see how well KSM is performing is to simply print the contents of all the files in that directory: {{bc|$ grep . /sys/kernel/mm/ksm/*}}}}
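The same counters can be read with a small helper; a sketch ({{ic|ksm_stats}} is an illustrative name, and the directory argument exists only so the function can be exercised against a copy of the files — it defaults to the real sysfs path):

```shell
# Sketch: print every KSM counter file under the given directory
# (defaults to the kernel's /sys/kernel/mm/ksm) as "name: value".
ksm_stats() {
    dir=${1:-/sys/kernel/mm/ksm}
    [ -d "$dir" ] || { echo "KSM not available" >&2; return 1; }
    for f in "$dir"/*; do
        printf '%s: %s\n' "${f##*/}" "$(cat "$f")"
    done
}
```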
=== Multi-monitor support ===

The Linux QXL driver supports four heads (virtual screens) by default. This can be changed via the {{ic|1=qxl.heads=N}} kernel parameter.

The default VGA memory size for QXL devices is 16M (VRAM size is 64M). This is not sufficient if you would like to enable two 1920x1200 monitors, since that requires 2 × 1920 × 4 (color depth) × 1200 = 17.6 MiB VGA memory. This can be changed by replacing {{ic|-vga qxl}} by {{ic|<nowiki>-vga none -device qxl-vga,vgamem_mb=32</nowiki>}}. If you ever increase vgamem_mb beyond 64M, then you also have to increase the {{ic|vram_size_mb}} option.
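The 17.6 MiB figure follows from simple arithmetic (4 bytes per pixel at 32-bit color depth):

```shell
# VGA memory needed for two 1920x1200 heads at 4 bytes per pixel
bytes=$((2 * 1920 * 1200 * 4))
echo "$bytes"                      # 18432000 bytes
echo "$((bytes / 1024 / 1024))"    # 17 MiB (rounded down), so vgamem_mb=32 leaves headroom
```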
=== Copy and paste ===

To have copy and paste between the host and the guest, you need to enable the spice agent communication channel. This requires adding a virtio-serial device to the guest and opening a port for the spice vdagent. It is also required to install the spice vdagent in the guest ({{AUR|spice-vdagent}} for Arch guests, [http://www.spice-space.org/download.html Windows guest tools] for Windows guests). Make sure the agent is running (and for the future, started automatically).

Start QEMU with the following options:

 $ qemu-system-i386 -vga qxl -spice port=5930,disable-ticketing -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent

The {{ic|-device virtio-serial-pci}} option adds the virtio-serial device, {{ic|1=-device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0}} opens a port for the spice vdagent in that device, and {{ic|1=-chardev spicevmc,id=spicechannel0,name=vdagent}} adds a spicevmc chardev for that port.

It is important that the {{ic|1=chardev=}} option of the {{ic|virtserialport}} device matches the {{ic|1=id=}} option given to the {{ic|chardev}} option ({{ic|spicechannel0}} in this example). It is also important that the port name is {{ic|com.redhat.spice.0}}, because that is the namespace where vdagent is looking in the guest. And finally, specify {{ic|1=name=vdagent}} so that spice knows what this channel is for.
=== Windows-specific notes ===

QEMU can run any version of Windows from Windows 95 through Windows 10.

It is possible to run [[Windows PE]] in QEMU.

==== Fast startup ====

For Windows 8 (or later) guests, it is better to disable "Fast Startup" from the Power Options of the Control Panel, as it causes the guest to hang during every other boot.

Fast Startup may also need to be disabled for changes to the {{ic|-smp}} option to be properly applied.

==== Remote Desktop Protocol ====

If you use a MS Windows guest, you might want to use RDP to connect to your guest VM. If you are using a VLAN or are not in the same network as the guest, use:

 $ qemu-system-i386 -nographic -net user,hostfwd=tcp::5555-:3389

Then connect with either {{Pkg|rdesktop}} or {{Pkg|freerdp}} to the guest. For example:

 $ xfreerdp -g 2048x1152 localhost:5555 -z -x lan
== Troubleshooting ==

=== Mouse cursor is jittery or erratic ===

If the cursor jumps around the screen uncontrollably, entering this on the terminal before starting QEMU might help:

 $ export SDL_VIDEO_X11_DGAMOUSE=0

If this helps, you can add this to your {{ic|~/.bashrc}} file.

=== No visible cursor ===

Add {{ic|-show-cursor}} to QEMU's options to see a mouse cursor.

=== Keyboard seems broken or the arrow keys do not work ===

Should you find that some of your keys do not work or "press" the wrong key (in particular, the arrow keys), you likely need to specify your keyboard layout as an option. The keyboard layouts can be found in {{ic|/usr/share/qemu/keymaps}}.

 $ qemu-system-i386 -k ''keymap'' ''disk_image''

=== Virtual machine runs too slowly ===

There are a number of techniques that you can use to improve the performance of your virtual machine. For example:

* Use the {{ic|-cpu host}} option to make QEMU emulate the host's exact CPU. If you do not do this, it may be trying to emulate a more generic CPU.
* If the host machine has multiple CPUs, assign the guest more CPUs using the {{ic|-smp}} option.
* Make sure you have assigned the virtual machine enough memory. By default, QEMU only assigns 128 MiB of memory to each virtual machine. Use the {{ic|-m}} option to assign more memory. For example, {{ic|-m 1024}} runs a virtual machine with 1024 MiB of memory.
* Use KVM if possible: add {{ic|1=-machine type=pc,accel=kvm}} to the QEMU start command you use.
* If supported by drivers in the guest operating system, use [http://wiki.libvirt.org/page/Virtio virtio] for network and/or block devices. For example:
 $ qemu-system-i386 -net nic,model=virtio -net tap,if=tap0,script=no -drive file=''disk_image'',media=disk,if=virtio
* Use TAP devices instead of user-mode networking. See [[#Tap networking with QEMU]].
* If the guest OS is doing heavy writing to its disk, you may benefit from certain mount options on the host's file system. For example, you can mount an [[Ext4|ext4 file system]] with the option {{ic|1=barrier=0}}. You should read the documentation for any options that you change because sometimes performance-enhancing options for file systems come at the cost of data integrity.
* If you have a raw disk image, you may want to disable the cache:
 $ qemu-system-i386 -drive file=''disk_image'',if=virtio,cache=none
* Use the native Linux AIO:
 $ qemu-system-i386 -drive file=''disk_image'',if=virtio,aio=native
* If you are running multiple virtual machines concurrently that all have the same operating system installed, you can save memory by enabling [[wikipedia:Kernel_SamePage_Merging_(KSM)|kernel same-page merging]]:
 # echo 1 > /sys/kernel/mm/ksm/run
* In some cases, memory can be reclaimed from running virtual machines by running a memory ballooning driver in the guest operating system and launching QEMU with the {{ic|-balloon virtio}} option.

See http://www.linux-kvm.org/page/Tuning_KVM for more information.

=== Guest display stretches on window resize ===

To restore the default window size, press {{ic|Ctrl+Alt+u}}.

=== ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy ===

If an error message like this is printed when starting QEMU with the {{ic|-enable-kvm}} option:

 ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy
 failed to initialize KVM: Device or resource busy

that means another [[hypervisor]] is currently running. It is not recommended or possible to run several hypervisors in parallel.

=== libgfapi error message ===

The error message displayed at startup:

 Failed to open module: libgfapi.so.0: cannot open shared object file: No such file or directory

is not a problem, it just means that you are lacking the optional GlusterFS dependency.

=== Kernel panic on LIVE-environments ===

If you start a live-environment (or, more generally, boot a system) you may encounter this:

 [ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown block(0,0)

or some other boot-hindering error (e.g. cannot unpack initramfs, cannot start service foo). Try starting the VM with the {{ic|-m VALUE}} switch and an appropriate amount of RAM; if the RAM is too low, you will probably encounter issues similar to the above.
== See also ==

* [http://qemu.org Official QEMU website]
* [http://www.linux-kvm.org Official KVM website]
* [http://qemu.weilnetz.de/qemu-doc.html QEMU Emulator User Documentation]
* [https://en.wikibooks.org/wiki/QEMU QEMU Wikibook]
* [http://alien.slackbook.org/dokuwiki/doku.php?id=slackware:qemu Hardware virtualization with QEMU] by AlienBOB (last updated in 2008)
* [http://blog.falconindy.com/articles/build-a-virtual-army.html Building a Virtual Army] by Falconindy
* [http://git.qemu.org/?p=qemu.git;a=tree;f=docs Latest docs]
* [http://qemu.weilnetz.de/ QEMU on Windows]
* [[wikipedia:Qemu|Wikipedia]]
* [https://wiki.debian.org/QEMU QEMU - Debian Wiki]
* [https://people.gnome.org/~markmc/qemu-networking.html QEMU Networking on gnome.org]
* [http://bsdwiki.reedmedia.net/wiki/networking_qemu_virtual_bsd_systems.html Networking QEMU Virtual BSD Systems]
* [https://www.gnu.org/software/hurd/hurd/running/qemu.html QEMU on gnu.org]
* [https://wiki.freebsd.org/qemu QEMU on FreeBSD as host]

Latest revision as of 06:59, 2 May 2016


Installation

Install the qemu package, as well as any optional packages needed for your use case.

Graphical front-ends for QEMU

Unlike other virtualization programs such as VirtualBox and VMware, QEMU does not provide a GUI to manage virtual machines (other than the window that appears when running a virtual machine), nor does it provide a way to create persistent virtual machines with saved settings. All parameters to run a virtual machine must be specified on the command line at every launch, unless you have created a custom script to start your virtual machine(s). However, there are several GUI front-ends for QEMU.

Additional front-ends with QEMU support are available for libvirt.

Creating new virtualized system

Creating a hard disk image

Tip: See the QEMU Wikibook for more information on QEMU images.

To run QEMU you will need a hard disk image, unless you are booting a live system from CD-ROM or the network (and not doing so to install an operating system to a hard disk image). A hard disk image is a file which stores the contents of the emulated hard disk.

A hard disk image can be raw, so that it is literally byte-by-byte the same as what the guest sees, and will always use the full capacity of the guest hard drive on the host. This method provides the least I/O overhead, but can waste a lot of space, as not-used space on the guest cannot be used on the host.

Alternatively, the hard disk image can be in a format such as qcow2 which only allocates space to the image file when the guest operating system actually writes to those sectors on its virtual hard disk. The image appears as the full size to the guest operating system, even though it may take up only a very small amount of space on the host system. Using this format instead of raw will likely affect performance.

QEMU provides the qemu-img command to create hard disk images. For example to create a 4 GB image in the raw format:

$ qemu-img create -f raw image_file 4G

You may use -f qcow2 to create a qcow2 disk instead.

Note: You can also simply create a raw image by creating a file of the needed size using dd or fallocate.
Warning: If you store the hard disk images on a Btrfs file system, you should consider disabling Copy-on-Write for the directory before creating any images.

Overlay storage images

You can create a storage image once (the 'backing' image) and have QEMU keep mutations to this image in an overlay image. This allows you to revert to a previous state of this storage image. You could revert by creating a new overlay image at the time you wish to revert, based on the original backing image.

To create an overlay image, issue a command like:

$ qemu-img create -o backing_file=img1.raw,backing_fmt=raw -f qcow2 img1.cow

After that you can run your QEMU VM as usual (see #Running virtualized system):

$ qemu-system-i386 img1.cow

The backing image will then be left intact and mutations to this storage will be recorded in the overlay image file.

When the path to the backing image changes, repair is required.

Warning: The backing image's absolute filesystem path is stored in the (binary) overlay image file. Changing the backing image's path requires some effort.

Make sure that the original backing image's path still leads to this image. If necessary, make a symbolic link at the original path to the new path. Then issue a command like:

$ qemu-img rebase -b /new/img1.raw /new/img1.cow

At your discretion, you may alternatively perform an 'unsafe' rebase where the old path to the backing image is not checked:

$ qemu-img rebase -u -b /new/img1.raw /new/img1.cow

Resizing an image

Warning: Resizing an image containing an NTFS boot file system could make the operating system installed on it unbootable. For full explanation and workaround see [1].

The qemu-img executable has the resize option, which enables easy resizing of a hard drive image. It works for both raw and qcow2. For example, to increase image space by 10 GB, run:

$ qemu-img resize disk_image +10G

After enlarging the disk image, you must use file system and partitioning tools inside the virtual machine to actually begin using the new space. When shrinking a disk image, you must first reduce the allocated file systems and partition sizes using the file system and partitioning tools inside the virtual machine and then shrink the disk image accordingly, otherwise shrinking the disk image will result in data loss!

Preparing the installation media

To install an operating system into your disk image, you need the installation medium (e.g. optical disk, USB-drive, or ISO image) for the operating system. The installation medium should not be mounted because QEMU accesses the media directly.

Tip: If using an optical disk, it is a good idea to first dump the media to a file because this both improves performance and does not require you to have direct access to the devices (that is, you can run QEMU as a regular user without having to change access permissions on the media's device file). For example, if the CD-ROM device node is named /dev/cdrom, you can dump it to a file with the command:
$ dd if=/dev/cdrom of=cd_image.iso

Installing the operating system

This is the first time you will need to start the emulator. To install the operating system on the disk image, you must attach both the disk image and the installation media to the virtual machine, and have it boot from the installation media.

For example on i386 guests, to install from a bootable ISO file as CD-ROM and a raw disk image:

$ qemu-system-i386 -cdrom iso_image -boot order=d -drive file=disk_image,format=raw

See qemu(1) for more information about loading other media types (such as floppy, disk images or physical drives) and #Running virtualized system for other useful options.

After the operating system has finished installing, the QEMU image can be booted directly (see #Running virtualized system).

Warning: By default only 128 MB of memory is assigned to the machine. The amount of memory can be adjusted with the -m switch, for example -m 512M or -m 2G.
Tip:
  • Instead of specifying -boot order=x, some users may feel more comfortable using a boot menu: -boot menu=on, at least during configuration and experimentation.
  • If you need to replace floppies or CDs as part of the installation process, you can use the QEMU machine monitor (press Ctrl+Alt+2 in the virtual machine's window) to remove and attach storage devices to a virtual machine. Type info block to see the block devices, and use the change command to swap out a device. Press Ctrl+Alt+1 to go back to the virtual machine.

Running virtualized system

qemu-system-* binaries (for example qemu-system-i386 or qemu-system-x86_64, depending on guest's architecture) are used to run the virtualized guest. The usage is:

$ qemu-system-i386 options disk_image

Options are the same for all qemu-system-* binaries, see qemu(1) for documentation of all options.

By default, QEMU will show the virtual machine's video output in a window. One thing to keep in mind: when you click inside the QEMU window, the mouse pointer is grabbed. To release it, press Ctrl+Alt.

Warning: QEMU should never be run as root. If you must launch it in a script as root, you should use the -runas option to make QEMU drop root privileges.

Enabling KVM

KVM must be supported by your processor and kernel, and necessary kernel modules must be loaded. See KVM for more information.

To start QEMU in KVM mode, append -enable-kvm to the additional start options. To check if KVM is enabled for a running VM, enter the QEMU Monitor using Ctrl+Alt+Shift+2, and type info kvm.

Note:
  • If you start your VM with a GUI tool and experience very bad performance, you should check for proper KVM support, as QEMU may be falling back to software emulation.
  • KVM needs to be enabled in order to start Windows 7 and Windows 8 properly without a blue screen.

Enabling IOMMU (Intel VT-d/AMD-Vi) support

Using IOMMU opens up features like PCI passthrough and memory protection against faulty or malicious devices; see wikipedia:Input-output memory management unit#Advantages and Memory Management (computer programming): Could you explain IOMMU in plain English?.

To enable IOMMU:

  1. Ensure that AMD-Vi/Intel VT-d is supported by the CPU and is enabled in the BIOS settings.
  2. Add intel_iommu=on if you have an Intel CPU or amd_iommu=on if you have an AMD CPU, to the kernel parameters.
  3. Reboot and ensure IOMMU is enabled by checking dmesg for DMAR: [0.000000] DMAR: IOMMU enabled
  4. Add iommu=on (or q35,iommu=on, depending on the -machine type used) as a QEMU option.

Moving data between host and guest OS

Network

Data can be shared between the host and guest OS using any network protocol that can transfer files, such as NFS, SMB, NBD, HTTP, FTP, or SSH, provided that you have set up the network appropriately and enabled the appropriate services.

The default user-mode networking allows the guest to access the host OS at the IP address 10.0.2.2. Any servers that you are running on your host OS, such as a SSH server or SMB server, will be accessible at this IP address. So on the guests, you can mount directories exported on the host via SMB or NFS, or you can access the host's HTTP server, etc. It will not be possible for the host OS to access servers running on the guest OS, but this can be done with other network configurations (see #Tap networking with QEMU).

QEMU's built-in SMB server

QEMU's documentation says it has a "built-in" SMB server, but actually it just starts up Samba with an automatically generated smb.conf file located at /tmp/qemu-smb.pid-0/smb.conf and makes it accessible to the guest at a different IP address (10.0.2.4 by default). This only works for user networking, and this is not necessarily very useful since the guest can also access the normal Samba service on the host if you have set up shares on it.

To enable this feature, start QEMU with a command like:

$ qemu-system-i386 disk_image -net nic -net user,smb=shared_dir_path

where shared_dir_path is a directory that you want to share between the guest and host.

Then, in the guest, you will be able to access the shared directory on the host 10.0.2.4 with the share name "qemu". For example, in Windows Explorer you would go to \\10.0.2.4\qemu.

Note:
  • If you are using sharing options multiple times like -net user,smb=shared_dir_path1 -net user,smb=shared_dir_path2 or -net user,smb=shared_dir_path1,smb=shared_dir_path2 then it will share only the last defined one.
  • If you cannot access the shared folder and the guest system is Windows, check that the NetBIOS protocol is enabled and that a firewall does not block ports used by the NetBIOS protocol.

Mounting a partition inside a raw disk image

When the virtual machine is not running, it is possible to mount partitions that are inside a raw disk image file by setting them up as loopback devices. This does not work with disk images in special formats, such as qcow2, although those can be mounted using qemu-nbd.

Warning: You must make sure to unmount the partitions before running the virtual machine again. Otherwise, data corruption is very likely to occur.

With manually specifying byte offset

One way to mount a disk image partition is to mount the disk image at a certain offset using a command like the following:

# mount -o loop,offset=32256 disk_image mountpoint

The offset=32256 option is actually passed to the losetup program to set up a loopback device that starts at byte offset 32256 of the file and continues to the end. This loopback device is then mounted. You may also use the sizelimit option to specify the exact size of the partition, but this is usually unnecessary.

Depending on your disk image, the needed partition may not start at offset 32256. Run fdisk -l disk_image to see the partitions in the image. fdisk gives the start and end offsets in 512-byte sectors, so multiply by 512 to get the correct offset to pass to mount.
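For example, if fdisk -l reports a partition starting at sector 63 (a common layout for older MBR disks, and the source of the 32256 value above), the byte offset is computed as:

```shell
# Compute the mount offset from the start sector reported by fdisk -l.
# A start sector of 63 is illustrative; substitute the value for your image.
start_sector=63
sector_size=512
offset=$((start_sector * sector_size))
echo "$offset"   # 32256, the value passed as offset= above
```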

With loop module autodetecting partitions

The Linux loop driver supports partitions in loopback devices, but this feature is disabled by default. To enable it, do the following:

  • Get rid of all your loopback devices (unmount all mounted images, etc.).
  • Unload the loop kernel module, and load it with the max_part=15 parameter set. Additionally, the maximum number of loop devices can be controlled with the max_loop parameter.
Tip: You can put an entry in /etc/modprobe.d to load the loop module with max_part=15 every time, or you can put loop.max_part=15 on the kernel command-line, depending on whether you have the loop.ko module built into your kernel or not.

Set up your image as a loopback device:

# losetup -f -P disk_image

Then, if the device created was /dev/loop0, additional devices /dev/loop0pX will have been automatically created, where X is the number of the partition. These partition loopback devices can be mounted directly. For example:

# mount /dev/loop0p1 mountpoint

With kpartx

kpartx from the multipath-tools package (available in the AUR) can read a partition table on a device and create a new device for each partition. For example:

# kpartx -a disk_image

This will set up the loopback device and create the necessary partition device(s) in /dev/mapper/.

Mounting a partition inside a qcow2 image

You may mount a partition inside a qcow2 image using qemu-nbd. See Wikibooks.
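As a sketch of the qemu-nbd approach (device and image names are illustrative), assuming the nbd kernel module is available:

```shell
# Export the qcow2 image as a network block device, then mount a partition.
modprobe nbd max_part=8                 # allow partition devices like /dev/nbd0p1
qemu-nbd --connect=/dev/nbd0 image.qcow2
mount /dev/nbd0p1 mountpoint

# When finished, unmount and disconnect before starting the virtual machine:
umount mountpoint
qemu-nbd --disconnect /dev/nbd0
```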

Using any real partition as the single primary partition of a hard disk image

Sometimes, you may wish to use one of your system partitions from within QEMU. Using a raw partition for a virtual machine will improve performance, as the read and write operations do not go through the file system layer on the physical host. Such a partition also provides a way to share data between the host and guest.

In Arch Linux, device files for raw partitions are, by default, owned by root and the disk group. If you would like to have a non-root user be able to read and write to a raw partition, you need to change the owner of the partition's device file to that user.

Warning:
  • Although it is possible, it is not recommended to allow virtual machines to alter critical data on the host system, such as the root partition.
  • You must not mount a file system on a partition read-write on both the host and the guest at the same time. Otherwise, data corruption will result.

After doing so, you can attach the partition to a QEMU virtual machine as a virtual disk.

However, things are a little more complicated if you want to have the entire virtual machine contained in a partition. In that case, there would be no disk image file to actually boot the virtual machine, since you cannot install a bootloader to a partition that is itself formatted as a file system and not as a partitioned device with an MBR. Such a virtual machine can be booted either by specifying the kernel and initrd manually, or by simulating a disk with an MBR using linear RAID.

By specifying kernel and initrd manually

QEMU supports loading Linux kernels and init ramdisks directly, thereby circumventing bootloaders such as GRUB. It then can be launched with the physical partition containing the root file system as the virtual disk, which will not appear to be partitioned. This is done by issuing a command similar to the following:

Note: In this example, it is the host's images that are being used, not the guest's. If you wish to use the guest's images, either mount /dev/sda3 read-only (to protect the file system from the host) and specify the /full/path/to/images, or use some kexec hackery in the guest to reload the guest's kernel (which extends boot time).
$ qemu-system-i386 -kernel /boot/vmlinuz-linux -initrd /boot/initramfs-linux.img -append root=/dev/sda /dev/sda3

In the above example, the physical partition being used for the guest's root file system is /dev/sda3 on the host, but it shows up as /dev/sda on the guest.

You may, of course, specify any kernel and initrd that you want, and not just the ones that come with Arch Linux.

When there are multiple kernel parameters to be passed to the -append option, they need to be quoted using single or double quotes. For example:

... -append 'root=/dev/sda1 console=ttyS0'

Simulate virtual disk with MBR using linear RAID

A more complicated way to have a virtual machine use a physical partition, while keeping that partition formatted as a file system rather than letting the guest repartition it as if it were a disk, is to simulate an MBR for it so that it can boot using a bootloader such as GRUB.

You can do this using software RAID in linear mode (you need the linear.ko kernel driver) and a loopback device: the trick is to dynamically prepend a master boot record (MBR) to the real partition you wish to embed in a QEMU raw disk image.

Suppose you have a plain, unmounted /dev/hdaN partition with some file system on it you wish to make part of a QEMU disk image. First, you create some small file to hold the MBR:

$ dd if=/dev/zero of=/path/to/mbr count=32

Here, a 16 KiB (32 * 512 bytes) file is created. It is important not to make it too small (even if the MBR only needs a single 512-byte block), since the smaller it is, the smaller the chunk size of the software RAID device will have to be, which could have an impact on performance. Then, you set up a loopback device to the MBR file:

# losetup -f /path/to/mbr

Let us assume the resulting device is /dev/loop0 (i.e. no other loopback devices were already in use). The next step is to create the "merged" MBR + /dev/hdaN disk image using software RAID:

# modprobe linear
# mdadm --build --verbose /dev/md0 --chunk=16 --level=linear --raid-devices=2 /dev/loop0 /dev/hdaN

The resulting /dev/md0 is what you will use as a QEMU raw disk image (do not forget to set the permissions so that the emulator can access it). The last (and somewhat tricky) step is to set the disk configuration (disk geometry and partition table) so that the primary partition start point in the MBR matches the one of /dev/hdaN inside /dev/md0 (an offset of exactly 16 * 512 = 16384 bytes in this example). Do this using fdisk on the host machine, not in the emulator: QEMU's default raw disk detection routine often results in offsets that are not rounded to kilobytes (such as 31.5 KB, as in the previous section) and cannot be managed by the software RAID code. Hence, from the host:

# fdisk /dev/md0

Press X to enter the expert menu. Set the number of 's'ectors per track so that the size of one cylinder matches the size of your MBR file. For two heads and a sector size of 512, the number of sectors per track should be 16, so we get cylinders of size 2 * 16 * 512 = 16 KiB.

Now, press R to return to the main menu.

Press P and check that the cylinder size is now 16k.

Now, create a single primary partition corresponding to /dev/hdaN. It should start at cylinder 2 and end at the end of the disk (note that the number of cylinders now differs from what it was when you entered fdisk).

Finally, 'w'rite the result to the file: you are done. You now have a partition you can mount directly from your host, as well as part of a QEMU disk image:

$ qemu-system-i386 -hdc /dev/md0 [...]

You can, of course, safely set any bootloader on this disk image using QEMU, provided the original /dev/hdaN partition contains the necessary tools.
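The geometry arithmetic used above can be sanity-checked with plain shell arithmetic; the values follow the example of a 32-sector MBR file and two heads:

```shell
# Verify that the chosen geometry makes one cylinder equal the MBR file size.
mbr_sectors=32          # size of the MBR file created with dd
heads=2
sector_size=512
sectors_per_track=$((mbr_sectors / heads))
cylinder_bytes=$((heads * sectors_per_track * sector_size))
echo "$sectors_per_track $cylinder_bytes"   # 16 16384 (16 KiB, as above)
```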

Networking

This article or section needs language, wiki syntax or style improvements.

Reason: Network topologies (sections #Host-only networking, #Internal networking and info spread out across other sections) should not be described alongside the various virtual interfaces implementations, such as #User-mode networking, #Tap networking with QEMU, #Networking with VDE2. (Discuss in Talk:QEMU#)

The performance of virtual networking should be better with tap devices and bridges than with user-mode networking or vde because tap devices and bridges are implemented in-kernel.

In addition, networking performance can be improved by assigning virtual machines a virtio network device rather than the default emulation of an e1000 NIC. See #Installing virtio drivers for more information.

Link-level address caveat

By giving the -net nic argument to QEMU, it will, by default, assign a virtual machine a network interface with the link-level address 52:54:00:12:34:56. However, when using bridged networking with multiple virtual machines, it is essential that each virtual machine has a unique link-level (MAC) address on the virtual machine side of the tap device. Otherwise, the bridge will not work correctly, because it will receive packets from multiple sources that have the same link-level address. This problem occurs even if the tap devices themselves have unique link-level addresses because the source link-level address is not rewritten as packets pass through the tap device.

Make sure that each virtual machine has a unique link-level address, but it should always start with 52:54:. Use the following option, replacing each X with an arbitrary hexadecimal digit:

$ qemu-system-i386 -net nic,macaddr=52:54:XX:XX:XX:XX -net vde disk_image

Generating unique link-level addresses can be done in several ways:

  1. Manually specify a unique link-level address for each NIC. The benefit is that the DHCP server will assign the same IP address each time the virtual machine is run, but this is impractical for a large number of virtual machines.
  2. Generate a random link-level address each time the virtual machine is run. This has a practically zero probability of collisions, but the downside is that the DHCP server will assign a different IP address each time. You can use the following command in a script to generate a random link-level address in a macaddr variable:
    printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))
    qemu-system-i386 -net nic,macaddr="$macaddr" -net vde disk_image
  3. Use the following script qemu-mac-hasher.py to generate the link-level address from the virtual machine name using a hashing function. Given that the names of virtual machines are unique, this method combines the benefits of the aforementioned methods: it generates the same link-level address each time the script is run, yet it preserves the practically zero probability of collisions.
    qemu-mac-hasher.py
    #!/usr/bin/env python
    
    import sys
    import zlib
    
    if len(sys.argv) != 2:
        print("usage: %s <VM Name>" % sys.argv[0])
        sys.exit(1)
    
    crc = zlib.crc32(sys.argv[1].encode("utf-8")) & 0xffffffff
    crc = "%08x" % crc
    print("52:54:%s%s:%s%s:%s%s:%s%s" % tuple(crc))
    

    In a script, you can use for example:

    vm_name="VM Name"
    qemu-system-i386 -name "$vm_name" -net nic,macaddr=$(qemu-mac-hasher.py "$vm_name") -net vde disk_image
    

User-mode networking

By default, without any -net or -netdev arguments, QEMU will use user-mode networking with a built-in DHCP server. Your virtual machines will be assigned an IP address when they run their DHCP client, and they will be able to access the physical host's network through IP masquerading done by QEMU.

Warning: This only works with the TCP and UDP protocols, so ICMP, including ping, will not work. Do not use ping to test network connectivity.

This default configuration allows your virtual machines to easily access the Internet, provided that the host is connected to it, but the virtual machines will not be directly visible on the external network, nor will virtual machines be able to talk to each other if you start up more than one concurrently.

QEMU's user-mode networking can offer more capabilities such as built-in TFTP or SMB servers, redirecting host ports to the guest (for example to allow SSH connections to the guest) or attaching guests to VLANs so that they can talk to each other. See the QEMU documentation on the -net user flag for more details.

However, user-mode networking has limitations in both utility and performance. More advanced network configurations require the use of tap devices or other methods.
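For instance, the port redirection mentioned above can be sketched with the hostfwd option; the host port 2222 here is an arbitrary, illustrative choice:

```shell
# Forward TCP port 2222 on the host to port 22 (SSH) in the guest.
qemu-system-i386 disk_image -net nic -net user,hostfwd=tcp::2222-:22

# Then, from the host:
#   ssh -p 2222 user@localhost
```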

Tap networking with QEMU

Tap devices are a Linux kernel feature that allows you to create virtual network interfaces that appear as real network interfaces. Packets sent to a tap interface are delivered to a userspace program, such as QEMU, that has bound itself to the interface.

QEMU can use tap networking for a virtual machine so that packets sent to the tap interface will be sent to the virtual machine and appear as coming from a network interface (usually an Ethernet interface) in the virtual machine. Conversely, everything that the virtual machine sends through its network interface will appear on the tap interface.

Tap devices are supported by the Linux bridge drivers, so it is possible to bridge together tap devices with each other and possibly with other host interfaces such as eth0. This is desirable if you want your virtual machines to be able to talk to each other, or if you want other machines on your LAN to be able to talk to the virtual machines.

Warning: If you bridge together a tap device and some host interface, such as eth0, your virtual machines will appear directly on the external network, which will expose them to possible attack. Depending on what resources your virtual machines have access to, you may need to take all the precautions you normally would take in securing a computer to secure your virtual machines. If the risk is too great, if the virtual machines have access to few resources, or if you set up multiple virtual machines, a better solution might be to use host-only networking and set up NAT. In this case you only need one firewall on the host instead of one firewall per guest.

As indicated in the user-mode networking section, tap devices offer higher networking performance than user mode. If the guest OS supports the virtio network driver, the networking performance will be increased considerably as well. Supposing the use of the tap0 device, that the virtio driver is used on the guest, and that no scripts are used to help start/stop networking, the relevant part of the QEMU command is:

-net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no

If you are already using a tap device with the virtio networking driver, you can boost the networking performance further by enabling vhost, for example:

-net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no,vhost=on

See http://www.linux-kvm.com/content/how-maximize-virtio-net-performance-vhost-net for more information.

Host-only networking

If the bridge is given an IP address and traffic destined for it is allowed, but no real interface (e.g. eth0) is connected to the bridge, then the virtual machines will be able to talk to each other and to the host system. However, they will not be able to talk to anything on the external network unless you set up IP masquerading on the physical host. This configuration is called host-only networking by other virtualization software such as VirtualBox.

Tip:
  • If you want to set up IP masquerading, e.g. NAT for virtual machines, see the Internet sharing#Enable NAT page.
  • You may want to have a DHCP server running on the bridge interface to service the virtual network. For example, to use the 172.20.0.1/16 subnet with dnsmasq as the DHCP server:
# ip addr add 172.20.0.1/16 dev br0
# ip link set br0 up
# dnsmasq --interface=br0 --bind-interfaces --dhcp-range=172.20.0.2,172.20.255.254

Internal networking

If you do not give the bridge an IP address and add an iptables rule to drop all traffic to the bridge in the INPUT chain, then the virtual machines will be able to talk to each other, but not to the physical host or to the outside network. This configuration is called internal networking by other virtualization software such as VirtualBox. You will need to either assign static IP addresses to the virtual machines or run a DHCP server on one of them.

By default, iptables drops packets forwarded across the bridge. You may need to use the following iptables rule to allow packets in a bridged network:

# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT

Bridged networking using qemu-bridge-helper

This article or section is out of date.

Reason: The /etc files are missing as of April 2016, see FS#46791. (Discuss in Talk:QEMU#Qemu-bridge-helper broken QENU 2.5.0-1)
Note: This method is available since QEMU 1.1, see http://wiki.qemu.org/Features/HelperNetworking.

This method does not require a start-up script and readily accommodates multiple taps and multiple bridges. It uses the /usr/lib/qemu/qemu-bridge-helper binary, which allows creating tap devices on an existing bridge.

Tip: See Network bridge for information on creating a bridge.

First, copy /etc/qemu/bridge.conf.sample to /etc/qemu/bridge.conf. Now modify /etc/qemu/bridge.conf to contain the names of all bridges to be used by QEMU:

/etc/qemu/bridge.conf
allow bridge0
allow bridge1
...

Now start the VM. The most basic usage would be:

$ qemu-system-i386 -net nic -net bridge,br=bridge0 [...]

With multiple taps, the most basic usage requires specifying the VLAN for all additional NICs:

$ qemu-system-i386 -net nic -net bridge,br=bridge0 -net nic,vlan=1 -net bridge,vlan=1,br=bridge1 [...]

Creating bridge manually

This article or section needs language, wiki syntax or style improvements.

Reason: This section needs serious cleanup and may contain out-of-date information. (Discuss in Talk:QEMU#)
Tip: Since QEMU 1.1, the network bridge helper can set tun/tap up for you without the need for additional scripting. See #Bridged networking using qemu-bridge-helper.

The following describes how to bridge a virtual machine to a host interface such as eth0, which is probably the most common configuration. This configuration makes it appear that the virtual machine is located directly on the external network, on the same Ethernet segment as the physical host machine.

We will replace the normal Ethernet adapter with a bridge adapter and bind the normal Ethernet adapter to it.

  • Install bridge-utils, which provides brctl to manipulate bridges.
  • Enable IPv4 forwarding:
# sysctl net.ipv4.ip_forward=1

To make the change permanent, change net.ipv4.ip_forward = 0 to net.ipv4.ip_forward = 1 in /etc/sysctl.d/99-sysctl.conf.

  • Load the tun module and configure it to be loaded on boot. See Kernel modules for details.
  • Now create the bridge. See Bridge with netctl for details. Remember to name your bridge as br0, or change the scripts below to your bridge's name.
  • Create the script that QEMU uses to bring up the tap adapter with root:kvm 750 permissions:
/etc/qemu-ifup
#!/bin/sh

echo "Executing /etc/qemu-ifup"
echo "Bringing up $1 for bridged mode..."
sudo /usr/bin/ip link set $1 up promisc on
echo "Adding $1 to br0..."
sudo /usr/bin/brctl addif br0 $1
sleep 2
  • Create the script that QEMU uses to bring down the tap adapter in /etc/qemu-ifdown with root:kvm 750 permissions:
/etc/qemu-ifdown
#!/bin/sh

echo "Executing /etc/qemu-ifdown"
sudo /usr/bin/ip link set $1 down
sudo /usr/bin/brctl delif br0 $1
sudo /usr/bin/ip link delete dev $1
  • Use visudo to add the following to your sudoers file:
Cmnd_Alias      QEMU=/usr/bin/ip,/usr/bin/modprobe,/usr/bin/brctl
%kvm     ALL=NOPASSWD: QEMU
  • You launch QEMU using the following run-qemu script:
run-qemu
#!/bin/bash
USERID=$(whoami)

# Get name of newly created TAP device; see https://bbs.archlinux.org/viewtopic.php?pid=1285079#p1285079
precreation=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)
sudo /usr/bin/ip tuntap add user $USERID mode tap
postcreation=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)
IFACE=$(comm -13 <(echo "$precreation") <(echo "$postcreation"))

# This line creates a random MAC address. The downside is the DHCP server will assign a different IP address each time
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))
# Instead, uncomment and edit this line to set a static MAC address. The benefit is that the DHCP server will assign the same IP address.
# macaddr='52:54:be:36:42:a9'

qemu-system-i386 -net nic,macaddr="$macaddr" -net tap,ifname="$IFACE" "$@"

sudo ip link set dev $IFACE down &> /dev/null
sudo ip tuntap del $IFACE mode tap &> /dev/null

Then, to launch a VM, do something like this:

$ run-qemu -hda myvm.img -m 512 -vga std

To stop the host firewall from filtering frames passing over the bridge, disable the netfilter hooks on bridges:

/etc/sysctl.d/10-disable-firewall-on-bridge.conf
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

Run sysctl -p /etc/sysctl.d/10-disable-firewall-on-bridge.conf to apply the changes immediately.

See the libvirt wiki and Fedora bug 512206. If you get errors by sysctl during boot about non-existing files, make the bridge module load at boot. See Kernel modules#Loading.

Alternatively, you can configure iptables to allow all traffic to be forwarded across the bridge by adding a rule like this:

-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT

Network sharing between physical device and a Tap device through iptables

This article or section is a candidate for merging with Internet sharing.

Notes: Duplication, not specific to QEMU. (Discuss in Talk:QEMU#)

Bridged networking works fine with a wired interface (e.g. eth0) and is easy to set up. However, if the host is connected to the network through a wireless device, then bridging is not possible.

See Network bridge#Wireless interface on a bridge as a reference.

One way to overcome that is to set up a tap device with a static IP, making Linux automatically handle the routing for it, and then forward traffic between the tap interface and the device connected to the network through iptables rules.

See Internet sharing as a reference.

There you can find what is needed to share the network between devices, including tap and tun ones. The following just hints further at some of the required host configurations. As indicated in the reference above, the client needs to be configured with a static IP, using the IP assigned to the tap interface as the gateway. The caveat is that the DNS servers on the client might need to be manually edited if they change when switching from one host device connected to the network to another.

To allow IP forwarding on every boot, one needs to add the following lines to a sysctl configuration file inside /etc/sysctl.d:

net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1

The iptables rules can look like:

# Forwarding from/to outside
iptables -A FORWARD -i ${INT} -o ${EXT_0} -j ACCEPT
iptables -A FORWARD -i ${INT} -o ${EXT_1} -j ACCEPT
iptables -A FORWARD -i ${INT} -o ${EXT_2} -j ACCEPT
iptables -A FORWARD -i ${EXT_0} -o ${INT} -j ACCEPT
iptables -A FORWARD -i ${EXT_1} -o ${INT} -j ACCEPT
iptables -A FORWARD -i ${EXT_2} -o ${INT} -j ACCEPT
# NAT/Masquerade (network address translation)
iptables -t nat -A POSTROUTING -o ${EXT_0} -j MASQUERADE
iptables -t nat -A POSTROUTING -o ${EXT_1} -j MASQUERADE
iptables -t nat -A POSTROUTING -o ${EXT_2} -j MASQUERADE

The above supposes there are 3 devices connected to the network sharing traffic with one internal device, where for example:

INT=tap0
EXT_0=eth0
EXT_1=wlan0
EXT_2=tun0

The above shows a forwarding setup that allows sharing wired and wireless connections with the tap device.

The forwarding rules shown are stateless and intended for pure forwarding. One could restrict specific traffic, putting a firewall in place to protect the guest and others; however, that would decrease networking performance, while a simple bridge adds none of that overhead.

Bonus: whether the connection is wired or wireless, if one connects through a VPN to a remote site with a tun device, supposing the tun device opened for that connection is tun0, and the prior iptables rules are applied, then the remote connection is also shared with the guest. This avoids the need for the guest to open its own VPN connection. Again, as the guest networking needs to be static, if connecting the host remotely this way, one will most probably need to edit the DNS servers on the guest.

Networking with VDE2

This article or section needs language, wiki syntax or style improvements.

Reason: This section needs serious cleanup and may contain out-of-date information. (Discuss in Talk:QEMU#)

What is VDE?

VDE stands for Virtual Distributed Ethernet. It started as an enhancement of uml_switch. It is a toolbox to manage virtual networks.

The idea is to create virtual switches, which are basically sockets, and to "plug" both physical and virtual machines into them. The configuration we show here is quite simple; however, VDE is much more powerful than this: it can plug virtual switches together, run them on different hosts and monitor the traffic in the switches. You are invited to read the documentation of the project.

The advantage of this method is you do not have to add sudo privileges to your users. Regular users should not be allowed to run modprobe.

Basics

VDE support can be installed via the vde2 package in the official repositories.

In our configuration, we use tun/tap to create a virtual interface on the host. Load the tun module (see Kernel modules for details):

# modprobe tun

Now create the virtual switch:

# vde_switch -tap tap0 -daemon -mod 660 -group users

This line creates the switch, creates tap0, "plugs" it, and allows the users of the group users to use it.

The interface is plugged in but not configured yet. To configure it, run this command:

# ip addr add 192.168.100.254/24 dev tap0

Now, you just have to run KVM with these -net options as a normal user:

$ qemu-system-i386 -net nic -net vde -hda [...]

Configure networking for your guest as you would do in a physical network.

Tip: You might want to set up NAT on tap device to access the internet from the virtual machine. See Internet sharing#Enable NAT for more information.

Startup scripts

Example of main script starting VDE:

/etc/systemd/scripts/qemu-network-env
#!/bin/sh
# QEMU/VDE network environment preparation script

# The IP configuration for the tap device that will be used for
# the virtual machine network:

TAP_DEV=tap0
TAP_IP=192.168.100.254
TAP_MASK=24
TAP_NETWORK=192.168.100.0

# Host interface
NIC=eth0

case "$1" in
  start)
        echo -n "Starting VDE network for QEMU: "

        # If you want tun kernel module to be loaded by script uncomment here
	#modprobe tun 2>/dev/null
	## Wait for the module to be loaded
 	#while ! lsmod | grep -q "^tun"; do echo "Waiting for tun device"; sleep 1; done

        # Start tap switch
        vde_switch -tap "$TAP_DEV" -daemon -mod 660 -group users

        # Bring tap interface up
        ip address add "$TAP_IP"/"$TAP_MASK" dev "$TAP_DEV"
        ip link set "$TAP_DEV" up

        # Start IP Forwarding
        echo "1" > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE
        ;;
  stop)
        echo -n "Stopping VDE network for QEMU: "
        # Delete the NAT rules
        iptables -t nat -D POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE

        # Bring tap interface down
        ip link set "$TAP_DEV" down

        # Kill VDE switch
        pgrep -f vde_switch | xargs kill -TERM
        ;;
  restart|reload)
        $0 stop
        sleep 1
        $0 start
        ;;
  *)
        echo "Usage: $0 {start|stop|restart|reload}"
        exit 1
esac
exit 0

Example of systemd service using the above script:

/etc/systemd/system/qemu-network-env.service
[Unit]
Description=Manage VDE Switch

[Service]
Type=oneshot
ExecStart=/etc/systemd/scripts/qemu-network-env start
ExecStop=/etc/systemd/scripts/qemu-network-env stop
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

Change permissions for qemu-network-env to make it executable:

# chmod u+x /etc/systemd/scripts/qemu-network-env

You can start qemu-network-env.service as usual.

Alternative method

If the above method does not work, or you do not want to mess with kernel configs, TUN, dnsmasq, and iptables, you can do the following for the same result.

# vde_switch -daemon -mod 660 -group users
# slirpvde --dhcp --daemon

Then, to start the VM with a connection to the network of the host:

$ qemu-system-i386 -net nic,macaddr=52:54:00:00:EE:03 -net vde disk_image

VDE2 Bridge

Based on the quickhowto: qemu networking using vde, tun/tap, and bridge guide. Any virtual machine connected to vde is externally exposed. For example, each virtual machine can receive DHCP configuration directly from your ADSL router.

Basics

Remember that you need the tun module and the bridge-utils package.

Create the vde2/tap device:

# vde_switch -tap tap0 -daemon -mod 660 -group users
# ip link set tap0 up

Create bridge:

# brctl addbr br0

Add devices:

# brctl addif br0 eth0
# brctl addif br0 tap0

And configure bridge interface:

# dhcpcd br0

Startup scripts

All devices must be set up, and only the bridge needs an IP address. For physical devices on the bridge (e.g. eth0), this can be done with netctl using a custom Ethernet profile with:

/etc/netctl/ethernet-noip
Description='A more versatile static Ethernet connection'
Interface=eth0
Connection=ethernet
IP=no

The following custom systemd service can be used to create and activate a VDE2 tap interface for use in the users user group.

/etc/systemd/system/vde2@.service
[Unit]
Description=Network Connectivity for %i
Wants=network.target
Before=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/vde_switch -tap %i -daemon -mod 660 -group users
ExecStart=/usr/bin/ip link set dev %i up
ExecStop=/usr/bin/ip addr flush dev %i
ExecStop=/usr/bin/ip link set dev %i down

[Install]
WantedBy=multi-user.target

And finally, you can create the bridge interface with netctl.

Graphics

QEMU can use the following different graphic outputs: std, qxl, vmware, virtio, cirrus and none.

std

With -vga std you can get a resolution of up to 2560 x 1600 pixels without requiring guest drivers. This is the default since QEMU 2.2.

qxl

QXL is a paravirtual graphics driver with 2D support. To use it, pass the -vga qxl option and install drivers in the guest. You may want to use SPICE for improved graphical performance when using QXL.

On Linux guests, the qxl and bochs_drm kernel modules must be loaded in order to get decent performance.

SPICE

The SPICE project aims to provide a complete open source solution for remote access to virtual machines in a seamless way.

SPICE can only be used when using QXL as the graphical output.

The following is an example of booting with SPICE as the remote desktop protocol:

$ qemu-system-i386 -vga qxl -spice port=5930,disable-ticketing

Connect to the guest by using a SPICE client. At the moment spice-gtk3 is recommended; however, other clients, including ones for other platforms, are available:

$ spicy -h 127.0.0.1 -p 5930

For improved support for multiple monitors, clipboard sharing, etc., the spice vdagent should be installed on the guest (spice-vdagentAUR for Arch Linux guests, the Windows guest tools for Windows guests).

vmware

Although it is a bit buggy, it performs better than std and cirrus. Install the VMware drivers (xf86-video-vmware and xf86-input-vmmouse for Arch Linux guests).

virtio

virtio-vga / virtio-gpu is a paravirtual 3D graphics driver based on virgl. Currently a work in progress, supporting only very recent (>= 4.4) Linux guests.

cirrus

The cirrus graphical adapter was the default before 2.2. It should not be used on modern systems.

none

This is like a PC that has no VGA card at all. You would not even be able to access it with the -vnc option. Also, this is different from the -nographic option which lets QEMU emulate a VGA card, but disables the SDL display.

vnc

Given that you used the -nographic option, you can add the -vnc display option to have QEMU listen on display and redirect the VGA display to the VNC session. There is an example of this in the #Starting QEMU virtual machines on boot section's example configs.

$ qemu-system-i386 -vga std -nographic -vnc :0
$ gvncviewer :0

When using VNC, you might experience keyboard problems, described in gory detail here. The solution is not to use the -k option on QEMU, and to use gvncviewer from gtk-vnc. See also this message posted on libvirt's mailing list.

Installing virtio drivers

QEMU offers guests the ability to use paravirtualized block and network devices using the virtio drivers, which provide better performance and lower overhead.

  • A virtio block device requires the -drive option instead of the simple -hd*, together with if=virtio:
$ qemu-system-i386 -boot order=c -drive file=disk_image,if=virtio
Note: -boot order=c is absolutely necessary when you want to boot from it. There is no auto-detection as with -hd*.
  • Almost the same goes for the network:
$ qemu-system-i386 -net nic,model=virtio
Note: This will only work if the guest machine has drivers for virtio devices. Linux does, and the required drivers are included in Arch Linux, but there is no guarantee that virtio devices will work with other operating systems.

Preparing an (Arch) Linux guest

To use virtio devices after an Arch Linux guest has been installed, the following modules must be loaded in the guest: virtio, virtio_pci, virtio_blk, virtio_net, and virtio_ring. For 32-bit guests, the specific "virtio" module is not necessary.

If you want to boot from a virtio disk, the initial ramdisk must contain the necessary modules. By default, this is handled by mkinitcpio's autodetect hook. Otherwise use the MODULES array in /etc/mkinitcpio.conf to include the necessary modules and rebuild the initial ramdisk.

/etc/mkinitcpio.conf
MODULES="virtio virtio_blk virtio_pci virtio_net"

Virtio disks are recognized with the vd prefix (e.g. vda, vdb, etc.); therefore, when booting from a virtio disk, changes must be made in at least /etc/fstab and /boot/grub/grub.cfg.

Tip: When disks are referenced by UUID in both /etc/fstab and the bootloader, no changes are needed.
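For an fstab that references device nodes directly, the rename can be sketched as a simple sed substitution. The sample fstab content below is illustrative only; on a real system you would edit /etc/fstab itself (after making a backup):

```shell
#!/bin/sh
# Sketch: rewrite IDE/SATA device names (sdX) to virtio names (vdX)
# in a copy of fstab. A UUID-based fstab would need no such change.
cat > /tmp/fstab.sample <<'EOF'
/dev/sda1 / ext4 rw,relatime 0 1
/dev/sda2 none swap defaults 0 0
EOF

sed 's|/dev/sd|/dev/vd|g' /tmp/fstab.sample
```

The same substitution must be mirrored in the bootloader configuration (e.g. /boot/grub/grub.cfg) if it references the device node.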

Further information on paravirtualization with KVM can be found here.

You might also want to install qemu-guest-agent to implement support for QMP commands that will enhance the hypervisor management capabilities. After installing the package you can enable and start the qemu-ga.service.

Preparing a Windows guest

Note: The only (reliable) way to upgrade a Windows 8.1 guest to Windows 10 seems to be to temporarily set the CPU model to core2duo,nx for the install [2]. After the install, you may revert to other CPU settings (8/8/2015).

Block device drivers

New Install of Windows

Windows does not come with the virtio drivers. Therefore, you will need to load them during installation. There are basically two ways to do this: via Floppy Disk or via ISO files. Both images can be downloaded from the Fedora repository.

The floppy disk option is difficult because you will need to press F6 (Shift-F6 on newer Windows) at the very beginning of the boot. This is tricky, since you also need time to connect to your VNC console window. You can attempt to add a delay to the boot sequence; see man qemu-system for details on applying a boot delay.

The ISO option to load drivers is the preferred way, but it is available only on Windows Vista and Windows Server 2008 and later. The procedure is to load the image with virtio drivers in an additional cdrom device along with the primary disk device and Windows installer:

$ qemu-system-i386 ... \
-drive file=/path/to/primary/disk.img,index=0,media=disk,if=virtio \
-drive file=/path/to/installer.iso,index=2,media=cdrom \
-drive file=/path/to/virtio.iso,index=3,media=cdrom \
...

During the installation, the Windows installer will ask you for your Product key and perform some additional checks. When it gets to the "Where do you want to install Windows?" screen, it will give a warning that no disks are found. Follow the example instructions below (based on Windows Server 2012 R2 with Update).

  • Select the option Load Drivers.
  • Uncheck the box for "Hide drivers that aren't compatible with this computer's hardware".
  • Click the Browse button and open the CDROM for the virtio iso, usually named "virtio-win-XX".
  • Now browse to E:\viostor\[your-os]\amd64, select it, and press OK.
  • Click Next

You should now see your virtio disk(s) listed here, ready to be selected, formatted and installed to.

Change Existing Windows VM to use virtio

Modifying an existing Windows guest for booting from virtio disk is a bit tricky.

You can download the virtio disk driver from the Fedora repository.

Now you need to create a new disk image, which will force Windows to search for the driver. For example:

$ qemu-img create -f qcow2 fake.qcow2 1G

Run the original Windows guest (with the boot disk still in IDE mode) with the fake disk (in virtio mode) and a CD-ROM with the driver.

$ qemu-system-i386 -m 512 -vga std -drive file=windows_disk_image,if=ide -drive file=fake.qcow2,if=virtio -cdrom virtio-win-0.1-81.iso

Windows will detect the fake disk and try to find a driver for it. If it fails, go to the Device Manager, locate the SCSI drive with an exclamation mark icon (its category should already be expanded), click Update driver and select the virtual CD-ROM. Do not forget to select the checkbox that tells Windows to search directories recursively.

When the installation is successful, you can turn off the virtual machine and launch it again, now with the boot disk attached in virtio mode:

$ qemu-system-i386 -m 512 -vga std -drive file=windows_disk_image,if=virtio
Note: If you encounter the Blue Screen of Death, make sure you did not forget the -m parameter, and that you do not boot with virtio instead of ide for the system drive before drivers are installed.

Network drivers

Installing virtio network drivers is a bit easier, simply add the -net argument as explained above.

$ qemu-system-i386 -m 512 -vga std -drive file=windows_disk_image,if=virtio -net nic,model=virtio -cdrom virtio-win-0.1-74.iso

Windows will detect the network adapter and try to find a driver for it. If it fails, go to the Device Manager, locate the network adapter with an exclamation mark icon (its category should already be expanded), click Update driver and select the virtual CD-ROM. Do not forget to select the checkbox that tells Windows to search directories recursively.

Preparing a FreeBSD guest

If you are using FreeBSD 8.3 or later, install the emulators/virtio-kmod port; starting with 10.0-CURRENT the drivers are included in the kernel. After installation, add the following to your /boot/loader.conf file:

virtio_load="YES"
virtio_pci_load="YES"
virtio_blk_load="YES"
if_vtnet_load="YES"
virtio_balloon_load="YES"

Then modify your /etc/fstab by doing the following:

sed -i .bak "s/ada/vtbd/g" /etc/fstab

And verify that /etc/fstab is consistent. If anything goes wrong, just boot into a rescue CD and copy /etc/fstab.bak back to /etc/fstab.
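Before rewriting the real /etc/fstab, the ada to vtbd rename can be previewed on a sample line; the content below is illustrative only:

```shell
#!/bin/sh
# Dry-run sketch: preview the ada -> vtbd device rename on a
# sample fstab line before touching the real /etc/fstab.
printf '/dev/ada0p2 / ufs rw 1 1\n' > /tmp/fstab.preview
sed 's/ada/vtbd/g' /tmp/fstab.preview
```

If the output looks correct, the same expression can be applied in place with sed -i .bak as shown above.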

Tips and tricks

Starting QEMU virtual machines on boot

With libvirt

If a virtual machine is set up with libvirt, it can be configured through the virt-manager GUI to start at host boot by going to the Boot Options for the virtual machine and selecting "Start virtual machine on host boot up".

Custom script

To run QEMU VMs on boot, you can use following systemd unit and config.

/etc/systemd/system/qemu@.service
[Unit]
Description=QEMU virtual machine

[Service]
Environment="type=system-x86_64" "haltcmd=kill -INT $MAINPID"
EnvironmentFile=/etc/conf.d/qemu.d/%i
ExecStart=/usr/bin/env qemu-${type} -name %i -nographic $args
ExecStop=/bin/sh -c ${haltcmd}
TimeoutStopSec=30
KillMode=none

[Install]
WantedBy=multi-user.target
Note: According to the systemd.service(5) and systemd.kill(5) man pages, the KillMode=none option is necessary. Otherwise the main qemu process will be killed immediately after the ExecStop command quits (it simply echoes one string) and your guest system will not be able to shut down correctly.

Then create per-VM configuration files, named /etc/conf.d/qemu.d/vm_name, with the following variables set:

type
QEMU binary to call. It will be prefixed with /usr/bin/qemu- and the resulting binary will be used to start the VM. E.g. you can boot qemu-system-arm images with type="system-arm".
args
QEMU command-line arguments to start with. -name ${vm} -nographic will always be prepended to them.
haltcmd
Command to shut down a VM safely. The example configs below use -monitor telnet:.. and power off the VMs via ACPI by sending system_powerdown to the monitor; SSH or other methods work as well.

Example configs:

/etc/conf.d/qemu.d/one
type="system-x86_64"

args="-enable-kvm -m 512 -hda /dev/mapper/vg0-vm1 -net nic,macaddr=DE:AD:BE:EF:E0:00 \
 -net tap,ifname=tap0 -serial telnet:localhost:7000,server,nowait,nodelay \
 -monitor telnet:localhost:7100,server,nowait,nodelay -vnc :0"

haltcmd="echo 'system_powerdown' | nc localhost 7100" # or netcat/ncat

# You can use other ways to shut down your VM correctly
#haltcmd="ssh powermanager@vm1 sudo poweroff"
/etc/conf.d/qemu.d/two
args="-enable-kvm -m 512 -hda /srv/kvm/vm2.img -net nic,macaddr=DE:AD:BE:EF:E0:01 \
 -net tap,ifname=tap1 -serial telnet:localhost:7001,server,nowait,nodelay \
 -monitor telnet:localhost:7101,server,nowait,nodelay -vnc :1"

haltcmd="echo 'system_powerdown' | nc localhost 7101"

To set which virtual machines will start on boot-up, enable the qemu@vm_name.service systemd unit.

Mouse integration

To prevent the mouse from being grabbed when clicking on the guest operating system's window, add the option -usbdevice tablet. This allows QEMU to report the mouse position without having to grab the mouse, and also overrides PS/2 mouse emulation when activated. For example:

$ qemu-system-i386 -hda disk_image -m 512 -vga std -usbdevice tablet

If that doesn't work, try the tip at #Mouse cursor is jittery or erratic.

Pass-through host USB device

To access a physical USB device connected to the host from the VM, you can start QEMU with the following option:

$ qemu-system-i386 -usbdevice host:vendor_id:product_id disk_image

You can find the vendor_id and product_id of your device with the lsusb command.
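The vendor:product pair appears as the sixth field of each lsusb line. As a sketch, it can be extracted as follows; the sample line below is illustrative, since actual lsusb output varies by host:

```shell
#!/bin/sh
# Sketch: extract the vendor:product pair from a line of lsusb
# output. The sample line is illustrative; on a real host use:
#   lsusb | awk '{ print $6 }'
line='Bus 001 Device 002: ID 1d6b:0002 Linux Foundation 2.0 root hub'
echo "$line" | awk '{ print $6 }'
```

The printed value (here 1d6b:0002) is what -usbdevice host:vendor_id:product_id expects.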

Note: If you encounter permission errors when running QEMU, see Udev#Writing udev rules for information on how to set permissions of the device.

Enabling KSM

Kernel Samepage Merging (KSM) is a feature of the Linux kernel that allows for an application to register with the kernel to have its pages merged with other processes that also register to have their pages merged. The KSM mechanism allows for guest virtual machines to share pages with each other. In an environment where many of the guest operating systems are similar, this can result in significant memory savings.

To enable KSM, simply run

# echo 1 > /sys/kernel/mm/ksm/run

To make it permanent, you can use systemd's temporary files:

/etc/tmpfiles.d/ksm.conf
w /sys/kernel/mm/ksm/run - - - - 1

If KSM is running, and there are pages to be merged (i.e. at least two similar VMs are running), then /sys/kernel/mm/ksm/pages_shared should be non-zero. See https://www.kernel.org/doc/Documentation/vm/ksm.txt for more information.

Tip: An easy way to see how well KSM is performing is to simply print the contents of all the files in that directory:
$ grep . /sys/kernel/mm/ksm/*
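The savings can be estimated from pages_sharing, which counts 4 KiB guest pages backed by a single shared page. A minimal sketch, using an illustrative value since /sys/kernel/mm/ksm may be unavailable or zero on the machine at hand:

```shell
#!/bin/sh
# Sketch: estimate memory saved by KSM. pages_sharing counts
# 4 KiB pages that are backed by a shared page. On a live host:
#   pages_sharing=$(cat /sys/kernel/mm/ksm/pages_sharing)
pages_sharing=25600                # illustrative value
echo "$(( pages_sharing * 4 / 1024 )) MiB saved"
```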

Multi-monitor support

The Linux QXL driver supports four heads (virtual screens) by default. This can be changed via the qxl.heads=N kernel parameter.

The default VGA memory size for QXL devices is 16M (VRAM size is 64M). This is not sufficient if you would like to enable two 1920x1200 monitors since that requires 2 × 1920 × 4 (color depth) × 1200 = 17.6 MiB VGA memory. This can be changed by replacing -vga qxl by -vga none -device qxl-vga,vgamem_mb=32. If you ever increase vgamem_mb beyond 64M, then you also have to increase the vram_size_mb option.
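The 17.6 MiB figure above follows directly from the framebuffer arithmetic, and can be reproduced with shell integer math (which rounds down):

```shell
#!/bin/sh
# Reproduce the VGA memory calculation from the text:
# two 1920x1200 monitors at 4 bytes per pixel (32-bit color).
bytes=$(( 2 * 1920 * 1200 * 4 ))
echo "$bytes bytes = $(( bytes / 1024 / 1024 )) MiB (rounded down)"
```

Since this exceeds the 16M default, vgamem_mb=32 is the next reasonable size.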

Copy and paste

To enable copy and paste between the host and the guest, you need to enable the spice agent communication channel. This requires adding a virtio-serial device to the guest and opening a port for the spice vdagent. It is also necessary to install the spice vdagent in the guest (spice-vdagentAUR for Arch guests, Windows guest tools for Windows guests). Make sure the agent is running (and, for the future, started automatically).

Start QEMU with the following options:

$ qemu-system-i386 -vga qxl -spice port=5930,disable-ticketing -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent

The -device virtio-serial-pci option adds the virtio-serial device, -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 opens a port for spice vdagent in that device and -chardev spicevmc,id=spicechannel0,name=vdagent adds a spicevmc chardev for that port.

It is important that the chardev= option of the virtserialport device matches the id= option given to the chardev option (spicechannel0 in this example). It is also important that the port name is com.redhat.spice.0, because that is the namespace vdagent looks for in the guest. And finally, specify name=vdagent so that spice knows what this channel is for.

Windows-specific notes

QEMU can run any version of Windows from Windows 95 through Windows 10.

It is possible to run Windows PE in QEMU.

Fast startup

For Windows 8 (or later) guests it is better to disable "Fast Startup" from the Power Options of the Control Panel, as it causes the guest to hang during every other boot.

Fast Startup may also need to be disabled for changes to the -smp option to be properly applied.

Remote Desktop Protocol

If you use an MS Windows guest, you might want to use RDP to connect to it. If you are using a VLAN or are not in the same network as the guest, use:

$ qemu-system-i386 -nographic -net user,hostfwd=tcp::5555-:3389

Then connect with either rdesktop or freerdp to the guest. For example:

$ xfreerdp -g 2048x1152 localhost:5555 -z -x lan

Troubleshooting

Mouse cursor is jittery or erratic

If the cursor jumps around the screen uncontrollably, entering this on the terminal before starting QEMU might help:

$ export SDL_VIDEO_X11_DGAMOUSE=0

If this helps, you can add this to your ~/.bashrc file.

No visible Cursor

Add -show-cursor to QEMU's options to see a mouse cursor.

Keyboard seems broken or the arrow keys do not work

Should you find that some of your keys do not work or "press" the wrong key (in particular, the arrow keys), you likely need to specify your keyboard layout as an option. The keyboard layouts can be found in /usr/share/qemu/keymaps.

$ qemu-system-i386 -k keymap disk_image

Virtual machine runs too slowly

There are a number of techniques that you can use to improve the performance of your virtual machine. For example:

  • Use the -cpu host option to make QEMU emulate the host's exact CPU. If you do not do this, it may be trying to emulate a more generic CPU.
  • If the host machine has multiple CPUs, assign the guest more CPUs using the -smp option.
  • Make sure you have assigned the virtual machine enough memory. By default, QEMU only assigns 128 MiB of memory to each virtual machine. Use the -m option to assign more memory. For example, -m 1024 runs a virtual machine with 1024 MiB of memory.
  • Use KVM if possible: add -machine type=pc,accel=kvm to the QEMU start command you use.
  • If supported by drivers in the guest operating system, use virtio for network and/or block devices. For example:
$ qemu-system-i386 -net nic,model=virtio -net tap,ifname=tap0,script=no -drive file=disk_image,media=disk,if=virtio
  • Use TAP devices instead of user-mode networking. See #Tap networking with QEMU.
  • If the guest OS is doing heavy writing to its disk, you may benefit from certain mount options on the host's file system. For example, you can mount an ext4 file system with the option barrier=0. You should read the documentation for any options that you change because sometimes performance-enhancing options for file systems come at the cost of data integrity.
  • If you have a raw disk image, you may want to disable the cache:
$ qemu-system-i386 -drive file=disk_image,if=virtio,cache=none
  • Use the native Linux AIO:
$ qemu-system-i386 -drive file=disk_image,if=virtio,aio=native
  • If you are running multiple virtual machines concurrently that all have the same operating system installed, you can save memory by enabling kernel same-page merging:
# echo 1 > /sys/kernel/mm/ksm/run
  • In some cases, memory can be reclaimed from running virtual machines by running a memory ballooning driver in the guest operating system and launching QEMU with the -balloon virtio option.

See http://www.linux-kvm.org/page/Tuning_KVM for more information.
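Several of the tuning options above can be combined in a single invocation. The sketch below only assembles and prints the command line rather than executing it; the image path, memory size and tap name are placeholders:

```shell
#!/bin/sh
# Sketch: assemble several tuning options into one command line.
# The image path and sizes are placeholders; the command is
# printed, not executed.
cmd="qemu-system-x86_64 -enable-kvm -cpu host -smp 2 -m 1024"
cmd="$cmd -drive file=disk_image,if=virtio,cache=none,aio=native"
cmd="$cmd -net nic,model=virtio -net tap,ifname=tap0,script=no"
echo "$cmd"
```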

Guest display stretches on window resize

To restore default window size, press Ctrl+Alt+u.

ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy

If an error message like this is printed when starting QEMU with -enable-kvm option:

ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy
failed to initialize KVM: Device or resource busy

this means another hypervisor is currently running. Running several hypervisors in parallel is not recommended, and may be impossible.

libgfapi error message

The error message displayed at startup:

Failed to open module: libgfapi.so.0: cannot open shared object file: No such file or directory

is harmless; it just means that the optional GlusterFS dependency is not installed.

Kernel panic on LIVE-environments

If you start a live environment (or, more generally, boot a system) you may encounter this:

[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown block(0,0)

or some other boot-hindering error (e.g. cannot unpack initramfs, cannot start service foo). Try starting the VM with the -m VALUE switch and an appropriate amount of RAM; if the RAM is too low, you will probably encounter similar issues.


See also