[[Category:Emulation]]<br />
[[Category:Hypervisors]]<br />
[[de:Qemu]]<br />
[[es:QEMU]]<br />
[[fr:Qemu]]<br />
[[ja:QEMU]]<br />
[[ru:QEMU]]<br />
[[zh-hans:QEMU]]<br />
[[zh-hant:QEMU]]<br />
{{Related articles start}}<br />
{{Related|:Category:Hypervisors}}<br />
{{Related|Libvirt}}<br />
{{Related|QEMU/Guest graphics acceleration}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related articles end}}<br />
<br />
According to the [http://wiki.qemu.org/Main_Page QEMU about page], "QEMU is a generic and open source machine emulator and virtualizer."<br />
<br />
When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your x86 PC). By using dynamic translation, it achieves very good performance.<br />
<br />
QEMU can use other hypervisors like [[Xen]] or [[KVM]] to use CPU extensions ([[Wikipedia:Hardware-assisted virtualization|HVM]]) for virtualization. When used as a virtualizer, QEMU achieves near-native performance by executing the guest code directly on the host CPU.<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|qemu}} package (or {{Pkg|qemu-headless}} for the version without GUI) and, depending on your needs, any of the optional packages below:<br />
<br />
* {{Pkg|qemu-arch-extra}} - extra architectures support<br />
* {{Pkg|qemu-block-gluster}} - [[Glusterfs]] block support<br />
* {{Pkg|qemu-block-iscsi}} - [[iSCSI]] block support<br />
* {{Pkg|qemu-block-rbd}} - RBD block support <br />
* {{Pkg|samba}} - [[Samba|SMB/CIFS]] server support<br />
<br />
== Graphical front-ends for QEMU ==<br />
<br />
Unlike other virtualization programs such as [[VirtualBox]] and [[VMware]], QEMU does not provide a GUI to manage virtual machines (other than the window that appears when running a virtual machine), nor does it provide a way to create persistent virtual machines with saved settings. All parameters to run a virtual machine must be specified on the command line at every launch, unless you have created a custom script to start your virtual machine(s).<br />
<br />
[[Libvirt]] provides a convenient way to manage QEMU virtual machines. See [[Libvirt#Client|list of libvirt clients]] for available front-ends.<br />
<br />
Other GUI front-ends for QEMU:<br />
<br />
* {{App|AQEMU|QEMU GUI written in Qt5.|https://github.com/tobimensch/aqemu|{{AUR|aqemu}}}}<br />
<br />
== Creating new virtualized system ==<br />
<br />
=== Creating a hard disk image ===<br />
{{Accuracy|If I get the man page right the raw format only allocates the full size if the filesystem does not support "holes" or it is <br />
explicitly told to preallocate. See man qemu-img in section Notes.}} <br />
{{Tip|See the [https://en.wikibooks.org/wiki/QEMU/Images QEMU Wikibook] for more information on QEMU images.}}<br />
<br />
To run QEMU you will need a hard disk image, unless you are booting a live system from CD-ROM or the network (and not doing so to install an operating system to a hard disk image). A hard disk image is a file which stores the contents of the emulated hard disk.<br />
<br />
A hard disk image can be ''raw'', so that it is literally byte-by-byte the same as what the guest sees, and will always use the full capacity of the guest hard drive on the host. This method provides the least I/O overhead, but can waste a lot of space, as not-used space on the guest cannot be used on the host.<br />
<br />
Alternatively, the hard disk image can be in a format such as ''qcow2'' which only allocates space to the image file when the guest operating system actually writes to those sectors on its virtual hard disk. The image appears as the full size to the guest operating system, even though it may take up only a very small amount of space on the host system. This image format also supports QEMU snapshotting functionality (see [[#Creating and managing snapshots via the monitor console]] for details). However, using this format instead of ''raw'' will likely affect performance.<br />
<br />
QEMU provides the {{ic|qemu-img}} command to create hard disk images. For example, to create a 4 GB image in the ''raw'' format:<br />
<br />
$ qemu-img create -f raw ''image_file'' 4G<br />
<br />
You may use {{ic|-f qcow2}} to create a ''qcow2'' disk instead.<br />
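<br />
For example, the equivalent 4 GB image in the ''qcow2'' format:<br />
<br />
 $ qemu-img create -f qcow2 ''image_file'' 4G<br />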
<br />
{{Note|You can also simply create a ''raw'' image by creating a file of the needed size using {{ic|dd}} or {{ic|fallocate}}.}}<br />
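<br />
As a rough sketch of the latter (the file name and size are only examples), either of the following produces a file usable as a ''raw'' image:<br />
<br />
 $ fallocate -l 4G ''image_file''<br />
 $ dd if=/dev/zero of=''image_file'' bs=1M seek=4096 count=0<br />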
<br />
{{Warning|If you store the hard disk images on a [[Btrfs]] file system, you should consider disabling [[Btrfs#Copy-on-Write (CoW)|Copy-on-Write]] for the directory before creating any images.}}<br />
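<br />
For example, assuming the images will be kept in a hypothetical {{ic|~/vm-images}} directory, Copy-on-Write can be disabled for files subsequently created there by setting the No_COW attribute on the (empty) directory:<br />
<br />
 $ chattr +C ~/vm-images<br />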
<br />
==== Overlay storage images ====<br />
<br />
You can create a storage image once (the 'backing' image) and have QEMU keep mutations to this image in an overlay image. This allows you to revert to a previous state of this storage image. You could revert by creating a new overlay image at the time you wish to revert, based on the original backing image.<br />
<br />
To create an overlay image, issue a command like:<br />
<br />
$ qemu-img create -o backing_file=''img1.raw'',backing_fmt=''raw'' -f ''qcow2'' ''img1.cow''<br />
<br />
After that you can run your QEMU VM as usual (see [[#Running virtualized system]]):<br />
<br />
$ qemu-system-x86_64 ''img1.cow''<br />
<br />
The backing image will then be left intact and mutations to this storage will be recorded in the overlay image file.<br />
<br />
When the path to the backing image changes, repair is required.<br />
<br />
{{Warning|The backing image's absolute filesystem path is stored in the (binary) overlay image file. Changing the backing image's path requires some effort.}}<br />
<br />
Make sure that the original backing image's path still leads to this image. If necessary, make a symbolic link at the original path to the new path. Then issue a command like:<br />
<br />
$ qemu-img rebase -b ''/new/img1.raw'' ''/new/img1.cow''<br />
<br />
At your discretion, you may alternatively perform an 'unsafe' rebase where the old path to the backing image is not checked:<br />
<br />
$ qemu-img rebase -u -b ''/new/img1.raw'' ''/new/img1.cow''<br />
<br />
==== Resizing an image ====<br />
<br />
{{Warning|Resizing an image containing an NTFS boot file system could make the operating system installed on it unbootable. It is recommended to create a backup first.}}<br />
<br />
The {{ic|qemu-img}} executable has the {{ic|resize}} option, which enables easy resizing of a hard drive image. It works for both ''raw'' and ''qcow2''. For example, to increase image space by 10 GB, run:<br />
<br />
$ qemu-img resize ''disk_image'' +10G<br />
<br />
After enlarging the disk image, you must use file system and partitioning tools inside the virtual machine to actually begin using the new space. When shrinking a disk image, you must '''first reduce the allocated file systems and partition sizes''' using the file system and partitioning tools inside the virtual machine and then shrink the disk image accordingly, otherwise shrinking the disk image will result in data loss! For a Windows guest, open the "create and format hard disk partitions" control panel.<br />
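<br />
For example, after the file systems and partitions inside the guest have been reduced accordingly, the image itself could be shrunk with something like the following (note that newer versions of ''qemu-img'' require the {{ic|--shrink}} flag as a safety check when reducing an image):<br />
<br />
 $ qemu-img resize --shrink ''disk_image'' -10G<br />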
<br />
==== Converting an image ====<br />
<br />
You can convert an image to other formats using {{ic|qemu-img convert}}. This example shows how to convert a ''raw'' image to ''qcow2'':<br />
<br />
$ qemu-img convert -f raw -O qcow2 ''input''.img ''output''.qcow2<br />
<br />
This will not remove the original input file.<br />
<br />
=== Preparing the installation media ===<br />
<br />
To install an operating system into your disk image, you need the installation medium (e.g. optical disk, USB-drive, or ISO image) for the operating system. The installation medium should not be mounted because QEMU accesses the media directly.<br />
<br />
{{Tip|If using an optical disk, it is a good idea to first dump the media to a file because this both improves performance and does not require you to have direct access to the devices (that is, you can run QEMU as a regular user without having to change access permissions on the media's device file). For example, if the CD-ROM device node is named {{ic|/dev/cdrom}}, you can dump it to a file with the command: {{bc|1=$ dd if=/dev/cdrom of=''cd_image.iso'' bs=4k}}}}<br />
<br />
=== Installing the operating system===<br />
<br />
This is the first time you will need to start the emulator. To install the operating system on the disk image, you must attach both the disk image and the installation media to the virtual machine, and have it boot from the installation media.<br />
<br />
For example, to install from a bootable ISO file as CD-ROM onto a raw disk image:<br />
<br />
$ qemu-system-x86_64 -cdrom ''iso_image'' -boot order=d -drive file=''disk_image'',format=raw<br />
<br />
See {{man|1|qemu}} for more information about loading other media types (such as floppy, disk images or physical drives) and [[#Running virtualized system]] for other useful options.<br />
<br />
After the operating system has finished installing, the QEMU image can be booted directly (see [[#Running virtualized system]]).<br />
<br />
{{Note|By default only 128 MB of memory is assigned to the machine. The amount of memory can be adjusted with the {{ic|-m}} switch, for example {{ic|-m 512M}} or {{ic|-m 2G}}.}}<br />
<br />
{{Tip|<br />
* Instead of specifying {{ic|1=-boot order=x}}, some users may feel more comfortable using a boot menu: {{ic|1=-boot menu=on}}, at least during configuration and experimentation.<br />
* If you need to replace floppies or CDs as part of the installation process, you can use the QEMU machine monitor (press {{ic|Ctrl+Alt+2}} in the virtual machine's window) to remove and attach storage devices to a virtual machine. Type {{ic|info block}} to see the block devices, and use the {{ic|change}} command to swap out a device. Press {{ic|Ctrl+Alt+1}} to go back to the virtual machine.}}<br />
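<br />
As a sketch of the tip above, assuming {{ic|info block}} reports a CD-ROM device named {{ic|ide1-cd0}} (the actual name depends on the machine configuration), a monitor session to swap the inserted ISO could look like:<br />
<br />
 (qemu) info block<br />
 (qemu) change ide1-cd0 ''/path/to/another.iso''<br />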
<br />
== Running virtualized system ==<br />
<br />
{{ic|qemu-system-*}} binaries (for example {{ic|qemu-system-i386}} or {{ic|qemu-system-x86_64}}, depending on guest's architecture) are used to run the virtualized guest. The usage is:<br />
<br />
$ qemu-system-x86_64 ''options'' ''disk_image''<br />
<br />
Options are the same for all {{ic|qemu-system-*}} binaries, see {{man|1|qemu}} for documentation of all options.<br />
<br />
By default, QEMU will show the virtual machine's video output in a window. One thing to keep in mind: when you click inside the QEMU window, the mouse pointer is grabbed. To release it, press {{ic|Ctrl+Alt+g}}.<br />
<br />
{{Warning|QEMU should never be run as root. If you must launch it in a script as root, you should use the {{ic|-runas}} option to make QEMU drop root privileges.}}<br />
<br />
=== Enabling KVM ===<br />
<br />
KVM must be supported by your processor and kernel, and necessary [[kernel modules]] must be loaded. See [[KVM]] for more information.<br />
<br />
To start QEMU in KVM mode, append {{ic|-enable-kvm}} to the additional start options. To check if KVM is enabled for a running VM, enter the [[#QEMU monitor]] using {{ic|Ctrl+Alt+2}}, and type {{ic|info kvm}}.<br />
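<br />
For example, a minimal KVM-enabled invocation could look like:<br />
<br />
 $ qemu-system-x86_64 -enable-kvm -m 2G ''disk_image''<br />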
<br />
{{Note|<br />
* The argument {{ic|1=accel=kvm}} of the {{ic|-machine}} option is equivalent to the {{ic|-enable-kvm}} option.<br />
* If you start your VM with a GUI tool and experience very bad performance, you should check for proper KVM support, as QEMU may be falling back to software emulation.<br />
* KVM needs to be enabled in order to start Windows 7 and Windows 8 properly without a ''blue screen''.<br />
}}<br />
<br />
=== Enabling IOMMU (Intel VT-d/AMD-Vi) support ===<br />
<br />
First enable IOMMU, see [[PCI passthrough via OVMF#Setting up IOMMU]].<br />
<br />
Add {{ic|-device intel-iommu}} to create the IOMMU device:<br />
<br />
$ qemu-system-x86_64 '''-enable-kvm -machine q35 -device intel-iommu''' -cpu host ..<br />
<br />
{{Note|<br />
On Intel CPU-based systems, creating an IOMMU device in a QEMU guest with {{ic|-device intel-iommu}} will disable PCI passthrough with an error like: {{bc|Device at bus pcie.0 addr 09.0 requires iommu notifier which is currently not supported by intel-iommu emulation}} While adding the kernel parameter {{ic|1=intel_iommu=on}} is still needed for remapping IO (e.g. [[PCI passthrough via OVMF#Isolating the GPU|PCI passthrough with vfio-pci]]), {{ic|-device intel-iommu}} should not be set if PCI passthrough is required.<br />
}}<br />
<br />
== Sharing data between host and guest ==<br />
<br />
=== Network ===<br />
<br />
Data can be shared between the host and guest OS using any network protocol that can transfer files, such as [[NFS]], [[SMB]], [[Wikipedia:Network Block Device|NBD]], HTTP, [[Very Secure FTP Daemon|FTP]], or [[SSH]], provided that you have set up the network appropriately and enabled the appropriate services.<br />
<br />
The default user-mode networking allows the guest to access the host OS at the IP address 10.0.2.2. Any servers that you are running on your host OS, such as a SSH server or SMB server, will be accessible at this IP address. So on the guests, you can mount directories exported on the host via [[SMB]] or [[NFS]], or you can access the host's HTTP server, etc.<br />
It will not be possible for the host OS to access servers running on the guest OS, but this can be done with other network configurations (see [[#Tap networking with QEMU]]).<br />
<br />
=== QEMU's port forwarding ===<br />
<br />
QEMU can forward ports from the host to the guest to enable e.g. connecting from the host to a SSH-server running on the guest.<br />
<br />
For example, to bind port 10022 on the host with port 22 (SSH) on the guest, start QEMU with a command like:<br />
<br />
$ qemu-system-x86_64 ''disk_image'' -nic user,hostfwd=''tcp::10022-:22''<br />
<br />
Make sure ''sshd'' is running on the guest and connect with:<br />
<br />
$ ssh ''guest-user''@localhost -p10022<br />
<br />
=== QEMU's built-in SMB server ===<br />
<br />
QEMU's documentation says it has a "built-in" SMB server, but actually it just starts up [[Samba]] with an automatically generated {{ic|smb.conf}} file located in {{ic|/tmp/qemu-smb.''random_string''}} and makes it accessible to the guest at a different IP address (10.0.2.4 by default). This only works for user networking, and is useful when you do not want to start the normal [[Samba]] service on the host, which the guest can also access if you have set up shares on it.<br />
<br />
Only a single directory can be set as shared with the option {{ic|1=smb=}}. Adding more directories (even while the virtual machine is running) would be as easy as creating symbolic links in the shared directory if QEMU configured SMB to follow symbolic links, but it does not do so. However, the configuration of the running SMB server can be changed as described below.<br />
<br />
To enable this feature, start QEMU with a command like:<br />
<br />
$ qemu-system-x86_64 ''disk_image'' -net nic -net user,smb=''shared_dir_path''<br />
<br />
where {{ic|''shared_dir_path''}} is a directory that you want to share between the guest and host.<br />
<br />
Then, in the guest, you will be able to access the shared directory on the host 10.0.2.4 with the share name "qemu". For example, in Windows Explorer you would go to {{ic|\\10.0.2.4\qemu}}.<br />
<br />
{{Note|<br />
* If you are using sharing options multiple times like {{ic|1=-net user,smb=''shared_dir_path1'' -net user,smb=''shared_dir_path2''}} or {{ic|1=-net user,smb=''shared_dir_path1'',smb=''shared_dir_path2''}} then it will share only the last defined one.<br />
* If you cannot access the shared folder and the guest system is Windows, check that the [http://ecross.mvps.org/howto/enable-netbios-over-tcp-ip-with-windows.htm NetBIOS protocol is enabled] and that a firewall does not block [http://technet.microsoft.com/en-us/library/cc940063.aspx ports] used by the NetBIOS protocol.<br />
* If you cannot access the shared folder and the guest system is Windows 10 Enterprise or Education or Windows Server 2016, [https://support.microsoft.com/en-us/help/4046019 enable guest access].<br />
}}<br />
<br />
One way to share multiple directories and to add or remove them while the virtual machine is running, is to share an empty directory and create/remove symbolic links to the directories in the shared directory. For this to work, the configuration of the running SMB server can be changed with the following script, which also allows the execution of files on the guest that are not set executable on the host:<br />
<br />
#!/bin/bash<br />
eval $(ps h -C smbd -o pid,args | grep /tmp/qemu-smb | gawk '{print "pid="$1";conf="$6}')<br />
echo "[global]<br />
allow insecure wide links = yes<br />
[qemu]<br />
follow symlinks = yes<br />
wide links = yes<br />
acl allow execute always = yes" >> $conf<br />
# in case the change is not detected automatically:<br />
smbcontrol --configfile=$conf $pid reload-config<br />
<br />
This can be applied to the running server started by qemu only after the guest has connected to the network drive the first time. An alternative to this method is to add additional shares to the configuration file like so:<br />
<br />
echo "[''myshare'']<br />
path=''another_path''<br />
read only=no<br />
guest ok=yes<br />
force user=''username''" >> $conf<br />
<br />
This share will be available on the guest as {{ic|\\10.0.2.4\''myshare''}}.<br />
<br />
=== Using filesystem passthrough and VirtFS ===<br />
<br />
See the [https://wiki.qemu.org/Documentation/9psetup QEMU documentation].<br />
<br />
=== Mounting a partition of the guest on the host ===<br />
It can be useful to mount a drive image under the host system as a way to transfer files in and out of the guest. This should be done when the virtual machine is not running.<br />
<br />
The procedure to mount the drive on the host depends on the type of the QEMU image, ''raw'' or ''qcow2''. The steps for both formats are detailed below in [[#Mounting a partition from a raw image]] and [[#Mounting a partition from a qcow2 image]]. For the full documentation see [http://en.wikibooks.org/wiki/QEMU/Images#Mounting_an_image_on_the_host QEMU/Mounting an image on the host].<br />
{{Warning|You must make sure to unmount the partitions before running the virtual machine again. Otherwise, data corruption is very likely to occur.}}<br />
<br />
==== Mounting a partition from a raw image ====<br />
<br />
It is possible to mount partitions that are inside a raw disk image file by setting them up as loopback devices.<br />
<br />
===== With manually specifying byte offset =====<br />
<br />
One way to mount a disk image partition is to mount the disk image at a certain offset using a command like the following:<br />
<br />
# mount -o loop,offset=32256 ''disk_image'' ''mountpoint''<br />
<br />
The {{ic|1=offset=32256}} option is actually passed to the {{ic|losetup}} program to set up a loopback device that starts at byte offset 32256 of the file and continues to the end. This loopback device is then mounted. You may also use the {{ic|sizelimit}} option to specify the exact size of the partition, but this is usually unnecessary.<br />
<br />
Depending on your disk image, the needed partition may not start at offset 32256. Run {{ic|fdisk -l ''disk_image''}} to see the partitions in the image. fdisk gives the start and end offsets in 512-byte sectors, so multiply by 512 to get the correct offset to pass to {{ic|mount}}.<br />
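<br />
For example, if {{ic|fdisk -l ''disk_image''}} shows the desired partition starting at sector 2048, the offset is 2048 * 512 = 1048576 and the mount command becomes:<br />
<br />
 # mount -o loop,offset=1048576 ''disk_image'' ''mountpoint''<br />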
<br />
===== With loop module autodetecting partitions =====<br />
<br />
The Linux loop driver actually supports partitions in loopback devices, but it is disabled by default. To enable it, do the following:<br />
<br />
* Get rid of all your loopback devices (unmount all mounted images, etc.).<br />
* [[Kernel_modules#Manual_module_handling|Unload]] the {{ic|loop}} kernel module, and load it with the {{ic|1=max_part=15}} parameter set. Additionally, the maximum number of loop devices can be controlled with the {{ic|max_loop}} parameter.<br />
<br />
{{Tip|You can put an entry in {{ic|/etc/modprobe.d}} to load the loop module with {{ic|1=max_part=15}} every time, or you can put {{ic|1=loop.max_part=15}} on the kernel command-line, depending on whether you have the {{ic|loop.ko}} module built into your kernel or not.}}<br />
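<br />
For example, a minimal modprobe.d entry (the file name is arbitrary) could look like:<br />
<br />
{{hc|/etc/modprobe.d/loop.conf|2=<br />
options loop max_part=15<br />
}}<br />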
<br />
Set up your image as a loopback device:<br />
<br />
# losetup -f -P ''disk_image''<br />
<br />
Then, if the device created was {{ic|/dev/loop0}}, additional devices {{ic|/dev/loop0pX}} will have been automatically created, where X is the number of the partition. These partition loopback devices can be mounted directly. For example:<br />
<br />
# mount /dev/loop0p1 ''mountpoint''<br />
<br />
To mount the disk image with ''udisksctl'', see [[Udisks#Mount loop devices]].<br />
<br />
===== With kpartx =====<br />
<br />
'''kpartx''' from the {{Pkg|multipath-tools}} package can read a partition table on a device and create a new device for each partition. For example:<br />
# kpartx -a ''disk_image''<br />
<br />
This will set up the loopback device and create the necessary partition devices in {{ic|/dev/mapper/}}.<br />
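<br />
Assuming the image was attached as {{ic|loop0}}, the first partition could then be mounted, and everything detached again after use, with:<br />
<br />
 # mount /dev/mapper/loop0p1 ''mountpoint''<br />
 # umount ''mountpoint''<br />
 # kpartx -d ''disk_image''<br />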
<br />
==== Mounting a partition from a qcow2 image ====<br />
<br />
We will use {{ic|qemu-nbd}}, which lets us use the NBD (''network block device'') protocol to share the disk image.<br />
<br />
First, we need the ''nbd'' module loaded:<br />
<br />
# modprobe nbd max_part=16<br />
<br />
Then, we can share the disk and create the device entries:<br />
<br />
# qemu-nbd -c /dev/nbd0 ''/path/to/image.qcow2''<br />
<br />
Discover the partitions:<br />
<br />
# partprobe /dev/nbd0<br />
<br />
''fdisk'' can be used to get information regarding the different partitions in ''nbd0'':<br />
<br />
{{hc|# fdisk -l /dev/nbd0|2=<br />
Disk /dev/nbd0: 25.2 GiB, 27074281472 bytes, 52879456 sectors<br />
Units: sectors of 1 * 512 = 512 bytes<br />
Sector size (logical/physical): 512 bytes / 512 bytes<br />
I/O size (minimum/optimal): 512 bytes / 512 bytes<br />
Disklabel type: dos<br />
Disk identifier: 0xa6a4d542<br />
<br />
Device Boot Start End Sectors Size Id Type<br />
/dev/nbd0p1 * 2048 1026047 1024000 500M 7 HPFS/NTFS/exFAT<br />
/dev/nbd0p2 1026048 52877311 51851264 24.7G 7 HPFS/NTFS/exFAT}}<br />
<br />
Then mount any partition of the drive image, for example the partition 2:<br />
<br />
# mount /dev/nbd0'''p2''' ''mountpoint''<br />
<br />
After use, it is important to unmount the image and reverse the previous steps, i.e. unmount the partition and disconnect the NBD device:<br />
<br />
# umount ''mountpoint''<br />
# qemu-nbd -d /dev/nbd0<br />
<br />
=== Using any real partition as the single primary partition of a hard disk image ===<br />
<br />
Sometimes, you may wish to use one of your system partitions from within QEMU. Using a raw partition for a virtual machine will improve performance, as the read and write operations do not go through the file system layer on the physical host. Such a partition also provides a way to share data between the host and guest.<br />
<br />
In Arch Linux, device files for raw partitions are, by default, owned by ''root'' and the ''disk'' group. If you would like to have a non-root user be able to read and write to a raw partition, you need to change the owner of the partition's device file to that user.<br />
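<br />
For example, for a hypothetical partition {{ic|/dev/sdc1}} and user ''user'', this could be done with (note that the change does not persist across reboots):<br />
<br />
 # chown ''user'': /dev/sdc1<br />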
<br />
{{Warning|<br />
* Although it is possible, it is not recommended to allow virtual machines to alter critical data on the host system, such as the root partition.<br />
* You must not mount a file system on a partition read-write on both the host and the guest at the same time. Otherwise, data corruption will result.<br />
}}<br />
<br />
After doing so, you can attach the partition to a QEMU virtual machine as a virtual disk.<br />
<br />
However, things are a little more complicated if you want to have the ''entire'' virtual machine contained in a partition. In that case, there would be no disk image file to actually boot the virtual machine since you cannot install a bootloader to a partition that is itself formatted as a file system and not as a partitioned device with an MBR. Such a virtual machine can be booted either by specifying the [[kernel]] and [[initrd]] manually, or by simulating a disk with an MBR by using the [https://www.kernel.org/doc/Documentation/device-mapper/ Device-mapper], linear [[RAID]], or a [https://www.kernel.org/doc/Documentation/blockdev/nbd.txt Linux Network Block Device].<br />
<br />
==== By specifying kernel and initrd manually ====<br />
<br />
QEMU supports loading [[Kernels|Linux kernels]] and [[initramfs|init ramdisks]] directly, thereby circumventing bootloaders such as [[GRUB]]. It then can be launched with the physical partition containing the root file system as the virtual disk, which will not appear to be partitioned. This is done by issuing a command similar to the following:<br />
<br />
{{Note|In this example, it is the '''host's''' images that are being used, not the guest's. If you wish to use the guest's images, either mount {{ic|/dev/sda3}} read-only (to protect the file system from the host) and specify the {{ic|/full/path/to/images}} or use some kexec hackery in the guest to reload the guest's kernel (extends boot time). }}<br />
<br />
$ qemu-system-x86_64 -kernel /boot/vmlinuz-linux -initrd /boot/initramfs-linux.img -append root=/dev/sda /dev/sda3<br />
<br />
In the above example, the physical partition being used for the guest's root file system is {{ic|/dev/sda3}} on the host, but it shows up as {{ic|/dev/sda}} on the guest.<br />
<br />
You may, of course, specify any kernel and initrd that you want, and not just the ones that come with Arch Linux.<br />
<br />
When there are multiple [[kernel parameters]] to be passed to the {{ic|-append}} option, they need to be quoted using single or double quotes. For example:<br />
<br />
... -append 'root=/dev/sda1 console=ttyS0'<br />
<br />
==== Simulate a virtual disk with MBR ====<br />
<br />
A more complicated way to have a virtual machine use a physical partition, while keeping that partition formatted as a file system and not just having the guest partition the partition as if it were a disk, is to simulate an MBR for it so that it can boot using a bootloader such as GRUB.<br />
<br />
For the following, suppose you have a plain, unmounted {{ic|/dev/hda''N''}} partition with some file system on it you wish to make part of a QEMU disk image. The trick is to dynamically prepend a master boot record (MBR) to the real partition you wish to embed in a QEMU raw disk image. More generally, the partition can be any part of a larger simulated disk, in particular a block device that simulates the original physical disk but only exposes {{ic|/dev/hda''N''}} to the virtual machine.<br />
<br />
A virtual disk of this type can be represented by a VMDK file that contains references to (a copy of) the MBR and the partition, but QEMU does not support this VMDK format. For instance, a virtual disk [https://www.virtualbox.org/manual/ch09.html#rawdisk created by]<br />
<br />
$ VBoxManage internalcommands createrawvmdk -filename ''/path/to/file.vmdk'' -rawdisk /dev/hda<br />
<br />
will be rejected by QEMU with the error message<br />
<br />
Unsupported image type 'partitionedDevice'<br />
<br />
Note that {{ic|VBoxManage}} creates two files, {{ic|''file.vmdk''}} and {{ic|''file-pt.vmdk''}}, the latter being a copy of the MBR, to which the text file {{ic|file.vmdk}} points. Read operations outside the target partition or the MBR would give zeros, while written data would be discarded.<br />
<br />
===== Device Mapper =====<br />
<br />
A method that is similar to the use of a VMDK descriptor file uses the device mapper to prepend a loop device attached to the MBR file to the target partition. In case we do not need our virtual disk to have the same size as the original, we first create a file to hold the MBR:<br />
<br />
$ dd if=/dev/zero of=''/path/to/mbr'' count=2048<br />
<br />
Here, a 1 MB (2048 * 512 bytes) file is created in accordance with partition alignment policies used by modern disk partitioning tools. For compatibility with older partitioning software, 63 sectors instead of 2048 might be required. The MBR only needs a single 512 bytes block, the additional free space can be used for a BIOS boot partition and, in the case of a hybrid partitioning scheme, for a GUID Partition Table. Then, we attach a loop device to the MBR file:<br />
<br />
# losetup --show -f ''/path/to/mbr''<br />
/dev/loop0<br />
<br />
In this example, the resulting device is {{ic|/dev/loop0}}. The device mapper is now used to join the MBR and the partition:<br />
<br />
# echo "0 2048 linear /dev/loop0 0<br />
2048 `blockdev --getsz /dev/hda''N''` linear /dev/hda''N'' 0" | dmsetup create qemu<br />
<br />
The resulting {{ic|/dev/mapper/qemu}} is what we will use as a QEMU raw disk image. Additional steps are required to create a partition table (see the section that describes the use of a linear RAID for an example) and boot loader code on the virtual disk (which will be stored in {{ic|''/path/to/mbr''}}).<br />
<br />
The following setup is an example where the position of {{ic|/dev/hda''N''}} on the virtual disk is to be the same as on the physical disk and the rest of the disk is hidden, except for the MBR, which is provided as a copy:<br />
<br />
# dd if=/dev/hda count=1 of=''/path/to/mbr''<br />
# loop=`losetup --show -f ''/path/to/mbr''`<br />
# start=`blockdev --report /dev/hda''N'' | tail -1 | awk '{print $5}'`<br />
# size=`blockdev --getsz /dev/hda''N''`<br />
# disksize=`blockdev --getsz /dev/hda`<br />
# echo "0 1 linear $loop 0<br />
1 $((start-1)) zero<br />
$start $size linear /dev/hda''N'' 0<br />
$((start+size)) $((disksize-start-size)) zero" | dmsetup create qemu<br />
<br />
The table provided as standard input to {{ic|dmsetup}} has a similar format as the table in a VDMK descriptor file produced by {{ic|VBoxManage}} and can alternatively be loaded from a file with {{ic|dmsetup create qemu --table ''table_file''}}. To the virtual machine, only {{ic|/dev/hda''N''}} is accessible, while the rest of the hard disk reads as zeros and discards written data, except for the first sector. We can print the table for {{ic|/dev/mapper/qemu}} with {{ic|dmsetup table qemu}} (use {{ic|udevadm info -rq name /sys/dev/block/''major'':''minor''}} to translate {{ic|''major'':''minor''}} to the corresponding {{ic|/dev/''blockdevice''}} name). Use {{ic|dmsetup remove qemu}} and {{ic|losetup -d $loop}} to delete the created devices.<br />
<br />
A situation where this example would be useful is an existing Windows XP installation in a multi-boot configuration and maybe a hybrid partitioning scheme (on the physical hardware, Windows XP could be the only operating system that uses the MBR partition table, while more modern operating systems installed on the same computer could use the GUID Partition Table). Windows XP supports hardware profiles, so that the same installation can be used with different hardware configurations alternately (in this case bare metal vs. virtual) with Windows needing to install drivers for newly detected hardware only once for every profile. Note that in this example the boot loader code in the copied MBR needs to be updated to directly load Windows XP from {{ic|/dev/hda''N''}} instead of trying to start the multi-boot capable boot loader (like GRUB) present in the original system. Alternatively, a copy of the boot partition containing the boot loader installation can be included in the virtual disk the same way as the MBR.<br />
<br />
===== Linear RAID =====<br />
<br />
You can also do this using software [[RAID]] in linear mode (you need the {{ic|linear.ko}} kernel driver) and a loopback device: <br />
<br />
First, you create some small file to hold the MBR:<br />
<br />
$ dd if=/dev/zero of=''/path/to/mbr'' count=32<br />
<br />
Here, a 16 KB (32 * 512 bytes) file is created. It is important not to make it too small (even if the MBR only needs a single 512-byte block), since the smaller it is, the smaller the chunk size of the software RAID device will have to be, which could have an impact on performance. Then, set up a loopback device to the MBR file:<br />
<br />
# losetup -f ''/path/to/mbr''<br />
<br />
Let us assume the resulting device is {{ic|/dev/loop0}}, because no other loopback devices were already in use. The next step is to create the "merged" MBR + {{ic|/dev/hda''N''}} disk image using software RAID:<br />
<br />
# modprobe linear<br />
# mdadm --build --verbose /dev/md0 --chunk=16 --level=linear --raid-devices=2 /dev/loop0 /dev/hda''N''<br />
<br />
The resulting {{ic|/dev/md0}} is what you will use as a QEMU raw disk image (do not forget to set the permissions so that the emulator can access it). The last (and somewhat tricky) step is to set the disk configuration (disk geometry and partition table) so that the primary partition start point in the MBR matches that of {{ic|/dev/hda''N''}} inside {{ic|/dev/md0}} (an offset of exactly 16 * 512 = 16384 bytes in this example). Do this using {{ic|fdisk}} on the host machine, not in the emulator: the default raw disk detection routine from QEMU often results in non-kilobyte-roundable offsets (such as 31.5 KB, as in the previous section) that cannot be managed by the software RAID code. Hence, from the host:<br />
<br />
# fdisk /dev/md0<br />
<br />
Press {{ic|X}} to enter the expert menu. Set number of 's'ectors per track so that the size of one cylinder matches the size of your MBR file. For two heads and a sector size of 512, the number of sectors per track should be 16, so we get cylinders of size 2x16x512=16k.<br />
<br />
Now, press {{ic|R}} to return to the main menu.<br />
<br />
Press {{ic|P}} and check that the cylinder size is now 16k.<br />
<br />
Now, create a single primary partition corresponding to {{ic|/dev/hda''N''}}. It should start at cylinder 2 and end at the end of the disk (note that the number of cylinders now differs from what it was when you entered fdisk).<br />
<br />
Finally, 'w'rite the result to the file: you are done. You now have a partition you can mount directly from your host, as well as part of a QEMU disk image:<br />
<br />
$ qemu-system-x86_64 -hdc /dev/md0 ''[...]''<br />
<br />
You can, of course, safely set any bootloader on this disk image using QEMU, provided the original {{ic|/dev/hda''N''}} partition contains the necessary tools.<br />
<br />
===== Network Block Device =====<br />
<br />
Instead of the methods described above, you may use {{ic|nbd-server}} (from the {{pkg|nbd}} package) to create an MBR wrapper for QEMU.<br />
<br />
Assuming you have already set up your MBR wrapper file like above, rename it to {{ic|wrapper.img.0}}. Then create a symbolic link named {{ic|wrapper.img.1}} in the same directory, pointing to your partition. Then put the following script in the same directory:<br />
<br />
#!/bin/sh<br />
dir="$(realpath "$(dirname "$0")")"<br />
cat >wrapper.conf <<EOF<br />
[generic]<br />
allowlist = true<br />
listenaddr = 127.713705<br />
port = 10809<br />
<br />
[wrap]<br />
exportname = $dir/wrapper.img<br />
multifile = true<br />
EOF<br />
<br />
nbd-server \<br />
-C wrapper.conf \<br />
-p wrapper.pid \<br />
"$@"<br />
<br />
The {{ic|.0}} and {{ic|.1}} suffixes are essential; the rest can be changed. After running the above script (which you may need to do as root to make sure nbd-server is able to access the partition), you can launch QEMU with:<br />
<br />
qemu-system-x86_64 -drive file=nbd:127.713705:10809:exportname=wrap ''[...]''<br />
<br />
== Networking ==<br />
<br />
{{Style|Network topologies (sections [[#Host-only networking]], [[#Internal networking]] and info spread out across other sections) should not be described alongside the various virtual interfaces implementations, such as [[#User-mode networking]], [[#Tap networking with QEMU]], [[#Networking with VDE2]].}}<br />
<br />
The performance of virtual networking should be better with tap devices and bridges than with user-mode networking or vde because tap devices and bridges are implemented in-kernel.<br />
<br />
In addition, networking performance can be improved by assigning virtual machines a [http://wiki.libvirt.org/page/Virtio virtio] network device rather than the default emulation of an e1000 NIC. See [[#Installing virtio drivers]] for more information.<br />
<br />
=== Link-level address caveat ===<br />
<br />
By giving the {{ic|-net nic}} argument to QEMU, it will, by default, assign a virtual machine a network interface with the link-level address {{ic|52:54:00:12:34:56}}. However, when using bridged networking with multiple virtual machines, it is essential that each virtual machine has a unique link-level (MAC) address on the virtual machine side of the tap device. Otherwise, the bridge will not work correctly, because it will receive packets from multiple sources that have the same link-level address. This problem occurs even if the tap devices themselves have unique link-level addresses because the source link-level address is not rewritten as packets pass through the tap device.<br />
<br />
Make sure that each virtual machine has a unique link-level address, but it should always start with {{ic|52:54:}}. Use the following option, replacing each ''X'' with an arbitrary hexadecimal digit:<br />
<br />
$ qemu-system-x86_64 -net nic,macaddr=52:54:''XX:XX:XX:XX'' -net vde ''disk_image''<br />
<br />
Generating unique link-level addresses can be done in several ways:<br />
<br />
<ol><br />
<li>Manually specify a unique link-level address for each NIC. The benefit is that the DHCP server will assign the same IP address each time the virtual machine is run, but it is impractical for a large number of virtual machines.<br />
</li><br />
<li>Generate a random link-level address each time the virtual machine is run. There is practically zero probability of collisions, but the downside is that the DHCP server will assign a different IP address each time. You can use the following command in a script to generate a random link-level address in a {{ic|macaddr}} variable:<br />
<br />
{{bc|1=<br />
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))<br />
qemu-system-x86_64 -net nic,macaddr="$macaddr" -net vde ''disk_image''<br />
}}<br />
<br />
</li><br />
<li>Use the following script {{ic|qemu-mac-hasher.py}} to generate the link-level address from the virtual machine name using a hashing function. Given that the names of virtual machines are unique, this method combines the benefits of the aforementioned methods: it generates the same link-level address each time the script is run, yet it preserves the practically zero probability of collisions.<br />
<br />
{{hc|qemu-mac-hasher.py|<nowiki><br />
#!/usr/bin/env python<br />
<br />
import sys<br />
import zlib<br />
<br />
if len(sys.argv) != 2:<br />
print("usage: %s <VM Name>" % sys.argv[0])<br />
sys.exit(1)<br />
<br />
crc = zlib.crc32(sys.argv[1].encode("utf-8")) & 0xffffffff<br />
crc = str(hex(crc))[2:]<br />
print("52:54:%s%s:%s%s:%s%s:%s%s" % tuple(crc))<br />
</nowiki>}}<br />
<br />
In a script, you can use for example:<br />
<br />
vm_name="''VM Name''"<br />
qemu-system-x86_64 -name "$vm_name" -net nic,macaddr=$(qemu-mac-hasher.py "$vm_name") -net vde ''disk_image''<br />
</li><br />
</ol><br />
<br />
=== User-mode networking ===<br />
<br />
By default, without any {{ic|-netdev}} arguments, QEMU will use user-mode networking with a built-in DHCP server. Your virtual machines will be assigned an IP address when they run their DHCP client, and they will be able to access the physical host's network through IP masquerading done by QEMU.<br />
<br />
{{warning|This only works with the TCP and UDP protocols, so ICMP, including {{ic|ping}}, will not work. Do not use {{ic|ping}} to test network connectivity. To make ping work in the guest see [[Sysctl#Allow unprivileged users to create IPPROTO_ICMP sockets]].}}<br />
<br />
This default configuration allows your virtual machines to easily access the Internet, provided that the host is connected to it, but the virtual machines will not be directly visible on the external network, nor will virtual machines be able to talk to each other if you start up more than one concurrently.<br />
<br />
QEMU's user-mode networking can offer more capabilities such as built-in TFTP or SMB servers, redirecting host ports to the guest (for example to allow SSH connections to the guest) or attaching guests to VLANs so that they can talk to each other. See the QEMU documentation on the {{ic|-net user}} flag for more details.<br />
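<br />
As a sketch, several of these options can be combined on a single {{ic|-nic user}} argument, for example a built-in TFTP server plus an SSH port forward (the path and ports are placeholders):<br />
<br />
 $ qemu-system-x86_64 ''disk_image'' -nic user,tftp=''/path/to/tftp_root'',hostfwd=tcp::10022-:22<br />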
<br />
However, user-mode networking has limitations in both utility and performance. More advanced network configurations require the use of tap devices or other methods.<br />
<br />
{{Note|If the host system uses [[systemd-networkd]], make sure to symlink the {{ic|/etc/resolv.conf}} file as described in [[systemd-networkd#Required services and setup]], otherwise the DNS lookup in the guest system will not work.}}<br />
<br />
=== Tap networking with QEMU ===<br />
<br />
[[wikipedia:TUN/TAP|Tap devices]] are a Linux kernel feature that allows you to create virtual network interfaces that appear as real network interfaces. Packets sent to a tap interface are delivered to a userspace program, such as QEMU, that has bound itself to the interface.<br />
<br />
QEMU can use tap networking for a virtual machine so that packets sent to the tap interface will be sent to the virtual machine and appear as coming from a network interface (usually an Ethernet interface) in the virtual machine. Conversely, everything that the virtual machine sends through its network interface will appear on the tap interface.<br />
<br />
Tap devices are supported by the Linux bridge drivers, so it is possible to bridge together tap devices with each other and possibly with other host interfaces such as {{ic|eth0}}. This is desirable if you want your virtual machines to be able to talk to each other, or if you want other machines on your LAN to be able to talk to the virtual machines.<br />
<br />
{{Warning|If you bridge together a tap device and some host interface, such as {{ic|eth0}}, your virtual machines will appear directly on the external network, which will expose them to possible attack. Depending on what resources your virtual machines have access to, you may need to take all the [[Firewalls|precautions]] you normally would take in securing a computer to secure your virtual machines. If the risk is too great, the virtual machines have few resources, or you set up multiple virtual machines, a better solution might be to use [[#Host-only networking|host-only networking]] and set up NAT. In this case you only need one firewall on the host instead of multiple firewalls for each guest.}}<br />
<br />
As indicated in the user-mode networking section, tap devices offer higher networking performance than user-mode. If the guest OS supports the virtio network driver, the networking performance will be increased considerably as well. Supposing the {{ic|tap0}} device is used, that the virtio driver is used on the guest, and that no scripts are used to help start/stop networking, the relevant part of the QEMU command looks like:<br />
<br />
-device virtio-net,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no<br />
<br />
If a tap device is already used with the virtio networking driver, one can even boost the networking performance by enabling vhost, like:<br />
<br />
-device virtio-net,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no,vhost=on<br />
<br />
See http://www.linux-kvm.com/content/how-maximize-virtio-net-performance-vhost-net for more information.<br />
<br />
==== Host-only networking ====<br />
<br />
If the bridge is given an IP address and traffic destined for it is allowed, but no real interface (e.g. {{ic|eth0}}) is connected to the bridge, then the virtual machines will be able to talk to each other and the host system. However, they will not be able to talk to anything on the external network, unless you set up IP masquerading on the physical host. This configuration is called ''host-only networking'' by other virtualization software such as [[VirtualBox]].<br />
<br />
{{Tip|<br />
* If you want to set up IP masquerading, e.g. NAT for virtual machines, see the [[Internet sharing#Enable NAT]] page.<br />
* See [[Network bridge]] for information on creating a bridge.<br />
* You may want to have a DHCP server running on the bridge interface to service the virtual network. For example, to use the {{ic|172.20.0.1/16}} subnet with [[dnsmasq]] as the DHCP server:<br />
<br />
{{bc|1=<br />
# ip addr add 172.20.0.1/16 dev br0<br />
# ip link set br0 up<br />
# dnsmasq --interface=br0 --bind-interfaces --dhcp-range=172.20.0.2,172.20.255.254<br />
}}<br />
}}<br />
<br />
==== Internal networking ====<br />
<br />
If you do not give the bridge an IP address and add an [[iptables]] rule to drop all traffic to the bridge in the INPUT chain, then the virtual machines will be able to talk to each other, but not to the physical host or to the outside network. This configuration is called ''internal networking'' by other virtualization software such as [[VirtualBox]]. You will need to either assign static IP addresses to the virtual machines or run a DHCP server on one of them.<br />
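<br />
For example, assuming the bridge is named {{ic|br0}}, such a rule could look like:<br />
<br />
 # iptables -I INPUT -i br0 -j DROP<br />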
<br />
By default, iptables would drop packets in the bridge network. You may need to use an iptables rule such as the following to allow packets in a bridged network:<br />
<br />
# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT<br />
<br />
==== Bridged networking using qemu-bridge-helper ====<br />
<br />
{{Note|This method is available since QEMU 1.1, see http://wiki.qemu.org/Features/HelperNetworking.}}<br />
<br />
This method does not require a start-up script and readily accommodates multiple taps and multiple bridges. It uses {{ic|/usr/lib/qemu/qemu-bridge-helper}} binary, which allows creating tap devices on an existing bridge.<br />
<br />
{{Tip|See [[Network bridge]] for information on creating a bridge.}}<br />
<br />
First, create a configuration file containing the names of all bridges to be used by QEMU:<br />
<br />
{{hc|/etc/qemu/bridge.conf|<br />
allow ''bridge0''<br />
allow ''bridge1''<br />
...}}<br />
<br />
Now start the VM. The most basic usage would be:<br />
<br />
$ qemu-system-x86_64 -net nic -net bridge,br=''bridge0'' ''[...]''<br />
<br />
With multiple taps, the most basic usage requires specifying the VLAN for all additional NICs:<br />
<br />
$ qemu-system-x86_64 -net nic -net bridge,br=''bridge0'' -net nic,vlan=1 -net bridge,vlan=1,br=''bridge1'' ''[...]''<br />
<br />
==== Creating bridge manually ====<br />
<br />
{{Style|This section needs serious cleanup and may contain out-of-date information.}}<br />
<br />
{{Tip|Since QEMU 1.1, the [http://wiki.qemu.org/Features/HelperNetworking network bridge helper] can set tun/tap up for you without the need for additional scripting. See [[#Bridged networking using qemu-bridge-helper]].}}<br />
<br />
The following describes how to bridge a virtual machine to a host interface such as {{ic|eth0}}, which is probably the most common configuration. This configuration makes it appear that the virtual machine is located directly on the external network, on the same Ethernet segment as the physical host machine.<br />
<br />
We will replace the normal Ethernet adapter with a bridge adapter and bind the normal Ethernet adapter to it.<br />
<br />
* Install {{Pkg|bridge-utils}}, which provides {{ic|brctl}} to manipulate bridges.<br />
<br />
* Enable IPv4 forwarding:<br />
# sysctl net.ipv4.ip_forward=1<br />
<br />
To make the change permanent, change {{ic|1=net.ipv4.ip_forward = 0}} to {{ic|1=net.ipv4.ip_forward = 1}} in {{ic|/etc/sysctl.d/99-sysctl.conf}}.<br />
<br />
* Load the {{ic|tun}} module and configure it to be loaded on boot. See [[Kernel modules]] for details.<br />
<br />
* Now create the bridge. See [[Bridge with netctl]] for details. Remember to name your bridge as {{ic|br0}}, or change the scripts below to your bridge's name.<br />
<br />
* Create the script that QEMU uses to bring up the tap adapter with {{ic|root:kvm}} 750 permissions:<br />
{{hc|/etc/qemu-ifup|<nowiki><br />
#!/bin/sh<br />
<br />
echo "Executing /etc/qemu-ifup"<br />
echo "Bringing up $1 for bridged mode..."<br />
sudo /usr/bin/ip link set $1 up promisc on<br />
echo "Adding $1 to br0..."<br />
sudo /usr/bin/brctl addif br0 $1<br />
sleep 2<br />
</nowiki>}}<br />
<br />
* Create the script that QEMU uses to bring down the tap adapter in {{ic|/etc/qemu-ifdown}} with {{ic|root:kvm}} 750 permissions:<br />
{{hc|/etc/qemu-ifdown|<nowiki><br />
#!/bin/sh<br />
<br />
echo "Executing /etc/qemu-ifdown"<br />
sudo /usr/bin/ip link set $1 down<br />
sudo /usr/bin/brctl delif br0 $1<br />
sudo /usr/bin/ip link delete dev $1<br />
</nowiki>}}<br />
<br />
* Use {{ic|visudo}} to add the following to your {{ic|sudoers}} file:<br />
{{bc|<nowiki><br />
Cmnd_Alias QEMU=/usr/bin/ip,/usr/bin/modprobe,/usr/bin/brctl<br />
%kvm ALL=NOPASSWD: QEMU<br />
</nowiki>}}<br />
<br />
* Launch QEMU using the following {{ic|run-qemu}} script:<br />
{{hc|run-qemu|<nowiki><br />
#!/bin/bash<br />
USERID=$(whoami)<br />
<br />
# Get name of newly created TAP device; see https://bbs.archlinux.org/viewtopic.php?pid=1285079#p1285079<br />
precreation=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)<br />
sudo /usr/bin/ip tuntap add user $USERID mode tap<br />
postcreation=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)<br />
IFACE=$(comm -13 <(echo "$precreation") <(echo "$postcreation"))<br />
<br />
# This line creates a random MAC address. The downside is the DHCP server will assign a different IP address each time<br />
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))<br />
# Instead, uncomment and edit this line to set a static MAC address. The benefit is that the DHCP server will assign the same IP address.<br />
# macaddr='52:54:be:36:42:a9'<br />
<br />
qemu-system-x86_64 -net nic,macaddr=$macaddr -net tap,ifname="$IFACE" $*<br />
<br />
sudo ip link set dev $IFACE down &> /dev/null<br />
sudo ip tuntap del $IFACE mode tap &> /dev/null<br />
</nowiki>}}<br />
<br />
Then, to launch a VM, do something like this:<br />
$ run-qemu -hda ''myvm.img'' -m 512<br />
<br />
* It is recommended for performance and security reasons to disable the [http://ebtables.netfilter.org/documentation/bridge-nf.html firewall on the bridge]:<br />
{{hc|/etc/sysctl.d/10-disable-firewall-on-bridge.conf|<nowiki><br />
net.bridge.bridge-nf-call-ip6tables = 0<br />
net.bridge.bridge-nf-call-iptables = 0<br />
net.bridge.bridge-nf-call-arptables = 0<br />
</nowiki>}}<br />
Run {{ic|sysctl -p /etc/sysctl.d/10-disable-firewall-on-bridge.conf}} to apply the changes immediately.<br />
<br />
See the [http://wiki.libvirt.org/page/Networking#Creating_network_initscripts libvirt wiki] and [https://bugzilla.redhat.com/show_bug.cgi?id=512206 Fedora bug 512206]. If sysctl gives errors during boot about non-existent files, make the {{ic|bridge}} module load at boot. See [[Kernel modules#Automatic module loading with systemd]].<br />
<br />
Alternatively, you can configure [[iptables]] to allow all traffic to be forwarded across the bridge by adding a rule like this:<br />
-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT<br />
<br />
==== Network sharing between physical device and a Tap device through iptables ====<br />
<br />
{{Merge|Internet_sharing|Duplication, not specific to QEMU.}}<br />
<br />
Bridged networking works fine over a wired interface (e.g. {{ic|eth0}}) and is easy to set up. However, if the host is connected to the network through a wireless device, then bridging is not possible.<br />
<br />
See [[Network bridge#Wireless interface on a bridge]] as a reference.<br />
<br />
One way to overcome that is to set up a tap device with a static IP, making Linux automatically handle the routing for it, and then forward traffic between the tap interface and the device connected to the network through iptables rules.<br />
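<br />
A minimal host-side sketch of that idea (the interface name, user and subnet are placeholders) could look like:<br />
<br />
 # ip tuntap add dev tap0 mode tap user ''username''<br />
 # ip addr add 10.0.1.1/24 dev tap0<br />
 # ip link set tap0 up<br />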
<br />
See [[Internet sharing]] as a reference.<br />
<br />
There you can find what is needed to share the network between devices, including tap and tun ones. The following just hints further at some of the host configurations required. As indicated in the reference above, the client needs to be configured for a static IP, using the IP assigned to the tap interface as the gateway. The caveat is that the DNS servers on the client might need to be manually edited if they change when switching from one host device connected to the network to another.<br />
<br />
To allow IP forwarding on every boot, one needs to add the following lines to a sysctl configuration file inside {{ic|/etc/sysctl.d}}:<br />
<br />
net.ipv4.ip_forward = 1<br />
net.ipv6.conf.default.forwarding = 1<br />
net.ipv6.conf.all.forwarding = 1<br />
<br />
The iptables rules can look like:<br />
<br />
# Forwarding from/to outside<br />
iptables -A FORWARD -i ${INT} -o ${EXT_0} -j ACCEPT<br />
iptables -A FORWARD -i ${INT} -o ${EXT_1} -j ACCEPT<br />
iptables -A FORWARD -i ${INT} -o ${EXT_2} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_0} -o ${INT} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_1} -o ${INT} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_2} -o ${INT} -j ACCEPT<br />
# NAT/Masquerade (network address translation)<br />
iptables -t nat -A POSTROUTING -o ${EXT_0} -j MASQUERADE<br />
iptables -t nat -A POSTROUTING -o ${EXT_1} -j MASQUERADE<br />
iptables -t nat -A POSTROUTING -o ${EXT_2} -j MASQUERADE<br />
<br />
The above supposes there are 3 devices connected to the network sharing traffic with one internal device, where for example:<br />
<br />
INT=tap0<br />
EXT_0=eth0<br />
EXT_1=wlan0<br />
EXT_2=tun0<br />
<br />
The above shows a forwarding setup that would allow sharing wired and wireless connections with the tap device.<br />
<br />
The forwarding rules shown are stateless and intended for pure forwarding. One could think of restricting specific traffic, putting a firewall in place to protect the guest and others. However, those measures would decrease the networking performance, while a simple bridge does not include any of that.<br />
<br />
Bonus: whether the connection is wired or wireless, if one connects through a VPN to a remote site with a tun device, supposing the tun device opened for that connection is {{ic|tun0}} and the prior iptables rules are applied, then the remote connection also gets shared with the guest. This avoids the need for the guest to open a VPN connection of its own. Again, as the guest networking needs to be static, if connecting the host remotely this way, one will most probably need to edit the DNS servers on the guest.<br />
<br />
=== Networking with VDE2 ===<br />
<br />
{{Style|This section needs serious cleanup and may contain out-of-date information.}}<br />
<br />
==== What is VDE? ====<br />
<br />
VDE stands for Virtual Distributed Ethernet. It started as an enhancement of [[User-mode Linux|uml]]_switch. It is a toolbox to manage virtual networks.<br />
<br />
The idea is to create virtual switches, which are basically sockets, and to "plug" both physical and virtual machines into them. The configuration we show here is quite simple; however, VDE is much more powerful than this: it can plug virtual switches together, run them on different hosts and monitor the traffic in the switches. You are invited to read [http://wiki.virtualsquare.org/wiki/index.php/Main_Page the documentation of the project].<br />
<br />
The advantage of this method is you do not have to add sudo privileges to your users. Regular users should not be allowed to run modprobe.<br />
<br />
==== Basics ====<br />
<br />
VDE support can be [[pacman|installed]] via the {{Pkg|vde2}} package.<br />
<br />
In our config, we use tun/tap to create a virtual interface on the host. Load the {{ic|tun}} module (see [[Kernel modules]] for details):<br />
<br />
# modprobe tun<br />
<br />
Now create the virtual switch:<br />
<br />
# vde_switch -tap tap0 -daemon -mod 660 -group users<br />
<br />
This line creates the switch, creates {{ic|tap0}}, "plugs" it, and allows the users of the group {{ic|users}} to use it.<br />
<br />
The interface is plugged in but not configured yet. To configure it, run this command:<br />
<br />
# ip addr add 192.168.100.254/24 dev tap0<br />
<br />
Now, you just have to run KVM with these {{ic|-net}} options as a normal user:<br />
<br />
$ qemu-system-x86_64 -net nic -net vde -hda ''[...]''<br />
<br />
Configure networking for your guest as you would do in a physical network.<br />
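<br />
For example, assuming the tap interface keeps the {{ic|192.168.100.254/24}} address from above, a minimal static setup inside the guest could look like this (the guest interface name {{ic|ens3}} and the guest address {{ic|192.168.100.10}} are only placeholders):<br />
<br />
 # ip addr add 192.168.100.10/24 dev ens3<br />
 # ip route add default via 192.168.100.254<br />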
<br />
{{Tip|You might want to set up NAT on tap device to access the internet from the virtual machine. See [[Internet sharing#Enable NAT]] for more information.}}<br />
<br />
==== Startup scripts ====<br />
<br />
Example of main script starting VDE:<br />
<br />
{{hc|/etc/systemd/scripts/qemu-network-env|<nowiki><br />
#!/bin/sh<br />
# QEMU/VDE network environment preparation script<br />
<br />
# The IP configuration for the tap device that will be used for<br />
# the virtual machine network:<br />
<br />
TAP_DEV=tap0<br />
TAP_IP=192.168.100.254<br />
TAP_MASK=24<br />
TAP_NETWORK=192.168.100.0<br />
<br />
# Host interface<br />
NIC=eth0<br />
<br />
case "$1" in<br />
start)<br />
echo -n "Starting VDE network for QEMU: "<br />
<br />
# If you want tun kernel module to be loaded by script uncomment here<br />
#modprobe tun 2>/dev/null<br />
## Wait for the module to be loaded<br />
#while ! lsmod | grep -q "^tun"; do echo "Waiting for tun device"; sleep 1; done<br />
<br />
# Start tap switch<br />
vde_switch -tap "$TAP_DEV" -daemon -mod 660 -group users<br />
<br />
# Bring tap interface up<br />
ip address add "$TAP_IP"/"$TAP_MASK" dev "$TAP_DEV"<br />
ip link set "$TAP_DEV" up<br />
<br />
# Start IP Forwarding<br />
echo "1" > /proc/sys/net/ipv4/ip_forward<br />
iptables -t nat -A POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE<br />
;;<br />
stop)<br />
echo -n "Stopping VDE network for QEMU: "<br />
# Delete the NAT rules<br />
iptables -t nat -D POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE<br />
<br />
# Bring tap interface down<br />
ip link set "$TAP_DEV" down<br />
<br />
# Kill VDE switch<br />
pgrep vde_switch | xargs kill -TERM<br />
;;<br />
restart|reload)<br />
$0 stop<br />
sleep 1<br />
$0 start<br />
;;<br />
*)<br />
echo "Usage: $0 {start|stop|restart|reload}"<br />
exit 1<br />
esac<br />
exit 0<br />
</nowiki>}}<br />
<br />
Example of systemd service using the above script:<br />
<br />
{{hc|/etc/systemd/system/qemu-network-env.service|<nowiki><br />
[Unit]<br />
Description=Manage VDE Switch<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/etc/systemd/scripts/qemu-network-env start<br />
ExecStop=/etc/systemd/scripts/qemu-network-env stop<br />
RemainAfterExit=yes<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
</nowiki>}}<br />
<br />
Change permissions for {{ic|qemu-network-env}} to be executable<br />
<br />
# chmod u+x /etc/systemd/scripts/qemu-network-env<br />
<br />
You can [[start]] {{ic|qemu-network-env.service}} as usual.<br />
<br />
====Alternative method====<br />
<br />
If the above method does not work, or you do not want to mess with kernel configs, TUN, dnsmasq, and iptables, you can do the following for the same result.<br />
<br />
# vde_switch -daemon -mod 660 -group users<br />
# slirpvde --dhcp --daemon<br />
<br />
Then, to start the VM with a connection to the network of the host:<br />
<br />
$ qemu-system-x86_64 -net nic,macaddr=52:54:00:00:EE:03 -net vde ''disk_image''<br />
<br />
=== VDE2 Bridge ===<br />
<br />
Based on the diagram in [http://selamatpagicikgu.wordpress.com/2011/06/08/quickhowto-qemu-networking-using-vde-tuntap-and-bridge/ quickhowto: qemu networking using vde, tun/tap, and bridge]. Any virtual machine connected to VDE is externally exposed. For example, each virtual machine can receive its DHCP configuration directly from your ADSL router.<br />
<br />
==== Basics ====<br />
<br />
Remember that you need the {{ic|tun}} module and the {{Pkg|bridge-utils}} package.<br />
<br />
Create the vde2/tap device:<br />
<br />
# vde_switch -tap tap0 -daemon -mod 660 -group users<br />
# ip link set tap0 up<br />
<br />
Create bridge:<br />
<br />
# brctl addbr br0<br />
<br />
Add devices:<br />
<br />
# brctl addif br0 eth0<br />
# brctl addif br0 tap0<br />
<br />
And configure bridge interface:<br />
<br />
# dhcpcd br0<br />
<br />
==== Startup scripts ====<br />
<br />
All devices must be set up, and only the bridge needs an IP address. For physical devices on the bridge (e.g. {{ic|eth0}}), this can be done with [[netctl]] using a custom Ethernet profile:<br />
<br />
{{hc|/etc/netctl/ethernet-noip|<nowiki><br />
Description='A more versatile static Ethernet connection'<br />
Interface=eth0<br />
Connection=ethernet<br />
IP=no<br />
</nowiki>}}<br />
<br />
The following custom systemd service can be used to create and activate a VDE2 tap interface for use in the {{ic|users}} user group.<br />
<br />
{{hc|/etc/systemd/system/vde2@.service|<nowiki><br />
[Unit]<br />
Description=Network Connectivity for %i<br />
Wants=network.target<br />
Before=network.target<br />
<br />
[Service]<br />
Type=oneshot<br />
RemainAfterExit=yes<br />
ExecStart=/usr/bin/vde_switch -tap %i -daemon -mod 660 -group users<br />
ExecStart=/usr/bin/ip link set dev %i up<br />
ExecStop=/usr/bin/ip addr flush dev %i<br />
ExecStop=/usr/bin/ip link set dev %i down<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
</nowiki>}}<br />
<br />
And finally, you can create the [[Bridge with netctl|bridge interface with netctl]].<br />
<br />
=== Shorthand configuration ===<br />
<br />
If you are using QEMU with various networking options a lot, you probably have created a lot of {{ic|-netdev}} and {{ic|-device}} argument pairs, which gets quite repetitive. You can instead use the {{ic|-nic}} argument to combine {{ic|-netdev}} and {{ic|-device}} together, so that, for example, these arguments:<br />
<br />
-netdev tap,id=network0,ifname=tap0,script=no,downscript=no,vhost=on -device virtio-net,netdev=network0<br />
<br />
...become:<br />
<br />
-nic tap,ifname=tap0,script=no,downscript=no,vhost=on,model=virtio-net<br />
<br />
Notice the lack of network IDs, and that the device was created with {{ic|<nowiki>model=...</nowiki>}}. The first half of the {{ic|-nic}} parameters are {{ic|-netdev}} parameters, whereas the second half (after {{ic|<nowiki>model=...</nowiki>}}) relate to the device. The same parameters (for example, {{ic|<nowiki>smb=...</nowiki>}}) are used. There is also a special parameter for {{ic|-nic}} which completely disables the default (user-mode) networking:<br />
<br />
-nic none<br />
<br />
See [https://qemu.weilnetz.de/doc/qemu-doc.html#Network-options QEMU networking documentation] for more information on parameters you can use.<br />
<br />
== Graphic card ==<br />
QEMU can emulate a standard graphic card's text mode using the {{ic|-curses}} command line option. This allows you to type text and see text output directly inside a text terminal.<br />
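<br />
For example, to boot a disk image entirely in the terminal (the disk image name is a placeholder):<br />
<br />
 $ qemu-system-x86_64 -curses ''disk_image''<br />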
<br />
QEMU can emulate several types of VGA card. The card type is passed in the {{ic|-vga ''type''}} command line option and can be {{ic|std}}, {{ic|qxl}}, {{ic|vmware}}, {{ic|virtio}}, {{ic|cirrus}} or {{ic|none}}.<br />
<br />
=== std ===<br />
<br />
With {{ic|-vga std}} you can get a resolution of up to 2560 x 1600 pixels without requiring guest drivers. This is the default since QEMU 2.2.<br />
<br />
=== qxl ===<br />
<br />
QXL is a paravirtual graphics driver with 2D support. To use it, pass the {{ic|-vga qxl}} option and install drivers in the guest. You may want to use [[#SPICE]] for improved graphical performance when using QXL.<br />
<br />
On Linux guests, the {{ic|qxl}} and {{ic|bochs_drm}} kernel modules must be loaded in order to gain decent performance.<br />
<br />
The default VGA memory size for QXL devices is 16M, which is sufficient to drive resolutions approximately up to QHD (2560x1440). To enable higher resolutions, [[#Multi-monitor support|increase vgamem_mb]].<br />
<br />
=== vmware ===<br />
<br />
Although it is a bit buggy, it performs better than std and cirrus. Install the VMware drivers {{Pkg|xf86-video-vmware}} and {{Pkg|xf86-input-vmmouse}} for Arch Linux guests.<br />
<br />
=== virtio ===<br />
<br />
{{ic|virtio-vga}} / {{ic|virtio-gpu}} is a paravirtual 3D graphics driver based on [https://virgil3d.github.io/ virgl]. Currently a work in progress, supporting only very recent (>= 4.4) Linux guests with {{Pkg|mesa}} (>=11.2) compiled with the option {{ic|1=gallium-drivers=virgl}}.<br />
<br />
To enable 3D acceleration on the guest system, select this vga with {{ic|-vga virtio}} and enable the OpenGL context in the display device with {{ic|1=-display sdl,gl=on}} or {{ic|1=-display gtk,gl=on}} for the SDL and GTK display output respectively. Successful configuration can be confirmed by looking at the kernel log in the guest:<br />
<br />
{{hc|$ dmesg {{!}} grep drm |<br />
[drm] pci: virtio-vga detected<br />
[drm] virgl 3d acceleration enabled<br />
}}<br />
<br />
=== cirrus ===<br />
<br />
The cirrus graphical adapter was the default [http://wiki.qemu.org/ChangeLog/2.2#VGA before 2.2]. It [https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/ should not] be used on modern systems.<br />
<br />
=== none ===<br />
<br />
This is like a PC that has no VGA card at all. You would not even be able to access it with the {{ic|-vnc}} option. Also, this is different from the {{ic|-nographic}} option which lets QEMU emulate a VGA card, but disables the SDL display.<br />
<br />
== SPICE ==<br />
The [http://spice-space.org/ SPICE project] aims to provide a complete open source solution for remote access to virtual machines in a seamless way.<br />
=== Enabling SPICE support on the host ===<br />
The following is an example of booting with SPICE as the remote desktop protocol, including support for copy and paste with the host:<br />
<br />
$ qemu-system-x86_64 -vga qxl -device virtio-serial-pci -spice port=5930,disable-ticketing -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent<br />
<br />
The parameters have the following meaning:<br />
# {{ic|-device virtio-serial-pci}} adds a virtio-serial device<br />
# {{ic|1=-spice port=5930,disable-ticketing}} sets TCP port {{ic|5930}} for spice channels to listen on and allows clients to connect without authentication<br />
# {{ic|1=-device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0}} opens a port for spice vdagent in the virtio-serial device,<br />
# {{ic|1=-chardev spicevmc,id=spicechannel0,name=vdagent}} adds a spicevmc chardev for that port. It is important that the {{ic|1=chardev=}} option of the {{ic|virtserialport}} device matches the {{ic|1=id=}} option given to the {{ic|chardev}} option ({{ic|spicechannel0}} in this example). It is also important that the port name is {{ic|com.redhat.spice.0}}, because that is the namespace where vdagent is looking in the guest. And finally, specify {{ic|1=name=vdagent}} so that spice knows what this channel is for.<br />
{{Tip|Using [[wikipedia:Unix_socket|Unix sockets]] instead of TCP ports does not involve using network stack on the host system, so it is [https://unix.stackexchange.com/questions/91774/performance-of-unix-sockets-vs-tcp-ports reportedly] better for performance. Example:<br />
{{bc|1=$ qemu-system-x86_64 -vga qxl -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent -spice unix,addr=/tmp/vm_spice.socket,disable-ticketing}}<br />
}}<br />
<br />
=== Connecting to the guest with a SPICE client ===<br />
A SPICE client is necessary to connect to the guest. In Arch, the following clients are available:<br />
<br />
{{App|virt-viewer|SPICE client recommended by the protocol developers, a subset of the virt-manager project.|https://virt-manager.org/|{{Pkg|virt-viewer}}}}<br />
<br />
{{App|spice-gtk|SPICE GTK client, a subset of the SPICE project. Embedded into other applications as a widget.|https://www.spice-space.org/|{{Pkg|spice-gtk}}}}<br />
<br />
For clients that run on smartphone or on other platforms, refer to the ''Other clients'' section in [http://www.spice-space.org/download.html spice-space download].<br />
<br />
==== Manually running a SPICE client ====<br />
One way of connecting to the guest is to manually run the SPICE client using {{ic|$ remote-viewer spice+unix:///tmp/vm_spice.socket}} or {{ic|1=$ spicy --uri="spice+unix:///tmp/vm_spice.socket"}}, depending on the desired client. Since QEMU in SPICE mode acts similarly to a remote desktop server, it may be more convenient to run QEMU in daemon mode with the {{ic|-daemonize}} parameter.<br />
<br />
{{Tip|To connect to the guest through SSH tunneling, the following type of command can be used: {{bc|$ ssh -fL 5999:localhost:5930 ''my.domain.org'' sleep 10; spicy -h 127.0.0.1 -p 5999}}<br />
This example connects ''spicy'' to the local port {{ic|5999}} which is forwarded through SSH to the guest's SPICE server located at the address ''my.domain.org'', port {{ic|5930}}.<br />
Note the {{ic|-f}} option that requests ssh to execute the command {{ic|sleep 10}} in the background. This way, the ssh session runs while the client is active and auto-closes once the client ends.<br />
}}<br />
<br />
==== Running a SPICE client with QEMU ====<br />
QEMU can automatically start a SPICE client with an appropriate socket, if the display is set to SPICE with the {{ic|-display spice-app}} parameter. This will use the system's default SPICE client as the viewer, determined by your [[XDG_MIME_Applications#mimeapps.list|mimeapps.list]] files.<br />
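<br />
For example, a minimal sketch reusing the QXL setup from above (the disk image name is a placeholder):<br />
<br />
 $ qemu-system-x86_64 -vga qxl -display spice-app ''disk_image''<br />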
<br />
=== Enabling SPICE support on the guest ===<br />
For '''Arch Linux guests''', for improved support for multiple monitors or clipboard sharing, the following packages should be installed:<br />
* {{Pkg|spice-vdagent}}: Spice agent xorg client that enables copy and paste between client and X-session and more. [[Enable]] {{ic|spice-vdagentd.service}} after installation.<br />
* {{Pkg|xf86-video-qxl}}: Xorg X11 qxl video driver<br />
For guests under '''other operating systems''', refer to the ''Guest'' section in [http://www.spice-space.org/download.html spice-space download].<br />
<br />
=== Password authentication with SPICE ===<br />
If you want to enable password authentication with SPICE you need to remove {{ic|disable-ticketing}} from the {{ic|-spice}} argument and instead add {{ic|1=password=''yourpassword''}}. For example:<br />
<br />
$ qemu-system-x86_64 -vga qxl -spice port=5900,password=''yourpassword'' -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent<br />
<br />
Your SPICE client should now ask for the password to be able to connect to the SPICE server.<br />
<br />
=== TLS encrypted communication with SPICE ===<br />
<br />
You can also configure TLS encryption for communicating with the SPICE server. First, you need to have a directory which contains the following files (the names must be exactly as indicated):<br />
* {{ic|ca-cert.pem}}: the CA master certificate.<br />
* {{ic|server-cert.pem}}: the server certificate signed with {{ic|ca-cert.pem}}.<br />
* {{ic|server-key.pem}}: the server private key.<br />
<br />
An example of generation of self-signed certificates with your own generated CA for your server is shown in the [https://www.spice-space.org/spice-user-manual.html#_generating_self_signed_certificates Spice User Manual].<br />
<br />
Afterwards, you can run QEMU with SPICE as explained above but using the following {{ic|-spice}} argument: {{ic|1=-spice tls-port=5901,password=''yourpassword'',x509-dir=''/path/to/pki_certs''}}, where {{ic|''/path/to/pki_certs''}} is the directory path that contains the three needed files shown earlier.<br />
<br />
It is now possible to connect to the server using {{pkg|virt-viewer}}:<br />
<br />
$ remote-viewer spice://''hostname''?tls-port=5901 --spice-ca-file=''/path/to/ca-cert.pem'' --spice-host-subject="C=''XX'',L=''city'',O=''organization'',CN=''hostname''" --spice-secure-channels=all<br />
<br />
Keep in mind that the {{ic|--spice-host-subject}} parameter needs to be set according to your {{ic|server-cert.pem}} subject. You also need to copy {{ic|ca-cert.pem}} to every client to verify the server certificate.<br />
<br />
{{Tip|You can get the subject line of the server certificate in the correct format for {{ic|--spice-host-subject}} (with entries separated by commas) using the following command: {{bc|<nowiki>$ openssl x509 -noout -subject -in server-cert.pem | cut -d' ' -f2- | sed 's/\///' | sed 's/\//,/g'</nowiki>}}<br />
}}<br />
<br />
The equivalent {{Pkg|spice-gtk}} command is:<br />
<br />
$ spicy -h ''hostname'' -s 5901 --spice-ca-file=ca-cert.pem --spice-host-subject="C=''XX'',L=''city'',O=''organization'',CN=''hostname''" --spice-secure-channels=all<br />
<br />
== VNC ==<br />
<br />
One can add the {{ic|-vnc :''X''}} option to have QEMU redirect the VGA display to the VNC session. Substitute {{ic|''X''}} for the number of the display (0 will then listen on 5900, 1 on 5901...).<br />
<br />
$ qemu-system-x86_64 -vnc :0<br />
<br />
An example is also provided in the [[#Starting QEMU virtual machines on boot]] section.<br />
{{Warning|The default VNC server setup does not use any form of authentication. Any user can connect from any host.}}<br />
<br />
=== Basic password authentication ===<br />
<br />
An access password can be set up easily by using the {{ic|password}} option. The password must be set in the QEMU monitor, and connection is only possible once the password is provided.<br />
<br />
$ qemu-system-x86_64 -vnc :0,password -monitor stdio<br />
<br />
In the QEMU monitor, the password is set using the command {{ic|change vnc password}} and then entering the password.<br />
<br />
The following command line directly runs VNC with a password:<br />
<br />
$ printf "change vnc password\n%s\n" MYPASSWORD | qemu-system-x86_64 -vnc :0,password -monitor stdio<br />
<br />
{{Note|The password is limited to 8 characters and can be guessed through a brute force attack. More elaborate protection is strongly recommended for public networks.}}<br />
<br />
== Audio ==<br />
<br />
=== Host ===<br />
<br />
The audio driver used by QEMU is set with the {{ic|QEMU_AUDIO_DRV}} environment variable:<br />
<br />
$ export QEMU_AUDIO_DRV=pa<br />
<br />
Run the following command to get QEMU's configuration options related to PulseAudio:<br />
<br />
$ qemu-system-x86_64 -audio-help | awk '/Name: pa/' RS=<br />
<br />
The listed options can be exported as environment variables, for example:<br />
<br />
{{bc|1=<br />
$ export QEMU_PA_SINK=alsa_output.pci-0000_04_01.0.analog-stereo.monitor<br />
$ export QEMU_PA_SOURCE=input<br />
}}<br />
<br />
=== Guest ===<br />
To get the list of supported audio emulation drivers:<br />
$ qemu-system-x86_64 -soundhw help<br />
<br />
To use e.g. the {{ic|hda}} driver for the guest, use the {{ic|-soundhw hda}} option with QEMU.<br />
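<br />
For example (the disk image name is a placeholder):<br />
<br />
 $ qemu-system-x86_64 -soundhw hda ''disk_image''<br />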
<br />
{{Note|The emulated video graphic card driver for the guest machine may also cause problems with sound quality. Test them one by one to find one that works. You can list the possible options with {{ic|<nowiki>qemu-system-x86_64 -h | grep vga</nowiki>}}.}}<br />
<br />
== Installing virtio drivers ==<br />
<br />
QEMU offers guests the ability to use paravirtualized block and network devices using the [http://wiki.libvirt.org/page/Virtio virtio] drivers, which provide better performance and lower overhead.<br />
<br />
* A virtio block device requires the option {{Ic|-drive}} for passing a disk image, with parameter {{Ic|1=if=virtio}}:<br />
$ qemu-system-x86_64 -boot order=c -drive file=''disk_image'',if=virtio<br />
<br />
* Almost the same goes for the network:<br />
$ qemu-system-x86_64 -net nic,model=virtio<br />
<br />
{{Note|This will only work if the guest machine has drivers for virtio devices. Linux does, and the required drivers are included in Arch Linux, but there is no guarantee that virtio devices will work with other operating systems.}}<br />
<br />
=== Preparing an (Arch) Linux guest ===<br />
<br />
To use virtio devices after an Arch Linux guest has been installed, the following modules must be loaded in the guest: {{Ic|virtio}}, {{Ic|virtio_pci}}, {{Ic|virtio_blk}}, {{Ic|virtio_net}}, and {{Ic|virtio_ring}}. For 32-bit guests, the specific "virtio" module is not necessary.<br />
<br />
If you want to boot from a virtio disk, the initial ramdisk must contain the necessary modules. By default, this is handled by [[mkinitcpio]]'s {{ic|autodetect}} hook. Otherwise use the {{ic|MODULES}} array in {{ic|/etc/mkinitcpio.conf}} to include the necessary modules and rebuild the initial ramdisk.<br />
<br />
{{hc|/etc/mkinitcpio.conf|2=<br />
MODULES=(virtio virtio_blk virtio_pci virtio_net)}}<br />
<br />
Virtio disks are recognized with the prefix {{ic|'''v'''}} (e.g. {{ic|'''v'''da}}, {{ic|'''v'''db}}, etc.); therefore, changes must be made in at least {{ic|/etc/fstab}} and {{ic|/boot/grub/grub.cfg}} when booting from a virtio disk.<br />
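<br />
For example, an {{ic|/etc/fstab}} entry that references the root partition by device name would change along these lines (the partition number and mount options are only an example):<br />
<br />
 /dev/'''s'''da1  /  ext4  defaults  0  1   →   /dev/'''v'''da1  /  ext4  defaults  0  1<br />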
<br />
{{Tip|When referencing disks by [[UUID]] in both {{ic|/etc/fstab}} and bootloader, nothing has to be done.}}<br />
<br />
Further information on paravirtualization with KVM can be found [http://www.linux-kvm.org/page/Boot_from_virtio_block_device here].<br />
<br />
You might also want to install {{Pkg|qemu-guest-agent}} to implement support for QMP commands that will enhance the hypervisor management capabilities. After installing the package you can enable and start the {{ic|qemu-ga.service}}.<br />
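<br />
For example:<br />
<br />
 # systemctl enable --now qemu-ga.service<br />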
<br />
=== Preparing a Windows guest ===<br />
<br />
{{Note|1=The only (reliable) way to upgrade a Windows 8.1 guest to Windows 10 seems to be to temporarily choose cpu core2duo,nx for the install [http://ubuntuforums.org/showthread.php?t=2289210]. After the install, you may revert to other cpu settings (8/8/2015).}}<br />
<br />
==== Block device drivers ====<br />
<br />
===== New Install of Windows =====<br />
<br />
Windows does not come with the virtio drivers. Therefore, you will need to load them during installation. There are basically two ways to do this: via Floppy Disk or via ISO files. Both images can be downloaded from the [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso Fedora repository].<br />
<br />
The floppy disk option is difficult because you will need to press F6 (Shift-F6 on newer Windows) at the very beginning of powering on QEMU, which leaves little time to connect to your VNC console window. You can attempt to add a delay to the boot sequence. See {{man|1|qemu}} for more details about applying a delay at boot.<br />
<br />
The ISO option to load drivers is the preferred way, but it is available only on Windows Vista and Windows Server 2008 and later. The procedure is to load the image with virtio drivers in an additional cdrom device along with the primary disk device and Windows installer:<br />
<br />
$ qemu-system-x86_64 ... \<br />
-drive file=''/path/to/primary/disk.img'',index=0,media=disk,if=virtio \<br />
-drive file=''/path/to/installer.iso'',index=2,media=cdrom \<br />
-drive file=''/path/to/virtio.iso'',index=3,media=cdrom \<br />
...<br />
<br />
During the installation, the Windows installer will ask you for your Product key and perform some additional checks. When it gets to the "Where do you want to install Windows?" screen, it will give a warning that no disks are found. Follow the example instructions below (based on Windows Server 2012 R2 with Update).<br />
<br />
* Select the option {{ic|Load Drivers}}.<br />
* Uncheck the box for "Hide drivers that aren't compatible with this computer's hardware".<br />
* Click the Browse button and open the CDROM for the virtio iso, usually named "virtio-win-XX".<br />
* Now browse to {{ic|E:\viostor\[your-os]\amd64}}, select it, and press OK.<br />
* Click Next<br />
<br />
You should now see your virtio disk(s) listed here, ready to be selected, formatted and installed to.<br />
<br />
===== Change Existing Windows VM to use virtio =====<br />
Modifying an existing Windows guest for booting from virtio disk is a bit tricky.<br />
<br />
You can download the virtio disk driver from the [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso Fedora repository].<br />
<br />
Now you need to create a new disk image, which will force Windows to search for the driver. For example:<br />
<br />
$ qemu-img create -f qcow2 ''fake.qcow2'' 1G<br />
<br />
Run the original Windows guest (with the boot disk still in IDE mode) with the fake disk (in virtio mode) and a CD-ROM with the driver.<br />
<br />
$ qemu-system-x86_64 -m 512 -drive file=''windows_disk_image'',if=ide -drive file=''fake.qcow2'',if=virtio -cdrom virtio-win-0.1-81.iso<br />
<br />
Windows will detect the fake disk and try to find a driver for it. If it fails, go to the ''Device Manager'', locate the SCSI drive with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not navigate to the driver folder within the CD-ROM, simply select the CD-ROM drive and Windows will find the appropriate driver automatically (tested for Windows 7 SP1). Do not forget to select the checkbox which says to search for directories recursively.<br />
<br />
When the installation is successful, you can turn off the virtual machine and launch it again, now with the boot disk attached in virtio mode:<br />
<br />
$ qemu-system-x86_64 -m 512 -drive file=''windows_disk_image'',if=virtio<br />
<br />
{{Note|If you encounter the Blue Screen of Death, make sure you did not forget the {{ic|-m}} parameter, and that you do not boot with virtio instead of ide for the system drive before drivers are installed.}}<br />
<br />
==== Network drivers ====<br />
<br />
Installing virtio network drivers is a bit easier: simply add the {{ic|-net}} argument as explained above.<br />
<br />
$ qemu-system-x86_64 -m 512 -drive file=''windows_disk_image'',if=virtio -net nic,model=virtio -cdrom virtio-win-0.1-74.iso<br />
<br />
Windows will detect the network adapter and try to find a driver for it. If it fails, go to the ''Device Manager'', locate the network adapter with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not forget to select the checkbox which says to search for directories recursively.<br />
<br />
==== Balloon driver ====<br />
<br />
If you want to track your guest's memory state (for example via the {{ic|virsh}} command {{ic|dommemstat}}) or change the guest's memory size at runtime (you still will not be able to change the memory size, but you can limit memory usage by inflating the balloon driver), you will need to install the guest balloon driver.<br />
<br />
For this you will need to go to ''Device Manager'', locate the ''PCI standard RAM Controller'' in ''System devices'' (or an unrecognized PCI controller in ''Other devices'') and choose ''Update driver''. In the window that opens you will need to choose ''Browse my computer...'' and select the CD-ROM (do not forget the ''Include subdirectories'' checkbox). Reboot after installation. This will install the driver and you will be able to inflate the balloon (for example via the hmp command {{ic|balloon ''memory_size''}}, which will cause the balloon to take as much memory as possible in order to shrink the guest's available memory size to ''memory_size''). However, you still will not be able to track the guest memory state. In order to do this you will need to install the ''Balloon'' service properly. For that, open a command line as administrator, go to the CD-ROM, then into the ''Balloon'' directory and deeper, depending on your system and architecture. Once you are in the ''amd64'' (''x86'') directory, run {{ic|blnsvr.exe -i}}, which will do the installation. After that the {{ic|virsh}} command {{ic|dommemstat}} should output all supported values.<br />
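<br />
For example, to shrink the guest's available memory to 1024 MB from the QEMU monitor:<br />
<br />
 (qemu) balloon 1024<br />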
<br />
=== Preparing a FreeBSD guest ===<br />
<br />
Install the {{ic|emulators/virtio-kmod}} port if you are using FreeBSD 8.3 or later up until 10.0-CURRENT, where they are included in the kernel. After installation, add the following to your {{ic|/boot/loader.conf}} file:<br />
<br />
{{bc|<nowiki><br />
virtio_load="YES"<br />
virtio_pci_load="YES"<br />
virtio_blk_load="YES"<br />
if_vtnet_load="YES"<br />
virtio_balloon_load="YES"<br />
</nowiki>}}<br />
<br />
Then modify your {{ic|/etc/fstab}} by doing the following:<br />
<br />
{{bc|<nowiki><br />
sed -i.bak "s/ada/vtbd/g" /etc/fstab<br />
</nowiki>}}<br />
<br />
And verify that {{ic|/etc/fstab}} is consistent. If anything goes wrong, just boot into a rescue CD and copy {{ic|/etc/fstab.bak}} back to {{ic|/etc/fstab}}.<br />
<br />
== QEMU monitor ==<br />
<br />
While QEMU is running, a monitor console is provided with several ways to interact with the running virtual machine. It offers interesting capabilities such as obtaining information about the current virtual machine, hotplugging devices, creating snapshots of its current state, etc. To see the list of all commands, run {{ic|help}} or {{ic|?}} in the QEMU monitor console or review the relevant section of the [https://qemu.weilnetz.de/doc/qemu-doc.html#pcsys_005fmonitor official QEMU documentation].<br />
<br />
=== Accessing the monitor console ===<br />
<br />
When using the {{ic|std}} default graphics option, one can access the QEMU monitor by pressing {{ic|Ctrl+Alt+2}} or by clicking ''View > compatmonitor0'' in the QEMU window. To return to the virtual machine graphical view either press {{ic|Ctrl+Alt+1}} or click ''View > VGA''.<br />
<br />
However, the standard method of accessing the monitor is not always convenient and does not work in all graphic outputs QEMU supports. Alternative options of accessing the monitor are described below:<br />
<br />
* [[telnet]]: Run QEMU with the {{ic|-monitor telnet:127.0.0.1:''port'',server,nowait}} parameter. When the virtual machine is started you will be able to access the monitor via telnet:<br />
$ telnet 127.0.0.1 ''port''<br />
{{Note|If {{ic|127.0.0.1}} is specified as the IP to listen on, it will only be possible to connect to the monitor from the same host QEMU is running on. If connecting from remote hosts is desired, QEMU must be told to listen on {{ic|0.0.0.0}} as follows: {{ic|-monitor telnet:0.0.0.0:''port'',server,nowait}}. Keep in mind that it is recommended to have a [[firewall]] configured in this case, or to make sure your local network is completely trustworthy, since this connection is completely unauthenticated and unencrypted.}}<br />
<br />
* UNIX socket: Run QEMU with the {{ic|-monitor unix:''socketfile'',server,nowait}} parameter. Then you can connect with either {{pkg|socat}} or {{pkg|openbsd-netcat}}.<br />
<br />
For example, if QEMU is run via:<br />
<br />
$ qemu-system-x86_64 ''[...]'' -monitor unix:/tmp/monitor.sock,server,nowait ''[...]''<br />
<br />
It is possible to connect to the monitor with:<br />
<br />
$ socat - UNIX-CONNECT:/tmp/monitor.sock<br />
<br />
Or with:<br />
<br />
$ nc -U /tmp/monitor.sock<br />
<br />
* TCP: You can expose the monitor over TCP with the argument {{ic|-monitor tcp:127.0.0.1:''port'',server,nowait}}. Then connect with netcat, either {{pkg|openbsd-netcat}} or {{pkg|gnu-netcat}} by running:<br />
<br />
$ nc 127.0.0.1 ''port''<br />
<br />
{{Note|In order to be able to connect to the TCP socket from devices other than the host QEMU is running on, you need to listen on {{ic|0.0.0.0}} as explained in the telnet case. The same security warnings apply in this case as well.}}<br />
<br />
* Standard I/O: It is possible to access the monitor from the same terminal QEMU is run in by launching it with the argument {{ic|-monitor stdio}}, for example:<br />
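 $ qemu-system-x86_64 ''[...]'' -monitor stdio<br />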
<br />
=== Sending keyboard presses to the virtual machine using the monitor console ===<br />
<br />
Some combinations of keys may be difficult to perform on virtual machines due to the host intercepting them instead in some configurations (a notable example is the {{ic|Ctrl+Alt+F*}} key combinations, which change the active tty). To avoid this problem, the problematic combination of keys may be sent via the monitor console instead. Switch to the monitor and use the {{ic|sendkey}} command to forward the necessary keypresses to the virtual machine. For example:<br />
<br />
(qemu) sendkey ctrl-alt-f2<br />
<br />
=== Creating and managing snapshots via the monitor console ===<br />
<br />
{{Note|This feature will '''only''' work when the virtual machine disk image is in ''qcow2'' format. It will not work with ''raw'' images.}}<br />
<br />
It is sometimes desirable to save the current state of a virtual machine and to be able to revert it to a previously saved snapshot at any time. The QEMU monitor console provides the user with the necessary utilities to create snapshots, manage them, and revert the machine state to a saved snapshot, as illustrated in the example after the list below.<br />
<br />
* Use {{ic|savevm ''name''}} in order to create a snapshot with the tag ''name''.<br />
* Use {{ic|loadvm ''name''}} to revert the virtual machine to the state of the snapshot ''name''.<br />
* Use {{ic|delvm ''name''}} to delete the snapshot tagged as ''name''.<br />
* Use {{ic|info snapshots}} to see a list of saved snapshots. Snapshots are identified by both an auto-incremented ID number and a text tag (set by the user on snapshot creation).<br />
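<br />
For example, a typical workflow in the monitor console could look like this (the snapshot tag {{ic|clean_install}} is arbitrary):<br />
<br />
 (qemu) savevm clean_install<br />
 (qemu) info snapshots<br />
 (qemu) loadvm clean_install<br />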
<br />
=== Running the virtual machine in immutable mode ===<br />
<br />
It is possible to run a virtual machine in a frozen state so that all changes will be discarded when the virtual machine is powered off just by running QEMU with the {{ic|-snapshot}} parameter. When the disk image is written by the guest, changes will be saved in a temporary file in {{ic|/tmp}} and will be discarded when QEMU halts.<br />
<br />
However, even if a machine is running in frozen mode, it is still possible to save the changes to the disk image afterwards if desired, by switching to the monitor console and running the following command:<br />
<br />
(qemu) commit all<br />
<br />
Snapshots created while running in frozen mode are likewise discarded as soon as QEMU exits, unless changes are explicitly committed to disk.<br />
<br />
=== Pause and power options via the monitor console ===<br />
<br />
Some operations of a physical machine can be emulated by QEMU using some monitor commands:<br />
<br />
* {{ic|system_powerdown}} will send an ACPI shutdown request to the virtual machine. This effect is similar to the power button in a physical machine.<br />
* {{ic|system_reset}} will reset the virtual machine similarly to a reset button in a physical machine. This operation can cause data loss and file system corruption since the virtual machine is not cleanly restarted.<br />
* {{ic|stop}} will pause the virtual machine.<br />
* {{ic|cont}} will resume a virtual machine previously paused.<br />
<br />
=== Taking screenshots of the virtual machine ===<br />
<br />
Screenshots of the virtual machine graphic display can be obtained in the PPM format by running the following command in the monitor console:<br />
<br />
(qemu) screendump ''file.ppm''<br />
<br />
== QEMU machine protocol ==<br />
<br />
The QEMU machine protocol (QMP) is a JSON-based protocol which allows applications to control a QEMU instance. Similarly to the [[#QEMU monitor]] it offers ways to interact with a running machine and the JSON protocol allows to do it programmatically. The description of all the QMP commands can be found in [https://raw.githubusercontent.com/coreos/qemu/master/qmp-commands.hx qmp-commands].<br />
<br />
=== Start QMP ===<br />
<br />
The usual way to control the guest using the QMP protocol is to open a TCP socket when launching the machine with the {{ic|-qmp}} option, here for example using TCP port 4444:<br />
<br />
$ qemu-system-x86_64 ''[...]'' -qmp tcp:localhost:4444,server,nowait<br />
<br />
Then one way to communicate with the QMP agent is to use [[netcat]]:<br />
<br />
{{hc|nc localhost 4444|{"QMP": {"version": {"qemu": {"micro": 0, "minor": 1, "major": 3}, "package": ""}, "capabilities": []} } }}<br />
<br />
At this stage, the only command that can be recognized is {{ic|qmp_capabilities}}, so that QMP enters into command mode. Type:<br />
<br />
{"execute": "qmp_capabilities"}<br />
<br />
Now QMP is ready to receive commands. To retrieve the list of recognized commands, use:<br />
<br />
{"execute": "query-commands"}<br />
<br />
=== Live merging of child image into parent image ===<br />
<br />
It is possible to merge a running snapshot into its parent by issuing a {{ic|block-commit}} command. In its simplest form the following line will commit the child into its parent:<br />
{"execute": "block-commit", "arguments": {"device": "''devicename''"}}<br />
<br />
Upon reception of this command, the handler looks for the base image and converts it from read only to read write mode and then runs the commit job.<br />
<br />
Once the ''block-commit'' operation has completed, the event {{ic|BLOCK_JOB_READY}} will be emitted, signalling that the synchronization has finished. The job can then be gracefully completed by issuing the command {{ic|block-job-complete}}:<br />
<br />
{"execute": "block-job-complete", "arguments": {"device": "''devicename''"}}<br />
<br />
Until such a command is issued, the ''commit'' operation remains active.<br />
After successful completion, the base image remains in read write mode and becomes the new active layer. On the other hand, the child image becomes invalid and it is the responsibility of the user to clean it up.<br />
<br />
{{Tip|The list of devices and their names can be retrieved by executing the command {{ic|query-block}} and parsing the results. The device name is in the {{ic|device}} field, for example {{ic|ide0-hd0}} for the hard disk in this example: {{hc|{"execute": "query-block"}|{"return": [{"io-status": "ok", "device": "'''ide0-hd0'''", "locked": false, "removable": false, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"backing-image": {"virtual-size": 27074281472, "filename": "parent.qcow2", ... } }} }}<br />
<br />
=== Live creation of a new snapshot ===<br />
To create a new snapshot out of a running image, run the command:<br />
{"execute": "blockdev-snapshot-sync", "arguments": {"device": "''devicename''","snapshot-file": "''new_snapshot_name''.qcow2"}}<br />
<br />
This creates an overlay file named {{ic|''new_snapshot_name''.qcow2}} which then becomes the new active layer.<br />
<br />
== Tips and tricks ==<br />
=== Improve virtual machine performance ===<br />
<br />
There are a number of techniques that you can use to improve the performance of the virtual machine, for example (a combined invocation is sketched after the list):<br />
<br />
* Use the {{ic|-cpu host}} option to make QEMU emulate the host's exact CPU. If you do not do this, it may be trying to emulate a more generic CPU.<br />
* Especially for Windows guests, enable [http://blog.wikichoon.com/2014/07/enabling-hyper-v-enlightenments-with-kvm.html Hyper-V enlightenments]: {{ic|1=-cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time}}.<br />
* If the host machine has multiple cores, assign the guest more cores using the {{ic|-smp}} option.<br />
* Make sure you have assigned the virtual machine enough memory. By default, QEMU only assigns 128 MiB of memory to each virtual machine. Use the {{ic|-m}} option to assign more memory. For example, {{ic|-m 1024}} runs a virtual machine with 1024 MiB of memory.<br />
* Apply [[#Enabling KVM]]: add {{ic|-enable-kvm}} to the QEMU start command you use.<br />
* If supported by drivers in the guest operating system, use [http://wiki.libvirt.org/page/Virtio virtio] for network and/or block devices. For example:<br />
 $ qemu-system-x86_64 -net nic,model=virtio -net tap,ifname=tap0,script=no -drive file=''disk_image'',media=disk,if=virtio<br />
* Use TAP devices instead of user-mode networking. See [[#Tap networking with QEMU]].<br />
* If the guest OS is doing heavy writing to its disk, you may benefit from certain mount options on the host's file system. For example, you can mount an [[Ext4|ext4 file system]] with the option {{ic|1=barrier=0}}. You should read the documentation for any options that you change because sometimes performance-enhancing options for file systems come at the cost of data integrity.<br />
* If you have a raw disk image, you may want to disable the cache:<br />
$ qemu-system-x86_64 -drive file=''disk_image'',if=virtio,'''cache=none'''<br />
* Use the native Linux AIO:<br />
$ qemu-system-x86_64 -drive file=''disk_image'',if=virtio''',aio=native,cache.direct=on'''<br />
* If you are running multiple virtual machines concurrently that all have the same operating system installed, you can save memory by enabling [[wikipedia:Kernel_SamePage_Merging_(KSM)|kernel same-page merging]]. See [[#Enabling KSM]].<br />
* In some cases, memory can be reclaimed from running virtual machines by running a memory ballooning driver in the guest operating system and launching QEMU using {{ic|-device virtio-balloon}}.<br />
* It is possible to use an emulation layer for an ICH-9 AHCI controller (although it may be unstable). The AHCI emulation supports [[Wikipedia:Native_Command_Queuing|NCQ]], so multiple read or write requests can be outstanding at the same time:<br />
$ qemu-system-x86_64 -drive id=disk,file=''disk_image'',if=none -device ich9-ahci,id=ahci -device ide-drive,drive=disk,bus=ahci.0<br />
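<br />
Combining several of the techniques above, a sketch of an invocation could look like the following (core count, memory size and the disk image name are illustrative and must be adapted to your setup, and a {{ic|tap0}} interface is assumed to exist as described in [[#Tap networking with QEMU]]):<br />
<br />
 $ qemu-system-x86_64 -enable-kvm -cpu host -smp 4 -m 4096 \<br />
     -drive file=''disk_image'',if=virtio,aio=native,cache.direct=on \<br />
     -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no<br />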
<br />
See http://www.linux-kvm.org/page/Tuning_KVM for more information.<br />
<br />
=== Starting QEMU virtual machines on boot ===<br />
<br />
==== With libvirt ====<br />
<br />
If a virtual machine is set up with [[libvirt]], it can be configured with {{ic|virsh autostart}} or through the ''virt-manager'' GUI to start at host boot by going to the Boot Options for the virtual machine and selecting "Start virtual machine on host boot up".<br />
<br />
==== With systemd service ====<br />
<br />
To run QEMU VMs on boot, you can use the following systemd unit and config.<br />
<br />
{{hc|/etc/systemd/system/qemu@.service|2=<br />
[Unit]<br />
Description=QEMU virtual machine<br />
<br />
[Service]<br />
Environment="type=system-x86_64" "haltcmd=kill -INT $MAINPID"<br />
EnvironmentFile=/etc/conf.d/qemu.d/%i<br />
ExecStart=/usr/bin/qemu-${type} -name %i -nographic $args<br />
ExecStop=/bin/sh -c ${haltcmd}<br />
TimeoutStopSec=30<br />
KillMode=none<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
}}<br />
<br />
{{Note|According to the {{man|5|systemd.service}} and {{man|5|systemd.kill}} man pages, it is necessary to use the {{ic|1=KillMode=none}} option. Otherwise the main qemu process will be killed immediately after the {{ic|ExecStop}} command quits (it simply echoes one string) and your guest system will not be able to shut down correctly.<br />
}}<br />
<br />
Then create per-VM configuration files, named {{ic|/etc/conf.d/qemu.d/''vm_name''}}, with the variables {{ic|type}}, {{ic|args}} and {{ic|haltcmd}} set. Example configs:<br />
<br />
{{hc|/etc/conf.d/qemu.d/one|<nowiki><br />
type="system-x86_64"<br />
<br />
args="-enable-kvm -m 512 -hda /dev/vg0/vm1 -net nic,macaddr=DE:AD:BE:EF:E0:00 \<br />
-net tap,ifname=tap0 -serial telnet:localhost:7000,server,nowait,nodelay \<br />
-monitor telnet:localhost:7100,server,nowait,nodelay -vnc :0"<br />
<br />
haltcmd="echo 'system_powerdown' | nc localhost 7100" # or netcat/ncat<br />
<br />
# You can use other ways to shut down your VM correctly<br />
#haltcmd="ssh powermanager@vm1 sudo poweroff"<br />
</nowiki>}}<br />
<br />
{{hc|/etc/conf.d/qemu.d/two|<nowiki><br />
args="-enable-kvm -m 512 -hda /srv/kvm/vm2.img -net nic,macaddr=DE:AD:BE:EF:E0:01 \<br />
-net tap,ifname=tap1 -serial telnet:localhost:7001,server,nowait,nodelay \<br />
-monitor telnet:localhost:7101,server,nowait,nodelay -vnc :1"<br />
<br />
haltcmd="echo 'system_powerdown' | nc localhost 7101"<br />
</nowiki>}}<br />
<br />
The description of the variables is the following:<br />
* {{ic|type}} - QEMU binary to call. If specified, will be prepended with {{ic|/usr/bin/qemu-}} and that binary will be used to start the VM.<br />
* {{ic|args}} - QEMU command line to start with. Will always be prepended with {{ic|-name %i -nographic}}.<br />
* {{ic|haltcmd}} - Command to shut down a VM safely. In this example, the QEMU monitor is exposed via telnet using {{ic|-monitor telnet:..}} and the VMs are powered off via ACPI by sending {{ic|system_powerdown}} to monitor with the {{ic|nc}} command. You can use SSH or some other ways as well.<br />
<br />
To set which virtual machines will start on boot-up, [[enable]] the {{ic|qemu@''vm_name''.service}} systemd unit.<br />
<br />
=== Mouse integration ===<br />
<br />
To prevent the mouse from being grabbed when clicking on the guest operating system's window, add the options {{ic|-usb -device usb-tablet}}. This means QEMU is able to report the mouse position without having to grab the mouse. This also overrides PS/2 mouse emulation when activated. For example:<br />
<br />
$ qemu-system-x86_64 -hda ''disk_image'' -m 512 -usb -device usb-tablet<br />
<br />
If that does not work, try using the {{ic|-vga qxl}} parameter; also look at the instructions in [[#Mouse cursor is jittery or erratic]].<br />
<br />
=== Pass-through host USB device ===<br />
<br />
It is possible to access a physical device connected to a USB port of the host from the guest. The first step is to identify where the device is connected; this can be found by running the {{ic|lsusb}} command.<br />
For example:<br />
{{hc|lsusb|<br />
...<br />
Bus '''003''' Device '''007''': ID '''0781''':'''5406''' SanDisk Corp. Cruzer Micro U3<br />
}}<br />
<br />
The outputs in bold above will be useful to identify respectively the ''host_bus'' and ''host_addr'' or the ''vendor_id'' and ''product_id''.<br />
<br />
In qemu, the idea is to emulate an EHCI (USB 2) or XHCI (USB 3) controller with the option {{ic|1=-device usb-ehci,id=ehci}} or {{ic|1=-device qemu-xhci,id=xhci}} respectively and then attach the physical device to it with the option {{ic|1=-device usb-host,..}}. We will consider that ''controller_id'' is either {{ic|ehci}} or {{ic|xhci}} for the rest of this section.<br />
<br />
Then, there are two ways to connect to the USB of the host with qemu:<br />
# Identify the device and connect to it on whatever bus and address it is attached to on the host; the generic syntax is: {{bc|1=-device usb-host,bus=''controller_id''.0,vendorid=0x''vendor_id'',productid=0x''product_id''}}Applied to the device used in the example above, it becomes:{{bc|1=-device usb-ehci,id=ehci -device usb-host,bus=ehci.0,vendorid=0x'''0781''',productid=0x'''5406'''}}One can also add the {{ic|1=...,port=''port_number''}} setting to the previous option to specify in which physical port of the virtual controller the device should be attached, useful in case one wants to attach multiple USB devices to the VM.<br />
# Attach whatever is connected to a given USB bus and address; the syntax is:{{bc|1=-device usb-host,bus=''controller_id''.0,hostbus=''host_bus'',hostaddr=''host_addr''}}Applied to the bus and the address in the example above, it becomes:{{bc|1=-device usb-ehci,id=ehci -device usb-host,bus=ehci.0,hostbus='''3''',hostaddr='''7'''}}<br />
<br />
{{Note|If you encounter permission errors when running QEMU, see [[udev#About udev rules]] for information on how to set permissions of the device.}}<br />
<br />
=== USB redirection with SPICE ===<br />
<br />
When using [[#SPICE]] it is possible to redirect USB devices from the client to the virtual machine without needing to specify them in the QEMU command. It is possible to configure the number of USB slots available for redirected devices (the number of slots will determine the maximum number of devices which can be redirected simultaneously). The main advantage of using SPICE for redirection compared to the previously mentioned {{ic|-usbdevice}} method is the possibility of hot-swapping USB devices after the virtual machine has started, without needing to halt it in order to remove USB devices from the redirection or to add new ones. This method of USB redirection also allows redirecting USB devices over the network, from the client to the server. In summary, it is the most flexible method of using USB devices in a QEMU virtual machine.<br />
<br />
We need to add one EHCI/UHCI controller per available USB redirection slot desired as well as one SPICE redirection channel per slot. For example, adding the following arguments to the QEMU command you use for starting the virtual machine in SPICE mode will start the virtual machine with three available USB slots for redirection:<br />
<br />
{{bc|1=-device ich9-usb-ehci1,id=usb \<br />
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,multifunction=on \<br />
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2 \<br />
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev1 -device usb-redir,chardev=usbredirchardev1,id=usbredirdev1 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev2 -device usb-redir,chardev=usbredirchardev2,id=usbredirdev2 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev3 -device usb-redir,chardev=usbredirchardev3,id=usbredirdev3}}<br />
See [https://www.spice-space.org/usbredir.html SPICE/usbredir] for more information.<br />
<br />
Both {{ic|spicy}} from {{Pkg|spice-gtk}} (''Input > Select USB Devices for redirection'') and {{ic|remote-viewer}} from {{pkg|virt-viewer}} (''File > USB device selection'') support this feature. Please make sure that you have installed the necessary SPICE Guest Tools on the virtual machine for this functionality to work as expected (see the [[#SPICE]] section for more information).<br />
<br />
{{Warning|Keep in mind that when a USB device is redirected from the client, it will not be usable from the client operating system itself until the redirection is stopped. It is especially important never to redirect input devices (namely mouse and keyboard), since it will then be difficult to access the SPICE client menus to revert the situation, because the client will not respond to the input devices after they are redirected to the virtual machine.}}<br />
<br />
=== Enabling KSM ===<br />
<br />
Kernel Samepage Merging (KSM) is a feature of the Linux kernel that allows for an application to register with the kernel to have its pages merged with other processes that also register to have their pages merged. The KSM mechanism allows for guest virtual machines to share pages with each other. In an environment where many of the guest operating systems are similar, this can result in significant memory savings.<br />
<br />
{{Note|Although KSM may reduce memory usage, it may increase CPU usage. Also note some security issues may occur, see [[Wikipedia:Kernel same-page merging]].}}<br />
<br />
To enable KSM:<br />
<br />
# echo 1 > /sys/kernel/mm/ksm/run<br />
<br />
To make it permanent, use [[systemd#Temporary files|systemd's temporary files]]:<br />
<br />
{{hc|/etc/tmpfiles.d/ksm.conf|<br />
w /sys/kernel/mm/ksm/run - - - - 1<br />
}}<br />
<br />
If KSM is running, and there are pages to be merged (i.e. at least two similar VMs are running), then {{ic|/sys/kernel/mm/ksm/pages_shared}} should be non-zero. See https://www.kernel.org/doc/Documentation/vm/ksm.txt for more information.<br />
<br />
{{Tip|An easy way to see how well KSM is performing is to simply print the contents of all the files in that directory: {{bc|$ grep . /sys/kernel/mm/ksm/*}}}}<br />
<br />
=== Multi-monitor support ===<br />
The Linux QXL driver supports four heads (virtual screens) by default. This can be changed via the {{ic|1=qxl.heads=N}} kernel parameter.<br />
<br />
The default VGA memory size for QXL devices is 16M (VRAM size is 64M). This is not sufficient if you would like to enable two 1920x1200 monitors since that requires 2 × 1920 × 4 (color depth) × 1200 = 17.6 MiB VGA memory. This can be changed by replacing {{ic|-vga qxl}} by {{ic|<nowiki>-vga none -device qxl-vga,vgamem_mb=32</nowiki>}}. If you ever increase vgamem_mb beyond 64M, then you also have to increase the {{ic|vram_size_mb}} option.<br />
<br />
=== Copy and paste ===<br />
<br />
One way to share the clipboard between the host and the guest is to enable the SPICE remote desktop protocol and access the guest with a SPICE client.<br />
One needs to follow the steps described in [[#SPICE]]. A guest run this way will support copy and paste with the host.<br />
<br />
=== Windows-specific notes ===<br />
<br />
QEMU can run any version of Windows from Windows 95 through Windows 10.<br />
<br />
It is possible to run [[Windows PE]] in QEMU.<br />
<br />
==== Fast startup ====<br />
{{Note|An administrator account is required to change power settings.}}<br />
For Windows 8 (or later) guests it is better to disable "Turn on fast startup (recommended)" from the Power Options of the Control Panel as explained in the following [https://www.tenforums.com/tutorials/4189-turn-off-fast-startup-windows-10-a.html forum page], as it causes the guest to hang during every other boot.<br />
<br />
Fast Startup may also need to be disabled for changes to the {{ic|-smp}} option to be properly applied.<br />
<br />
==== Remote Desktop Protocol ====<br />
<br />
If you use an MS Windows guest, you might want to use RDP to connect to your guest VM. If you are using a VLAN or are not in the same network as the guest, use:<br />
<br />
$ qemu-system-x86_64 -nographic -net user,hostfwd=tcp::5555-:3389<br />
<br />
Then connect with either {{Pkg|rdesktop}} or {{Pkg|freerdp}} to the guest. For example:<br />
<br />
$ xfreerdp -g 2048x1152 localhost:5555 -z -x lan<br />
<br />
=== Clone Linux system installed on physical equipment ===<br />
<br />
A Linux system installed on physical hardware can be cloned to run in a QEMU virtual machine. See [https://coffeebirthday.wordpress.com/2018/09/14/clone-linux-system-for-qemu-virtual-machine/ Clone Linux system from hardware for QEMU virtual machine].<br />
<br />
== Troubleshooting ==<br />
<br />
=== Mouse cursor is jittery or erratic ===<br />
<br />
If the cursor jumps around the screen uncontrollably, entering this on the terminal before starting QEMU might help:<br />
<br />
$ export SDL_VIDEO_X11_DGAMOUSE=0<br />
<br />
If this helps, you can add this to your {{ic|~/.bashrc}} file.<br />
<br />
=== No visible Cursor ===<br />
<br />
Add {{ic|-show-cursor}} to QEMU's options to see a mouse cursor.<br />
<br />
If that still does not work, make sure you have set your display device appropriately, for example: {{ic|-vga qxl}}.<br />
<br />
=== Two different mouse cursors are visible ===<br />
<br />
Apply the tip [[#Mouse integration]].<br />
<br />
=== Keyboard issues when using VNC ===<br />
When using VNC, you might experience keyboard problems described (in gory detail) [https://www.berrange.com/posts/2010/07/04/more-than-you-or-i-ever-wanted-to-know-about-virtual-keyboard-handling/ here]. The solution is ''not'' to use the {{ic|-k}} option with QEMU, and to use {{ic|gvncviewer}} from {{Pkg|gtk-vnc}}. See also [http://www.mail-archive.com/libvir-list@redhat.com/msg13340.html this] message posted on libvirt's mailing list.<br />
<br />
=== Keyboard seems broken or the arrow keys do not work ===<br />
<br />
Should you find that some of your keys do not work or "press" the wrong key (in particular, the arrow keys), you likely need to specify your keyboard layout as an option. The keyboard layouts can be found in {{ic|/usr/share/qemu/keymaps}}.<br />
<br />
$ qemu-system-x86_64 -k ''keymap'' ''disk_image''<br />
<br />
=== Guest display stretches on window resize ===<br />
<br />
To restore default window size, press {{ic|Ctrl+Alt+u}}.<br />
<br />
=== ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy ===<br />
<br />
If an error message like this is printed when starting QEMU with the {{ic|-enable-kvm}} option:<br />
<br />
ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy<br />
failed to initialize KVM: Device or resource busy<br />
<br />
this means another [[hypervisor]] is currently running. It is neither recommended nor possible to run several hypervisors in parallel.<br />
<br />
=== libgfapi error message ===<br />
<br />
The error message displayed at startup:<br />
<br />
Failed to open module: libgfapi.so.0: cannot open shared object file: No such file or directory<br />
<br />
[[Install]] {{Pkg|glusterfs}} or ignore the error message, as GlusterFS is an optional dependency.<br />
<br />
=== Kernel panic on live environments ===<br />
<br />
If you start a live environment (or, more generally, boot a system), you may encounter this:<br />
<br />
[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown block(0,0)<br />
<br />
or some other boot-hindering error (e.g. failure to unpack the initramfs, or a service failing to start).<br />
In that case, try starting the VM with the {{ic|-m ''VALUE''}} switch and an appropriate amount of RAM; if the amount of RAM is too low, you will probably run into issues similar to the ones above.<br />
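<br />
For example, a sketch of booting a hypothetical live ISO with 1 GiB of memory:<br />
<br />
 $ qemu-system-x86_64 -enable-kvm -m 1G -cdrom ''iso_image''<br />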
<br />
=== Windows 7 guest suffers low-quality sound ===<br />
<br />
Using the {{ic|hda}} audio driver for a Windows 7 guest may result in low-quality sound. Changing the audio driver to {{ic|ac97}} by passing the {{ic|-soundhw ac97}} argument to QEMU and installing the AC97 driver from [https://www.realtek.com/en/component/zoo/category/pc-audio-codecs-ac-97-audio-codecs-software Realtek AC'97 Audio Codecs] in the guest may solve the problem. See [https://bugzilla.redhat.com/show_bug.cgi?id=1176761#c16 Red Hat Bugzilla – Bug 1176761] for more information.<br />
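<br />
A minimal example of starting the guest with the AC97 device instead:<br />
<br />
 $ qemu-system-x86_64 -soundhw ac97 ''disk_image''<br />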
<br />
=== Could not access KVM kernel module: Permission denied ===<br />
<br />
If you encounter the following error:<br />
<br />
libvirtError: internal error: process exited while connecting to monitor: Could not access KVM kernel module: Permission denied failed to initialize KVM: Permission denied<br />
<br />
Systemd 234 assigns a dynamic ID to the {{ic|kvm}} group (see [https://bugs.archlinux.org/task/54943 FS#54943]). To work around this error, edit the file {{ic|/etc/libvirt/qemu.conf}} and change the line:<br />
<br />
group = "78"<br />
<br />
to<br />
<br />
group = "kvm"<br />
<br />
=== "System Thread Exception Not Handled" when booting a Windows VM ===<br />
<br />
Windows 8 or Windows 10 guests may raise a generic compatibility exception at boot, namely "System Thread Exception Not Handled", which tends to be caused by legacy drivers acting strangely on real machines. On KVM machines this issue can generally be solved by setting the CPU model to {{ic|core2duo}}.<br />
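<br />
For example, when launching QEMU directly, the CPU model can be set like this:<br />
<br />
 $ qemu-system-x86_64 -enable-kvm -cpu core2duo ''disk_image''<br />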
<br />
=== Certain Windows games/applications crashing/causing a bluescreen ===<br />
<br />
Occasionally, applications running in the VM may crash unexpectedly, whereas they would run normally on a physical machine. If, while running {{ic|dmesg -wH}}, you encounter an error mentioning {{ic|MSR}}, the reason for those crashes is that KVM injects a [[wikipedia:General protection fault|General protection fault]] (GPF) when the guest tries to access unsupported [[wikipedia:Model-specific register|Model-specific registers]] (MSRs) - this often results in guest applications/OS crashing. A number of those issues can be solved by passing the {{ic|1=ignore_msrs=1}} option to the KVM module, which will ignore unimplemented MSRs.<br />
<br />
{{hc|/etc/modprobe.d/kvm.conf|2=<br />
...<br />
options kvm ignore_msrs=1<br />
...<br />
}}<br />
<br />
Cases where adding this option might help:<br />
<br />
* GeForce Experience complaining about an unsupported CPU being present.<br />
* StarCraft 2 and L.A. Noire reliably blue-screening Windows 10 with {{ic|KMODE_EXCEPTION_NOT_HANDLED}}. The blue screen information does not identify a driver file in these cases.<br />
<br />
{{Warning|While this is normally safe and some applications might not work without this, silently ignoring unknown MSR accesses could potentially break other software within the VM or other VMs.}}<br />
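<br />
Whether the option is currently in effect can be checked through the module's parameter interface (assuming the {{ic|kvm}} module is loaded):<br />
<br />
 $ cat /sys/module/kvm/parameters/ignore_msrs<br />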
<br />
=== Applications in the VM experience long delays or take a long time to start ===<br />
<br />
This may be caused by insufficient available entropy in the VM. Consider allowing the guest to access the host's entropy pool by adding a [https://wiki.qemu.org/Features/VirtIORNG VirtIO RNG device] to the VM, or by installing an entropy-generating daemon such as [[Haveged]].<br />
<br />
Anecdotally, OpenSSH takes a while to start accepting connections under insufficient entropy, without the logs revealing why.<br />
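<br />
A sketch of attaching a VirtIO RNG device backed by the host's {{ic|/dev/urandom}}, using the options described in the QEMU documentation linked above:<br />
<br />
 $ qemu-system-x86_64 -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0 ''disk_image''<br />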
<br />
=== High interrupt latency and microstuttering ===<br />
<br />
This problem manifests itself as small pauses (stutters) and is particularly noticeable in graphics-intensive applications, such as games.<br />
<br />
* One of the causes is CPU power saving features, which are controlled by [[CPU frequency scaling]]. Change this to {{ic|performance}} for all processor cores; see the example after this list.<br />
* Another possible cause is PS/2 inputs. Switch from PS/2 to Virtio inputs, see [[PCI passthrough via OVMF#Passing keyboard/mouse via Evdev]].<br />
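<br />
One way to set the {{ic|performance}} governor on all cores is with {{Pkg|cpupower}}; see [[CPU frequency scaling]] for details and alternatives:<br />
<br />
 # cpupower frequency-set -g performance<br />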
<br />
=== QXL video causes low resolution ===<br />
<br />
QEMU 4.1.0 introduced a regression where QXL video can fall back to low resolutions when displayed through SPICE. [https://bugs.launchpad.net/qemu/+bug/1843151] For example, when KMS starts, text resolution may become as low as 4x10 characters. When trying to increase GUI resolution, it may go to the lowest supported resolution.<br />
<br />
As a workaround, create your device in this form:<br />
<br />
-device qxl-vga,max_outputs=1...<br />
<br />
=== Hang during VM initramfs ===<br />
<br />
Linux 5.2.11 introduced a KVM regression where, under some circumstances, a VM may permanently hang during the early boot phase while the initramfs is being loaded or run. [https://www.spinics.net/lists/kvm/msg195171.html] Linux 5.3 fixed the regression. On the host, qemu shows 100% CPU usage multiplied by the number of virtual CPUs. The reported case involves a host with hyperthreading and a VM being given more than the host's {{ic|nproc}}/2 virtual CPUs. It is unknown what exact circumstances trigger one of the threads to delete a memory region and cause this. The workarounds are:<br />
<br />
* Upgrade to Linux 5.3.<br />
* Downgrade to Linux 5.2.10.<br />
* Until fixed, try giving the VM no more than the host's {{ic|nproc}}/2 virtual CPUs (see the example after this list).<br />
* Custom compile linux, reverting commit 2ad350fb4c (note this re-introduces a regression triggered when removing a memslot).<br />
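<br />
For example, the number of virtual CPUs could be capped at half of the host's logical CPUs directly from the shell (a hypothetical invocation):<br />
<br />
 $ qemu-system-x86_64 -enable-kvm -smp $(($(nproc) / 2)) ''disk_image''<br />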
<br />
== See also ==<br />
<br />
* [http://qemu.org Official QEMU website]<br />
* [http://www.linux-kvm.org Official KVM website]<br />
* [http://qemu.weilnetz.de/qemu-doc.html QEMU Emulator User Documentation]<br />
* [https://en.wikibooks.org/wiki/QEMU QEMU Wikibook]<br />
* [http://alien.slackbook.org/dokuwiki/doku.php?id=slackware:qemu Hardware virtualization with QEMU] by AlienBOB (last updated in 2008)<br />
* [http://blog.falconindy.com/articles/build-a-virtual-army.html Building a Virtual Army] by Falconindy<br />
* [http://git.qemu.org/?p=qemu.git;a=tree;f=docs Latest docs]<br />
* [http://qemu.weilnetz.de/ QEMU on Windows]<br />
* [[wikipedia:Qemu|Wikipedia]]<br />
* [[debian:QEMU|Debian Wiki - QEMU]]<br />
* [https://people.gnome.org/~markmc/qemu-networking.html QEMU Networking on gnome.org]<br />
* [http://bsdwiki.reedmedia.net/wiki/networking_qemu_virtual_bsd_systems.html Networking QEMU Virtual BSD Systems]<br />
* [https://www.gnu.org/software/hurd/hurd/running/qemu.html QEMU on gnu.org]<br />
* [https://wiki.freebsd.org/qemu QEMU on FreeBSD as host]<br />
* [https://wiki.mikejung.biz/KVM_/_Xen KVM/QEMU Virtio Tuning and SSD VM Optimization Guide]<br />
* [https://doc.opensuse.org/documentation/leap/virtualization/html/book.virt/part.virt.qemu.html Managing Virtual Machines with QEMU - OpenSUSE documentation]<br />
* [https://www.ibm.com/support/knowledgecenter/en/linuxonibm/liaat/liaatkvm.htm KVM on IBM Knowledge Center]</div>