[[de:QEMU]]
[[es:QEMU]]
[[fr:QEMU]]
[[ja:QEMU]]
[[zh-hans:QEMU]]
[[zh-hant:QEMU]]
{{Related articles start}}
{{Related|:Category:Hypervisors}}
{{Related articles end}}

According to the [https://wiki.qemu.org/Main_Page QEMU about page], "QEMU is a generic and open source machine emulator and virtualizer."

When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your x86 PC). By using dynamic translation, it achieves very good performance.
== Installation ==

[[Install]] the {{Pkg|qemu-full}} package (or {{Pkg|qemu-base}} for the version without GUI, or {{Pkg|qemu-desktop}} for the version with only x86 emulation by default) and any of the optional packages below for your needs:

* {{Pkg|qemu-block-gluster}} - [[Glusterfs]] block support
* {{Pkg|qemu-block-iscsi}} - [[iSCSI]] block support
* {{Pkg|samba}} - [[Samba|SMB/CIFS]] server support

Alternatively, {{Pkg|qemu-user-static}} exists as a usermode and static variant.

=== QEMU variants ===

; Full-system emulation
: In this mode, QEMU emulates a full system, including one or several processors and various peripherals. It is more accurate but slower, and does not require the emulated OS to be Linux.
: QEMU commands for full-system emulation are named {{ic|qemu-system-''target_architecture''}}, e.g. {{ic|qemu-system-x86_64}} for emulating [[Wikipedia:x86_64|x86_64]] CPUs, {{ic|qemu-system-i386}} for Intel [[Wikipedia:i386|32-bit x86]] CPUs, {{ic|qemu-system-arm}} for [[Wikipedia:ARM architecture family#32-bit architecture|ARM (32 bits)]], {{ic|qemu-system-aarch64}} for [[Wikipedia:AArch64|ARM64]], etc.
: If the target architecture matches the host CPU, this mode may still benefit from a significant speedup by using a hypervisor like [[#Enabling KVM|KVM]] or Xen.
; [https://www.qemu.org/docs/master/user/main.html Usermode emulation]
: In this mode, QEMU is able to invoke a Linux executable compiled for a (potentially) different architecture by leveraging the host system resources. There may be compatibility issues, e.g. some features may not be implemented, dynamically linked executables will not work out of the box (see [[#Chrooting into arm/arm64 environment from x86_64]] to address this) and only Linux is supported (although [https://wiki.winehq.org/Emulation Wine may be used] for running Windows executables).
: QEMU commands for usermode emulation are named {{ic|qemu-''target_architecture''}}, e.g. {{ic|qemu-x86_64}} for emulating 64-bit CPUs.

QEMU is offered in dynamically-linked and statically-linked variants:

Note that headless and non-headless versions install commands with the same name (e.g. {{ic|qemu-system-x86_64}}) and thus cannot both be installed at the same time.

=== Details on packages available in Arch Linux ===

* The {{Pkg|qemu-desktop}} package provides the {{ic|x86_64}} architecture emulators for full-system emulation ({{ic|qemu-system-x86_64}}). The {{Pkg|qemu-emulators-full}} package provides the {{ic|x86_64}} usermode variant ({{ic|qemu-x86_64}}) and also, for the rest of the supported architectures, both full-system and usermode variants (e.g. {{ic|qemu-system-arm}} and {{ic|qemu-arm}}).
* The headless versions of these packages (only applicable to full-system emulation) are {{Pkg|qemu-base}} ({{ic|x86_64}}-only) and {{Pkg|qemu-emulators-full}} (rest of architectures).
* Full-system emulation can be expanded with some QEMU modules present in separate packages: {{Pkg|qemu-block-gluster}}, {{Pkg|qemu-block-iscsi}} and {{Pkg|qemu-guest-agent}}.
* {{Pkg|qemu-user-static}} provides a usermode and static variant for all target architectures supported by QEMU. The installed QEMU commands are named {{ic|qemu-''target_architecture''-static}}, for example, {{ic|qemu-x86_64-static}} for 64-bit x86 CPUs.

{{Note|At present, Arch does not offer a full-system mode and statically linked variant (neither officially nor via AUR), as this is usually not needed.}}

[[Libvirt]] provides a convenient way to manage QEMU virtual machines. See [[Libvirt#Client|list of libvirt clients]] for available front-ends.

== Creating new virtualized system ==

=== Creating a hard disk image ===

{{Accuracy|If I get the man page right the raw format only allocates the full size if the filesystem does not support "holes" or it is explicitly told to preallocate. See {{man|1|qemu-img|NOTES}}.}}

{{Tip|See [[Wikibooks:QEMU/Images]] for more information on QEMU images.}}

A hard disk image can be ''raw'', so that it is literally byte-by-byte the same as what the guest sees, and will always use the full capacity of the guest hard drive on the host.

Alternatively, the hard disk image can be in a format such as ''qcow2'' which only allocates space to the image file when the guest operating system actually writes to those sectors on its virtual hard disk. The image appears as the full size to the guest operating system, even though it may take up only a very small amount of space on the host system. This image format also supports QEMU snapshotting functionality (see [[#Creating and managing snapshots via the monitor console]] for details). However, using this format instead of ''raw'' will likely affect performance.

QEMU provides the {{ic|qemu-img}} command to create hard disk images. For example, to create a 4 GiB image in the ''raw'' format:

 $ qemu-img create -f raw ''image_file'' 4G

{{Note|You can also simply create a ''raw'' image by creating a file of the needed size using {{ic|dd}} or {{ic|fallocate}}.}}

{{Warning|If you store the hard disk images on a [[Btrfs]] file system, you should consider disabling [[Btrfs#Copy-on-Write (CoW)|Copy-on-Write]] for the directory before creating any images. For the qcow2 format, this can be specified with the {{ic|nocow}} option when creating the image: {{bc|1=$ qemu-img create -f qcow2 ''image_file'' -o nocow=on 4G}}}}

==== Overlay storage images ====

You can create a base storage image (the ''backing'' image) once and have QEMU keep mutations to it in an overlay image; discarding or re-creating the overlay then reverts the disk to the state of the backing image. To create an overlay image, issue a command like:

 $ qemu-img create -o backing_file=''img1.raw'',backing_fmt=''raw'' -f ''qcow2'' ''img1.cow''

After that you can run your QEMU virtual machine as usual (see [[#Running virtualized system]]):

 $ qemu-system-x86_64 ''img1.cow''
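
If you later want to merge the changes recorded in the overlay back into the backing image, ''qemu-img'' can do this directly; a minimal sketch, assuming the overlay ''img1.cow'' was created as above:

 $ qemu-img commit ''img1.cow''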

==== Resizing an image ====

{{Warning|Resizing an image containing an NTFS boot file system could make the operating system installed on it unbootable. It is recommended to create a backup first.}}

The {{ic|qemu-img}} executable has the {{ic|resize}} option, which enables easy resizing of a hard drive image. It works for both ''raw'' and ''qcow2''. For example, to increase image space by 10 GiB, run:

 $ qemu-img resize ''disk_image'' +10G

After enlarging the disk image, you must use file system and partitioning tools inside the virtual machine to actually begin using the new space.

===== Shrinking an image =====

When shrinking a disk image, you must first reduce the allocated file systems and partition sizes using the file system and partitioning tools inside the virtual machine and then shrink the disk image accordingly. For a Windows guest, this can be performed from the "create and format hard disk partitions" control panel.

{{Warning|Proceeding to shrink the disk image without reducing the guest partition sizes will result in data loss.}}

Then, to decrease image space by 10 GiB, run:

 $ qemu-img resize --shrink ''disk_image'' -10G

==== Converting an image ====
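
Conversion between image formats is handled by {{ic|qemu-img convert}}; as a minimal sketch, converting a hypothetical ''raw'' image to ''qcow2'':

 $ qemu-img convert -f raw -O qcow2 ''input.img'' ''output.qcow2''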

=== Preparing the installation media ===

To install an operating system into your disk image, you need the installation medium (e.g. optical disc, USB drive, or ISO image) for the operating system. The installation medium should not be mounted because QEMU accesses the media directly.

{{Tip|If using an optical disc, it is a good idea to first dump the media to a file because this both improves performance and does not require you to have direct access to the devices (that is, you can run QEMU as a regular user without having to change access permissions on the media's device file). For example, if the CD-ROM device node is named {{ic|/dev/cdrom}}, you can dump it to a file with the command: {{bc|1=$ dd if=/dev/cdrom of=''cd_image.iso'' bs=4k}}}}

=== Installing the operating system ===

This is the first time you will need to start the emulator. To install the operating system on the disk image, you must attach both the disk image and the installation media to the virtual machine, and have it boot from the installation media.
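
For example, on x86_64, to boot from a bootable ISO file as a CD-ROM together with a raw disk image (a sketch; substitute your own ''iso_image'' and ''disk_image''):

 $ qemu-system-x86_64 -cdrom ''iso_image'' -boot order=d -drive file=''disk_image'',format=raw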

After the operating system has finished installing, the QEMU image can be booted directly (see [[#Running virtualized system]]).

{{Note|By default only 128 MiB of memory is assigned to the machine. The amount of memory can be adjusted with the {{ic|-m}} switch, for example {{ic|-m 512M}} or {{ic|-m 2G}}.}}

== Running virtualized system ==

Options are the same for all {{ic|qemu-system-*}} binaries; see {{man|1|qemu}} for documentation of all options.
Usually, if an option has many possible values, you can use

 $ qemu-system-x86_64 ''option'' help

to list all possible values. If it supports properties, you can use

 $ qemu-system-x86_64 ''option'' ''value'',help

to list all available properties. For example:

 $ qemu-system-x86_64 -machine help
 $ qemu-system-x86_64 -machine q35,help
 $ qemu-system-x86_64 -device help
 $ qemu-system-x86_64 -device qxl,help

You can use these methods and the {{man|1|qemu}} documentation to understand the options used in the following sections.

By default, QEMU will show the virtual machine's video output in a window. One thing to keep in mind: when you click inside the QEMU window, the mouse pointer is grabbed. To release it, press {{ic|Ctrl+Alt+g}}.

=== Enabling KVM ===

KVM (''Kernel-based Virtual Machine'') full virtualization must be supported by your Linux kernel and your hardware, and necessary [[kernel modules]] must be loaded. See [[KVM]] for more information.

To start QEMU in KVM mode, append {{ic|-accel kvm}} to the additional start options. To check if KVM is enabled for a running virtual machine, enter the [[#QEMU monitor]] and type {{ic|info kvm}}.
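
For example, a sketch of a KVM-accelerated invocation with the host CPU model passed through (substitute your own ''disk_image''):

 $ qemu-system-x86_64 -accel kvm -cpu host -m 2G ''disk_image''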

{{Note|
* The argument {{ic|1=accel=kvm}} of the {{ic|-machine}} option is equivalent to the {{ic|-enable-kvm}} or the {{ic|-accel kvm}} option.
* CPU model {{ic|host}} requires KVM.
* If you start your virtual machine with a GUI tool and experience very bad performance, you should check for proper KVM support, as QEMU may be falling back to software emulation.
* KVM needs to be enabled in order to start Windows 7 or Windows 8 properly without a ''blue screen''.
}}

=== Enabling IOMMU (Intel VT-d/AMD-Vi) support ===

{{Note|
On Intel CPU based systems creating an IOMMU device in a QEMU guest with {{ic|-device intel-iommu}} will disable PCI passthrough with an error like: {{bc|Device at bus pcie.0 addr 09.0 requires iommu notifier which is currently not supported by intel-iommu emulation}} While adding the kernel parameter {{ic|1=intel_iommu=on}} is still needed for remapping IO (e.g. [[PCI passthrough via OVMF#Isolating the GPU|PCI passthrough with vfio-pci]]), {{ic|-device intel-iommu}} should not be set if PCI passthrough is required.
}}

=== Booting in UEFI mode ===

The default firmware used by QEMU is [https://www.coreboot.org/SeaBIOS SeaBIOS], which is a Legacy BIOS implementation. QEMU uses {{ic|/usr/share/qemu/bios-256k.bin}} (provided by the {{Pkg|seabios}} package) as a default read-only (ROM) image. You can use the {{ic|-bios}} argument to select another firmware file. However, UEFI requires writable memory to work properly, so you need to emulate [https://wiki.qemu.org/Features/PC_System_Flash PC System Flash] instead.

[https://github.com/tianocore/tianocore.github.io/wiki/OVMF OVMF] is a TianoCore project to enable UEFI support for Virtual Machines. It can be [[install]]ed with the {{Pkg|edk2-ovmf}} package.

There are two ways to use OVMF as a firmware. The first is to copy {{ic|/usr/share/edk2/x64/OVMF.4m.fd}}, make it writable and use it as a pflash drive:

 -drive if=pflash,format=raw,file=''/copy/of/OVMF.4m.fd''

All changes to the UEFI settings will be saved directly to this file.

Another, more preferable way is to split OVMF into two files. The first one will be read-only and store the firmware executable, and the second one will be used as a writable variable store. The advantage is that you can use the firmware file directly without copying, so it will be updated automatically by [[pacman]].

Use {{ic|/usr/share/edk2/x64/OVMF_CODE.4m.fd}} as the first read-only pflash drive. Copy {{ic|/usr/share/edk2/x64/OVMF_VARS.4m.fd}}, make it writable and use it as the second writable pflash drive:

 -drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2/x64/OVMF_CODE.4m.fd \
 -drive if=pflash,format=raw,file=''/copy/of/OVMF_VARS.4m.fd''

If secure boot is wanted, use the q35 machine type and replace {{ic|/usr/share/edk2/x64/OVMF_CODE.4m.fd}} with {{ic|/usr/share/edk2/x64/OVMF_CODE.secboot.4m.fd}}.
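
Putting it together, a sketch of a UEFI boot using the split firmware layout described above (substitute your own ''disk_image'' and the path of your ''OVMF_VARS'' copy):

 $ qemu-system-x86_64 \
     -drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2/x64/OVMF_CODE.4m.fd \
     -drive if=pflash,format=raw,file=''/copy/of/OVMF_VARS.4m.fd'' \
     -drive file=''disk_image'',format=raw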

=== Trusted Platform Module emulation ===

QEMU can emulate a [[Trusted Platform Module]], which is required by some systems such as Windows 11 (which requires TPM 2.0).

[[Install]] the {{Pkg|swtpm}} package, which provides a software TPM implementation. Create some directory for storing TPM data ({{ic|''/path/to/mytpm''}} will be used as an example). Run this command to start the emulator:

 $ swtpm socket --tpm2 --tpmstate dir=''/path/to/mytpm'' --ctrl type=unixio,path=''/path/to/mytpm/swtpm-sock''

{{ic|''/path/to/mytpm/swtpm-sock''}} will be created by ''swtpm'': this is a UNIX socket to which QEMU will connect. You can put it in any directory.

By default, ''swtpm'' starts a TPM version 1.2 emulator. The {{ic|--tpm2}} option enables TPM 2.0 emulation.

Finally, add the following options to QEMU:

 -chardev socket,id=chrtpm,path=''/path/to/mytpm/swtpm-sock'' \
 -tpmdev emulator,id=tpm0,chardev=chrtpm \
 -device tpm-tis,tpmdev=tpm0

and TPM will be available inside the virtual machine. After shutting down the virtual machine, ''swtpm'' will be automatically terminated.

See [https://qemu-project.gitlab.io/qemu/specs/tpm.html the QEMU documentation] for more information.
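
Putting the pieces together, a minimal sketch of a full invocation (assumes ''swtpm'' was started as shown above and ''disk_image'' is your disk image):

 $ qemu-system-x86_64 ''disk_image'' \
     -chardev socket,id=chrtpm,path=''/path/to/mytpm/swtpm-sock'' \
     -tpmdev emulator,id=tpm0,chardev=chrtpm \
     -device tpm-tis,tpmdev=tpm0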

If the guest OS still does not recognize the TPM device, try adjusting the ''CPU Models and Topology'' options, as these can interfere with TPM detection.

== Sharing data between host and guest ==

=== Network ===

Data can be shared between the host and guest OS using any network protocol that can transfer files, such as [[NFS]], [[SMB]], [[Wikipedia:Network block device|NBD]], HTTP, [[Very Secure FTP Daemon|FTP]], or [[SSH]], provided that you have set up the network appropriately and enabled the appropriate services.

The default user-mode networking allows the guest to access the host OS at the IP address 10.0.2.2. Any servers that you are running on your host OS, such as a SSH server or SMB server, will be accessible at this IP address. So on the guests, you can mount directories exported on the host via [[SMB]] or [[NFS]], or you can access the host's HTTP server, etc.
It will not be possible for the host OS to access servers running on the guest OS, but this can be done with other network configurations (see [[#Tap networking with QEMU]]).

=== QEMU's port forwarding ===

{{Note|QEMU's port forwarding is IPv4-only. IPv6 port forwarding is not implemented and the last patches were proposed in 2018.[https://lore.kernel.org/qemu-devel/1540512223-21199-1-git-send-email-max7255@yandex-team.ru/T/#u]}}

QEMU can forward ports from the host to the guest to enable e.g. connecting from the host to an SSH server running on the guest.
For example, to bind port 60022 on the host with port 22 (SSH) on the guest, start QEMU with a command like:

 $ qemu-system-x86_64 ''disk_image'' -nic user,hostfwd=tcp::60022-:22

Make sure the sshd is running on the guest and connect with:

 $ ssh ''guest-user''@127.0.0.1 -p 60022

You can use [[SSHFS]] to mount the guest's file system at the host for shared read and write access.
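
For example, a sketch mounting the home directory of ''guest-user'' through the forwarded port above (assumes {{Pkg|sshfs}} is installed and ''/mnt/guest'' exists):

 $ sshfs -p 60022 ''guest-user''@127.0.0.1:/home/''guest-user'' /mnt/guest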

To forward several ports, you just repeat the {{ic|hostfwd}} in the {{ic|-nic}} argument, e.g. for VNC's port:

 $ qemu-system-x86_64 ''disk_image'' -nic user,hostfwd=tcp::60022-:22,hostfwd=tcp::5900-:5900

=== QEMU's built-in SMB server ===

QEMU's documentation says it has a "built-in" SMB server, but actually it just starts up [[Samba]] on the host with an automatically generated {{ic|smb.conf}} file located in {{ic|/tmp/qemu-smb.''random_string''}} and makes it accessible to the guest at a different IP address (10.0.2.4 by default). This only works for user networking, and is useful when you do not want to start the normal [[Samba]] service on the host, which the guest can also access if you have set up shares on it.

Only a single directory can be set as shared with the option {{ic|1=smb=}}, but adding more directories (even while the virtual machine is running) could be as easy as creating symbolic links in the shared directory if QEMU configured SMB to follow symbolic links. It does not do so, but the configuration of the running SMB server can be changed as described below.

''Samba'' must be installed on the host. To enable this feature, start QEMU with a command like:

 $ qemu-system-x86_64 -nic user,id=nic0,smb=''shared_dir_path'' ''disk_image''

where {{ic|''shared_dir_path''}} is a directory that you want to share between the guest and host.

Then, in the guest, you will be able to access the shared directory on the host 10.0.2.4 with the share name "qemu". For example, in Windows Explorer you would go to {{ic|\\10.0.2.4\qemu}}.

{{Note|
* If you are using sharing options multiple times like {{ic|1=-net user,smb=''shared_dir_path1'' -net user,smb=''shared_dir_path2''}} or {{ic|1=-net user,smb=''shared_dir_path1'',smb=''shared_dir_path2''}} then it will share only the last defined one.
* If you cannot access the shared folder and the guest system is Windows, check that the [http://ecross.mvps.org/howto/enable-netbios-over-tcp-ip-with-windows.htm NetBIOS protocol is enabled]{{Dead link|2023|05|06|status=domain name not resolved}} and that a firewall does not block [https://technet.microsoft.com/en-us/library/cc940063.aspx ports] used by the NetBIOS protocol.
* If you cannot access the shared folder and the guest system is Windows 10 Enterprise or Education or Windows Server 2016, [https://support.microsoft.com/en-us/help/4046019 enable guest access].
* If you use [[#Tap networking with QEMU]], use {{ic|1=-device virtio-net,netdev=vmnic -netdev user,id=vmnic,smb=''shared_dir_path''}} to get SMB.
}}

One way to share multiple directories and to add or remove them while the virtual machine is running, is to share an empty directory and create/remove symbolic links to the directories in the shared directory. For this to work, the configuration of the running SMB server can be changed with the following script, which also allows the execution of files on the guest that are not set executable on the host:

 #!/bin/sh
 eval $(ps h -C smbd -o pid,args | grep /tmp/qemu-smb | gawk '{print "pid="$1";conf="$6}')
 echo "[global]
 allow insecure wide links = yes
 [qemu]
 follow symlinks = yes
 wide links = yes
 acl allow execute always = yes" >> "$conf"
 # in case the change is not detected automatically:
 smbcontrol --configfile="$conf" "$pid" reload-config

This can be applied to the running server started by QEMU only after the guest has connected to the network drive the first time. An alternative to this method is to add additional shares to the configuration file like so:

 echo "[''myshare'']
 path=''another_path''
 read only=no
 guest ok=yes
 force user=''username''" >> "$conf"

This share will be available on the guest as {{ic|\\10.0.2.4\''myshare''}}.

=== Using filesystem passthrough and VirtFS ===

See the [https://wiki.qemu.org/Documentation/9psetup QEMU documentation].
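
As a quick reference, a sketch of a 9p passthrough (the tag {{ic|host0}} and ''shared_dir_path'' are arbitrary names; see the linked documentation for the authoritative options):

 $ qemu-system-x86_64 -virtfs local,path=''shared_dir_path'',mount_tag=host0,security_model=mapped-xattr,id=host0 ''disk_image''

Inside a Linux guest, the share can then be mounted with:

 # mount -t 9p -o trans=virtio host0 /mnt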

=== Host file sharing with virtiofsd ===

{{Style|See [[Help:Style/Formatting and punctuation]].}}

virtiofsd is shipped with the QEMU package. Documentation is available [https://qemu-stsquad.readthedocs.io/en/docs-next/tools/virtiofsd.html online]{{Dead link|2023|05|06|status=404}} or in {{ic|/usr/share/doc/qemu/qemu/tools/virtiofsd.html}} on the local file system with {{Pkg|qemu-docs}} installed.

Add the user that runs QEMU to the {{ic|kvm}} [[user group]], because it needs to access the virtiofsd socket. You might have to log out for the change to take effect.

{{Accuracy|Running services as root is not secure. Also the process should be wrapped in a systemd service.}}

Start virtiofsd as root:

 # /usr/lib/virtiofsd --socket-path=/var/run/qemu-vm-001.sock --shared-dir /tmp/vm-001 --cache always

where

* {{ic|/var/run/qemu-vm-001.sock}} is a socket file,
* {{ic|/tmp/vm-001}} is a shared directory between the host and the guest virtual machine.

The created socket file has root-only access permission. Give group kvm access to it with:

 # chgrp kvm qemu-vm-001.sock; chmod g+rxw qemu-vm-001.sock

Add the following configuration options when starting the virtual machine:

 -object memory-backend-memfd,id=mem,size=4G,share=on \
 -numa node,memdev=mem \
 -chardev socket,id=char0,path=/var/run/qemu-vm-001.sock \
 -device vhost-user-fs-pci,chardev=char0,tag=myfs

where

{{Expansion|Explain the remaining options (or remove them if they are not necessary).}}

* {{ic|1=size=4G}} shall match the size specified with the {{ic|-m 4G}} option,
* {{ic|/var/run/qemu-vm-001.sock}} points to the socket file started earlier.

{{Style|The section should not be specific to Windows.}}

Remember that the guest must be configured to enable sharing. For Windows there are [https://virtio-fs.gitlab.io/howto-windows.html instructions]. Once configured, Windows will have the {{ic|Z:}} drive mapped automatically with the shared directory content.

Your Windows 10 guest system is properly configured if it has:

* the VirtioFSSService Windows service,
* the WinFsp.Launcher Windows service,
* the VirtIO FS Device driver under "System devices" in Windows Device Manager.

If the above are installed and the {{ic|Z:}} drive is still not listed, try repairing "Virtio-win-guest-tools" in Windows ''Add/Remove programs''.
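
On a Linux guest, the share can instead be mounted using the tag set by {{ic|1=-device vhost-user-fs-pci,...,tag=myfs}} above:

 # mount -t virtiofs myfs /mnt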

=== Mounting a partition of the guest on the host ===

It can be useful to mount a drive image under the host system; it can be a way to transfer files in and out of the guest. This should be done when the virtual machine is not running.

The procedure to mount the drive on the host depends on the type of QEMU image, ''raw'' or ''qcow2''. We detail thereafter the steps to mount a drive in the two formats in [[#Mounting a partition from a raw image]] and [[#Mounting a partition from a qcow2 image]]. For the full documentation see [[Wikibooks:QEMU/Images#Mounting an image on the host]].

{{Warning|You must unmount the partitions before running the virtual machine again. Otherwise, data corruption is very likely to occur.}}

==== Mounting a partition from a raw image ====

It is possible to mount partitions that are inside a raw disk image file by setting them up as loopback devices.

===== With manually specifying byte offset =====

One way to mount a disk image partition is to mount the disk image at a certain offset using a command like the following:

 # mount -o loop,offset=32256 ''disk_image'' ''mountpoint''

The {{ic|1=offset=32256}} option is actually passed to the {{ic|losetup}} program to set up a loopback device that starts at byte offset 32256 of the file and continues to the end. This loopback device is then mounted. You may also use the {{ic|sizelimit}} option to specify the exact size of the partition, but this is usually unnecessary.

Depending on your disk image, the needed partition may not start at offset 32256. Run {{ic|fdisk -l ''disk_image''}} to see the partitions in the image. fdisk gives the start and end offsets in 512-byte sectors, so multiply by 512 to get the correct offset to pass to {{ic|mount}}.
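
For example, if ''fdisk'' shows a partition starting at sector 2048, the offset is 2048 * 512 = 1048576 bytes:

 # mount -o loop,offset=1048576 ''disk_image'' ''mountpoint''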

===== With loop module autodetecting partitions =====

The Linux loop driver actually supports partitions in loopback devices, but it is disabled by default. To enable it, do the following:

* Get rid of all your loopback devices (unmount all mounted images, etc.).
* [[Kernel modules#Manual module handling|Unload]] the {{ic|loop}} kernel module, and load it with the {{ic|1=max_part=15}} parameter set. Additionally, the maximum number of loop devices can be controlled with the {{ic|max_loop}} parameter.

{{Tip|You can put an entry in {{ic|/etc/modprobe.d}} to load the loop module with {{ic|1=max_part=15}} every time, or you can put {{ic|1=loop.max_part=15}} on the kernel command-line, depending on whether you have the {{ic|loop.ko}} module built into your kernel or not.}}

Set up your image as a loopback device:

 # losetup -f -P ''disk_image''

Then, if the device created was {{ic|/dev/loop0}}, additional devices {{ic|/dev/loop0p''X''}} will have been automatically created, where X is the number of the partition. These partition loopback devices can be mounted directly. For example:

 # mount /dev/loop0p1 ''mountpoint''

To mount the disk image with ''udisksctl'', see [[Udisks#Mount loop devices]].

===== With kpartx =====

''kpartx'' from the {{Pkg|multipath-tools}} package can read a partition table on a device and create a new device for each partition. For example:

 # kpartx -a ''disk_image''

This will set up the loopback device and create the necessary partition device(s) in {{ic|/dev/mapper/}}.

==== Mounting a partition from a qcow2 image ====

We will use {{ic|qemu-nbd}}, which lets us use the NBD (''network block device'') protocol to share the disk image.

First, we need the ''nbd'' module loaded:

 # modprobe nbd max_part=16
Then, we can share the disk and create the device entries:

 # qemu-nbd -c /dev/nbd0 ''/path/to/image.qcow2''

Discover the partitions:

 # partprobe /dev/nbd0

''fdisk'' can be used to get information regarding the different partitions in {{ic|''nbd0''}}:

{{hc|# fdisk -l /dev/nbd0|2=
Disk /dev/nbd0: 25.2 GiB, 27074281472 bytes, 52879456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa6a4d542

Device      Boot  Start      End  Sectors  Size Id Type
/dev/nbd0p1 *      2048  1026047  1024000  500M  7 HPFS/NTFS/exFAT
/dev/nbd0p2      1026048 52877311 51851264 24.7G  7 HPFS/NTFS/exFAT}}

Then mount any partition of the drive image, for example partition 2:

 # mount /dev/nbd0'''p2''' ''mountpoint''

Afterwards, it is important to unmount the image and reverse the previous steps, i.e. unmount the partition and disconnect the nbd device:

 # umount ''mountpoint''
 # qemu-nbd -d /dev/nbd0

=== Using any real partition as the single primary partition of a hard disk image ===

Sometimes, you may wish to use one of your system partitions from within QEMU. Using a raw partition for a virtual machine will improve performance, as the read and write operations do not go through the file system layer on the physical host. Such a partition also provides a way to share data between the host and guest.

In Arch Linux, device files for raw partitions are, by default, owned by ''root'' and the ''disk'' group. If you would like to have a non-root user be able to read and write to a raw partition, you must either change the owner of the partition's device file to that user, add that user to the ''disk'' group, or use [[ACL]] for more fine-grained access control.


# fdisk /dev/md0
{{Warning|
* Although it is possible, it is not recommended to allow virtual machines to alter critical data on the host system, such as the root partition.
* You must not mount a file system on a partition read-write on both the host and the guest at the same time. Otherwise, data corruption will result.
}}


Press {{ic|X}} to enter the expert menu. Set the number of 's'ectors per track so that the size of one cylinder matches the size of your MBR file. For two heads and a sector size of 512, the number of sectors per track should be 16, so we get cylinders of size 2x16x512=16k.
After doing so, you can attach the partition to a QEMU virtual machine as a virtual disk.
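
For example, a sketch attaching such a partition as a raw virtual disk (the device path is illustrative):

 $ qemu-system-x86_64 -drive file=/dev/sda3,format=raw ''[...]''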


Now, press {{ic|R}} to return to the main menu.
However, things are a little more complicated if you want to have the ''entire'' virtual machine contained in a partition. In that case, there would be no disk image file to actually boot the virtual machine since you cannot install a boot loader to a partition that is itself formatted as a file system and not as a partitioned device with an MBR. Such a virtual machine can be booted either by: [[#Specifying kernel and initrd manually]], [[#Simulating a virtual disk with MBR]], [[#Using the device-mapper]], [[#Using a linear RAID]] or [[#Using a Network Block Device]].


Press {{ic|P}} and check that the cylinder size is now 16k.
==== Specifying kernel and initrd manually ====


Now, create a single primary partition corresponding to {{ic|/dev/hda''N''}}. It should start at cylinder 2 and end at the end of the disk (note that the number of cylinders now differs from what it was when you entered fdisk).
QEMU supports loading [[Kernels|Linux kernels]] and [[initramfs|init ramdisks]] directly, thereby circumventing boot loaders such as [[GRUB]]. It can then be launched with the physical partition containing the root file system as the virtual disk, which will not appear to be partitioned. This is done by issuing a command similar to the following:


Finally, 'w'rite the result to the file: you are done. You now have a partition you can mount directly from your host, as well as part of a QEMU disk image:
{{Note|In this example, it is the '''host's''' images that are being used, not the guest's. If you wish to use the guest's images, either mount {{ic|/dev/sda3}} read-only (to protect the file system from the host) and specify the {{ic|/full/path/to/images}} or use some kexec hackery in the guest to reload the guest's kernel (extends boot time). }}


  $ qemu-system-x86_64 -hdc /dev/md0 ''[...]''
  $ qemu-system-x86_64 -kernel /boot/vmlinuz-linux -initrd /boot/initramfs-linux.img -append root=/dev/sda /dev/sda3


You can, of course, safely set any bootloader on this disk image using QEMU, provided the original {{ic|/dev/hda''N''}} partition contains the necessary tools.
In the above example, the physical partition being used for the guest's root file system is {{ic|/dev/sda3}} on the host, but it shows up as {{ic|/dev/sda}} on the guest.


===== Network Block Device =====
You may, of course, specify any kernel and initrd that you want, and not just the ones that come with Arch Linux.


Instead of the methods described above, you may use {{ic|nbd-server}} (from the {{Pkg|nbd}} package) to create an MBR wrapper for QEMU.
When there are multiple [[kernel parameters]] to be passed to the {{ic|-append}} option, they need to be quoted using single or double quotes. For example:


Assuming you have already set up your MBR wrapper file like above, rename it to {{ic|wrapper.img.0}}. Then create a symbolic link named {{ic|wrapper.img.1}} in the same directory, pointing to your partition. Then put the following script in the same directory:
... -append 'root=/dev/sda1 console=ttyS0'
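
Putting it together, a full invocation might look like the following sketch, combining a quoted {{ic|-append}} with a serial console (paths and parameters are illustrative):

 $ qemu-system-x86_64 -kernel /boot/vmlinuz-linux -initrd /boot/initramfs-linux.img -append 'root=/dev/sda console=ttyS0 rw' -nographic /dev/sda3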


#!/bin/sh
==== Simulating a virtual disk with MBR ====
dir="$(realpath "$(dirname "$0")")"
cat >wrapper.conf <<EOF
[generic]
allowlist = true
listenaddr = 127.713705
port = 10809
[wrap]
exportname = $dir/wrapper.img
multifile = true
EOF
nbd-server \
    -C wrapper.conf \
    -p wrapper.pid \
    "$@"


The {{ic|.0}} and {{ic|.1}} suffixes are essential; the rest can be changed. After running the above script (which you may need to do as root to make sure nbd-server is able to access the partition), you can launch QEMU with:
A more complicated way to have a virtual machine use a physical partition, while keeping that partition formatted as a file system (rather than having the guest repartition it as if it were a whole disk), is to simulate an MBR for it so that it can boot using a boot loader such as GRUB.


qemu-system-x86_64 -drive file=nbd:127.713705:10809:exportname=wrap ''[...]''
For the following, suppose you have a plain, unmounted {{ic|/dev/hda''N''}} partition with some file system on it you wish to make part of a QEMU disk image. The trick is to dynamically prepend a master boot record (MBR) to the real partition you wish to embed in a QEMU raw disk image. More generally, the partition can be any part of a larger simulated disk, in particular a block device that simulates the original physical disk but only exposes {{ic|/dev/hda''N''}} to the virtual machine.


== Networking ==
A virtual disk of this type can be represented by a VMDK file that contains references to (a copy of) the MBR and the partition, but QEMU does not support this VMDK format. For instance, a virtual disk [https://www.virtualbox.org/manual/ch09.html#rawdisk created by]


{{Style|Network topologies (sections [[#Host-only networking]], [[#Internal networking]] and info spread out across other sections) should not be described alongside the various virtual interfaces implementations, such as [[#User-mode networking]], [[#Tap networking with QEMU]], [[#Networking with VDE2]].}}
$ VBoxManage internalcommands createrawvmdk -filename ''/path/to/file.vmdk'' -rawdisk /dev/hda


The performance of virtual networking should be better with tap devices and bridges than with user-mode networking or vde because tap devices and bridges are implemented in-kernel.
will be rejected by QEMU with the error message


In addition, networking performance can be improved by assigning virtual machines a [http://wiki.libvirt.org/page/Virtio virtio] network device rather than the default emulation of an e1000 NIC. See [[#Installing virtio drivers]] for more information.
Unsupported image type 'partitionedDevice'


=== Link-level address caveat ===
Note that {{ic|VBoxManage}} creates two files, {{ic|''file.vmdk''}} and {{ic|''file-pt.vmdk''}}, the latter being a copy of the MBR, to which the text file {{ic|file.vmdk}} points. Read operations outside the target partition or the MBR would give zeros, while written data would be discarded.


By giving the {{ic|-net nic}} argument to QEMU, it will, by default, assign a virtual machine a network interface with the link-level address {{ic|52:54:00:12:34:56}}. However, when using bridged networking with multiple virtual machines, it is essential that each virtual machine has a unique link-level (MAC) address on the virtual machine side of the tap device. Otherwise, the bridge will not work correctly, because it will receive packets from multiple sources that have the same link-level address. This problem occurs even if the tap devices themselves have unique link-level addresses because the source link-level address is not rewritten as packets pass through the tap device.
===== Using the device-mapper =====


Make sure that each virtual machine has a unique link-level address, but it should always start with {{ic|52:54:}}. Use the following option, replacing each ''X'' with an arbitrary hexadecimal digit:
A method that is similar to the use of a VMDK descriptor file uses the [https://docs.kernel.org/admin-guide/device-mapper/index.html device-mapper] to prepend a loop device attached to the MBR file to the target partition. In case we do not need our virtual disk to have the same size as the original, we first create a file to hold the MBR:


  $ qemu-system-x86_64 -net nic,macaddr=52:54:''XX:XX:XX:XX'' -net vde ''disk_image''
  $ dd if=/dev/zero of=''/path/to/mbr'' count=2048


Generating unique link-level addresses can be done in several ways:
Here, a 1 MiB (2048 * 512 bytes) file is created in accordance with partition alignment policies used by modern disk partitioning tools. For compatibility with older partitioning software, 63 sectors instead of 2048 might be required. The MBR only needs a single 512 bytes block, the additional free space can be used for a BIOS boot partition and, in the case of a hybrid partitioning scheme, for a GUID Partition Table. Then, we attach a loop device to the MBR file:


* Manually specify unique link-level address for each NIC. The benefit is that the DHCP server will assign the same IP address each time the virtual machine is run, but it is unusable for a large number of virtual machines.
{{hc|# losetup --show -f ''/path/to/mbr''|/dev/loop0}}
* Generate a random link-level address each time the virtual machine is run. Practically zero probability of collisions, but the downside is that the DHCP server will assign a different IP address each time. You can use the following command in a script to generate a random link-level address into a {{ic|macaddr}} variable:


{{bc|1=
In this example, the resulting device is {{ic|/dev/loop0}}. The device mapper is now used to join the MBR and the partition:
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))
qemu-system-x86_64 -net nic,macaddr="$macaddr" -net vde ''disk_image''
}}


* Use the following script {{ic|qemu-mac-hasher.py}} to generate the link-level address from the virtual machine name using a hashing function. Given that the names of virtual machines are unique, this method combines the benefits of the aforementioned methods: it generates the same link-level address each time the script is run, yet it preserves the practically zero probability of collisions.
# echo "0 2048 linear /dev/loop0 0
2048 `blockdev --getsz /dev/hda''N''` linear /dev/hda''N'' 0" | dmsetup create qemu


{{hc|qemu-mac-hasher.py|2=
The resulting {{ic|/dev/mapper/qemu}} is what we will use as a QEMU raw disk image. Additional steps are required to create a partition table (see the section that describes the use of a linear RAID for an example) and boot loader code on the virtual disk (which will be stored in {{ic|''/path/to/mbr''}}).
#!/usr/bin/env python
# usage: qemu-mac-hasher.py <VMName>


import sys
The following setup is an example where the position of {{ic|/dev/hda''N''}} on the virtual disk is to be the same as on the physical disk and the rest of the disk is hidden, except for the MBR, which is provided as a copy:
import zlib


crc = str(hex(zlib.crc32(sys.argv[1].encode("utf-8"))))[-8:]
# dd if=/dev/hda count=1 of=''/path/to/mbr''
print("52:54:%s%s:%s%s:%s%s:%s%s" % tuple(crc))}}
# loop=`losetup --show -f ''/path/to/mbr''`
# start=`blockdev --report /dev/hda''N'' | tail -1 | awk '{print $5}'`
# size=`blockdev --getsz /dev/hda''N''`
# disksize=`blockdev --getsz /dev/hda`
# echo "0 1 linear $loop 0
1 $((start-1)) zero
$start $size linear /dev/hda''N'' 0
$((start+size)) $((disksize-start-size)) zero" | dmsetup create qemu


In a script, you can use for example:
The table provided as standard input to {{ic|dmsetup}} has a similar format as the table in a VMDK descriptor file produced by {{ic|VBoxManage}} and can alternatively be loaded from a file with {{ic|dmsetup create qemu --table ''table_file''}}. To the virtual machine, only {{ic|/dev/hda''N''}} is accessible, while the rest of the hard disk reads as zeros and discards written data, except for the first sector. We can print the table for {{ic|/dev/mapper/qemu}} with {{ic|dmsetup table qemu}} (use {{ic|udevadm info -rq name /sys/dev/block/''major'':''minor''}} to translate {{ic|''major'':''minor''}} to the corresponding {{ic|/dev/''blockdevice''}} name). Use {{ic|dmsetup remove qemu}} and {{ic|losetup -d $loop}} to delete the created devices.


vm_name="''VM Name''"
A situation where this example would be useful is an existing Windows XP installation in a multi-boot configuration and maybe a hybrid partitioning scheme (on the physical hardware, Windows XP could be the only operating system that uses the MBR partition table, while more modern operating systems installed on the same computer could use the GUID Partition Table). Windows XP supports hardware profiles, so that the same installation can be used with different hardware configurations alternately (in this case bare metal vs. virtual) with Windows needing to install drivers for newly detected hardware only once per profile. Note that in this example the boot loader code in the copied MBR needs to be updated to directly load Windows XP from {{ic|/dev/hda''N''}} instead of trying to start the multi-boot capable boot loader (like GRUB) present in the original system. Alternatively, a copy of the boot partition containing the boot loader installation can be included in the virtual disk the same way as the MBR.
qemu-system-x86_64 -name "$vm_name" -net nic,macaddr=$(qemu-mac-hasher.py "$vm_name") -net vde ''disk_image''


=== User-mode networking ===
===== Using a linear RAID =====


By default, without any {{ic|-netdev}} arguments, QEMU will use user-mode networking with a built-in DHCP server. Your virtual machines will be assigned an IP address when they run their DHCP client, and they will be able to access the physical host's network through IP masquerading done by QEMU.
{{Out of date|[https://github.com/torvalds/linux/commit/849d18e27be9a1253f2318cb4549cc857219d991 CONFIG_MD_LINEAR Removal] Linear RAID has been deprecated since 2021 and removed on Kernel Version 6.8.}}


{{note|This only works with the TCP and UDP protocols, so ICMP, including {{ic|ping}}, will not work. Do not use {{ic|ping}} to test network connectivity. To make ping work in the guest, see [[Sysctl#Allow unprivileged users to create IPPROTO_ICMP sockets]].}}
You can also do this using software [[RAID]] in linear mode (you need the {{ic|linear.ko}} kernel driver) and a loopback device:


This default configuration allows your virtual machines to easily access the Internet, provided that the host is connected to it, but the virtual machines will not be directly visible on the external network, nor will virtual machines be able to talk to each other if you start up more than one concurrently.
First, you create a small file to hold the MBR:


QEMU's user-mode networking can offer more capabilities such as built-in TFTP or SMB servers, redirecting host ports to the guest (for example to allow SSH connections to the guest) or attaching guests to VLANs so that they can talk to each other. See the QEMU documentation on the {{ic|-net user}} flag for more details.
$ dd if=/dev/zero of=''/path/to/mbr'' count=32


However, user-mode networking has limitations in both utility and performance. More advanced network configurations require the use of tap devices or other methods.
Here, a 16 KiB (32 * 512 bytes) file is created. It is important not to make it too small (even if the MBR only needs a single 512-byte block), since the smaller it is, the smaller the chunk size of the software RAID device will have to be, which could have an impact on performance. Then, you set up a loopback device to the MBR file:


{{Note|If the host system uses [[systemd-networkd]], make sure to symlink the {{ic|/etc/resolv.conf}} file as described in [[systemd-networkd#Required services and setup]], otherwise the DNS lookup in the guest system will not work.}}
# losetup -f ''/path/to/mbr''


=== Tap networking with QEMU ===
Let us assume the resulting device is {{ic|/dev/loop0}} (i.e. no other loopback devices were already in use). The next step is to create the "merged" MBR + {{ic|/dev/hda''N''}} disk image using software RAID:


[[wikipedia:TUN/TAP|Tap devices]] are a Linux kernel feature that allows you to create virtual network interfaces that appear as real network interfaces. Packets sent to a tap interface are delivered to a userspace program, such as QEMU, that has bound itself to the interface.
# modprobe linear
# mdadm --build --verbose /dev/md0 --chunk=16 --level=linear --raid-devices=2 /dev/loop0 /dev/hda''N''
 
The resulting {{ic|/dev/md0}} is what you will use as a QEMU raw disk image (do not forget to set the permissions so that the emulator can access it). The last (and somewhat tricky) step is to set the disk configuration (disk geometry and partition table) so that the primary partition start point in the MBR matches the one of {{ic|/dev/hda''N''}} inside {{ic|/dev/md0}} (an offset of exactly 16 * 512 = 16384 bytes in this example). Do this using {{ic|fdisk}} on the host machine, not in the emulator: the default raw disk detection routine from QEMU often results in non-kibibyte-roundable offsets (such as 31.5 KiB, as in the previous section) that cannot be managed by the software RAID code. Hence, from the host:


QEMU can use tap networking for a virtual machine so that packets sent to the tap interface will be sent to the virtual machine and appear as coming from a network interface (usually an Ethernet interface) in the virtual machine. Conversely, everything that the virtual machine sends through its network interface will appear on the tap interface.
# fdisk /dev/md0


Tap devices are supported by the Linux bridge drivers, so it is possible to bridge together tap devices with each other and possibly with other host interfaces such as {{ic|eth0}}. This is desirable if you want your virtual machines to be able to talk to each other, or if you want other machines on your LAN to be able to talk to the virtual machines.
Press {{ic|X}} to enter the expert menu. Set the number of 's'ectors per track so that the size of one cylinder matches the size of your MBR file. For two heads and a sector size of 512, the number of sectors per track should be 16, so we get cylinders of size 2x16x512=16k.


{{Warning|If you bridge together a tap device and some host interface, such as {{ic|eth0}}, your virtual machines will appear directly on the external network, which will expose them to possible attack. Depending on what resources your virtual machines have access to, you may need to take all the [[Firewalls|precautions]] you normally would take in securing a computer to secure your virtual machines. If the risk is too great, if the virtual machines have few resources, or if you set up multiple virtual machines, a better solution might be to use [[#Host-only networking|host-only networking]] and set up NAT. In this case you only need one firewall on the host instead of multiple firewalls for each guest.}}
Now, press {{ic|R}} to return to the main menu.


As indicated in the user-mode networking section, tap devices offer higher networking performance than user-mode. If the guest OS supports the virtio network driver, then the networking performance will be increased considerably as well. Supposing the use of the tap0 device, that the virtio driver is used in the guest, and that no scripts are used to help start/stop networking, the relevant part of the QEMU command would be:
Press {{ic|P}} and check that the cylinder size is now 16k.


-device virtio-net,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no
Now, create a single primary partition corresponding to {{ic|/dev/hda''N''}}. It should start at cylinder 2 and end at the end of the disk (note that the number of cylinders now differs from what it was when you entered fdisk).


If you are already using a tap device with the virtio networking driver, you can boost the networking performance further by enabling vhost:
Finally, 'w'rite the result to the file: you are done. You now have a partition you can mount directly from your host, as well as part of a QEMU disk image:


  -device virtio-net,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no,vhost=on
  $ qemu-system-x86_64 -hdc /dev/md0 ''[...]''


See [https://web.archive.org/web/20160222161955/http://www.linux-kvm.com:80/content/how-maximize-virtio-net-performance-vhost-net] for more information.
You can, of course, safely set any boot loader on this disk image using QEMU, provided the original {{ic|/dev/hda''N''}} partition contains the necessary tools.


==== Host-only networking ====
===== Using a Network Block Device =====


If the bridge is given an IP address and traffic destined for it is allowed, but no real interface (e.g. {{ic|eth0}}) is connected to the bridge, then the virtual machines will be able to talk to each other and the host system. However, they will not be able to talk to anything on the external network, provided that you do not set up IP masquerading on the physical host. This configuration is called ''host-only networking'' by other virtualization software such as [[VirtualBox]].
With [https://docs.kernel.org/admin-guide/blockdev/nbd.html Network Block Device], Linux can use a remote server as one of its block devices. You may use {{ic|nbd-server}} (from the {{Pkg|nbd}} package) to create an MBR wrapper for QEMU.


{{Tip|
Assuming you have already set up your MBR wrapper file like above, rename it to {{ic|wrapper.img.0}}. Then create a symbolic link named {{ic|wrapper.img.1}} in the same directory, pointing to your partition. Then put the following script in the same directory:
* If you want to set up IP masquerading, e.g. NAT for virtual machines, see the [[Internet sharing#Enable NAT]] page.
* See [[Network bridge]] for information on creating a bridge.
* You may want to have a DHCP server running on the bridge interface to service the virtual network. For example, to use the {{ic|172.20.0.1/16}} subnet with [[dnsmasq]] as the DHCP server:


{{bc|1=
{{bc|1=
# ip addr add 172.20.0.1/16 dev br0
#!/bin/sh
# ip link set br0 up
dir="$(realpath "$(dirname "$0")")"
# dnsmasq --interface=br0 --bind-interfaces --dhcp-range=172.20.0.2,172.20.255.254
cat >wrapper.conf <<EOF
}}
[generic]
allowlist = true
listenaddr = 127.713705
port = 10809
 
[wrap]
exportname = $dir/wrapper.img
multifile = true
EOF
 
nbd-server \
    -C wrapper.conf \
    -p wrapper.pid \
    "$@"
}}
}}


==== Internal networking ====
The {{ic|.0}} and {{ic|.1}} suffixes are essential; the rest can be changed. After running the above script (which you may need to do as root to make sure nbd-server is able to access the partition), you can launch QEMU with:


If you do not give the bridge an IP address and add an [[iptables]] rule to drop all traffic to the bridge in the INPUT chain, then the virtual machines will be able to talk to each other, but not to the physical host or to the outside network. This configuration is called ''internal networking'' by other virtualization software such as [[VirtualBox]]. You will need to either assign static IP addresses to the virtual machines or run a DHCP server on one of them.
qemu-system-x86_64 -drive file=nbd:127.713705:10809:exportname=wrap ''[...]''


By default, iptables drops packets in the bridge network. You may need to use an iptables rule like the following to allow packets on a bridged network:
=== Using an entire physical disk device inside the virtual machine ===


# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
{{Style|Duplicates [[#Using any real partition as the single primary partition of a hard disk image]], libvirt instructions do not belong to this page.}}


==== Bridged networking using qemu-bridge-helper ====
You may have a second disk with a different OS (like Windows) on it and may want to gain the ability to also boot it inside a virtual machine.
Since the disk access is raw, the disk will perform quite well inside the virtual machine.


{{Note|This method is available since QEMU 1.1, see http://wiki.qemu.org/Features/HelperNetworking.}}
==== Windows virtual machine boot prerequisites ====


This method does not require a start-up script and readily accommodates multiple taps and multiple bridges. It uses the {{ic|/usr/lib/qemu/qemu-bridge-helper}} binary, which allows creating tap devices on an existing bridge.
Be sure to install the [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/ virtio drivers] inside the OS on that disk before trying to boot it in the virtual machine.
For Win 7 use version [https://askubuntu.com/questions/1310440/using-virtio-win-drivers-with-win7-sp1-x64 0.1.173-4].
Some individual drivers from newer virtio builds may be used on Win 7, but you will have to install them manually via the device manager.
For Win 10 you can use the latest virtio build.


{{Tip|See [[Network bridge]] for information on creating a bridge.}}
===== Set up the Windows disk interface drivers =====


First, create a configuration file containing the names of all bridges to be used by QEMU:
You may get a {{ic|0x0000007B}} blue screen when trying to boot the virtual machine. This means Windows cannot access the drive during the early boot stage because the disk interface driver it would need for that is not loaded or is set to start manually.


{{hc|/etc/qemu/bridge.conf|
The solution is to [https://superuser.com/a/1032769 enable these drivers to start at boot].
allow ''bridge0''
allow ''bridge1''
...}}


Now start the VM. The most basic usage would be:
In {{ic|HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services}}, find the keys {{ic|aliide, amdide, atapi, cmdide, iastor (may not exist), iastorV, intelide, LSI_SAS, msahci, pciide and viaide}}.
Inside each of those, set the {{ic|start}} value to {{ic|0}} in order to enable the driver at boot.
If your drive is a PCIe NVMe drive, also enable that driver (should it exist).


$ qemu-system-x86_64 -net nic -net bridge,br=''bridge0'' ''[...]''
==== Find the unique path of your disk ====


With multiple taps, the most basic usage requires specifying the VLAN for all additional NICs:
Run {{ic|ls /dev/disk/by-id/}}: there you pick out the ID of the drive you want to insert into the virtual machine, for example {{ic|ata-TS512GMTS930L_C199211383}}.
Now prepend {{ic|/dev/disk/by-id/}} to that ID, so you get {{ic|/dev/disk/by-id/ata-TS512GMTS930L_C199211383}}.
That is the unique path to that disk.
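
To double-check which device node the ID currently resolves to, {{ic|readlink}} can be used (output is illustrative):

{{hc|$ readlink -f /dev/disk/by-id/ata-TS512GMTS930L_C199211383|/dev/sdb}}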


$ qemu-system-x86_64 -net nic -net bridge,br=''bridge0'' -net nic,vlan=1 -net bridge,vlan=1,br=''bridge1'' ''[...]''
==== Add the disk in QEMU CLI ====


==== Creating bridge manually ====
On the QEMU command line, that would be:


{{Style|This section needs serious cleanup and may contain out-of-date information.}}
{{ic|1=-drive file=/dev/disk/by-id/ata-TS512GMTS930L_C199211383,format=raw,media=disk}}


{{Tip|Since QEMU 1.1, the [http://wiki.qemu.org/Features/HelperNetworking network bridge helper] can set tun/tap up for you without the need for additional scripting. See [[#Bridged networking using qemu-bridge-helper]].}}
Just modify {{ic|file{{=}}}} to be the unique path of your drive.
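
Putting it together, a minimal sketch of a complete invocation (the disk ID and memory size are illustrative):

 $ qemu-system-x86_64 -enable-kvm -m 4G -drive file=/dev/disk/by-id/ata-TS512GMTS930L_C199211383,format=raw,media=disk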


The following describes how to bridge a virtual machine to a host interface such as {{ic|eth0}}, which is probably the most common configuration. This configuration makes it appear that the virtual machine is located directly on the external network, on the same Ethernet segment as the physical host machine.
==== Add the disk in libvirt ====


We will replace the normal Ethernet adapter with a bridge adapter and bind the normal Ethernet adapter to it.
In libvirt XML, that translates to:


* Install {{Pkg|bridge-utils}}, which provides {{ic|brctl}} to manipulate bridges.
{{hc|$ virsh edit ''vmname''|<nowiki>
...
    <disk type="block" device="disk">
      <driver name="qemu" type="raw" cache="none" io="native"/>
      <source dev="/dev/disk/by-id/ata-TS512GMTS930L_C199211383"/>
      <target dev="sda" bus="sata"/>
      <address type="drive" controller="0" bus="0" target="0" unit="0"/>
    </disk>
...
</nowiki>}}


* Enable IPv4 forwarding:
Just modify "source dev" to be the unique path of your drive.
# sysctl -w net.ipv4.ip_forward=1


To make the change permanent, change {{ic|1=net.ipv4.ip_forward = 0}} to {{ic|1=net.ipv4.ip_forward = 1}} in {{ic|/etc/sysctl.d/99-sysctl.conf}}.
==== Add the disk in virt-manager ====


* Load the {{ic|tun}} module and configure it to be loaded on boot. See [[Kernel modules]] for details.
When creating a virtual machine, select "import existing drive" and just paste that unique path.
If you already have the virtual machine, add a device of type storage, then select or create custom storage.
Now paste the unique path.


* Now create the bridge. See [[Bridge with netctl]] for details. Remember to name your bridge as {{ic|br0}}, or change the scripts below to your bridge's name.
== Networking ==


* Create the script that QEMU uses to bring up the tap adapter with {{ic|root:kvm}} 750 permissions:
{{Style|Network topologies (sections [[#Host-only networking]], [[#Internal networking]] and info spread out across other sections) should not be described alongside the various virtual interfaces implementations, such as [[#User-mode networking]], [[#Tap networking with QEMU]], [[#Networking with VDE2]].}}


{{hc|/etc/qemu-ifup|<nowiki>
The performance of virtual networking should be better with tap devices and bridges than with user-mode networking or vde because tap devices and bridges are implemented in-kernel.
#!/bin/sh


echo "Executing /etc/qemu-ifup"
In addition, networking performance can be improved by assigning virtual machines a [https://wiki.libvirt.org/page/Virtio virtio] network device rather than the default emulation of an e1000 NIC. See [[#Installing virtio drivers]] for more information.
echo "Bringing up $1 for bridged mode..."
sudo /usr/bin/ip link set $1 up promisc on
echo "Adding $1 to br0..."
sudo /usr/bin/brctl addif br0 $1
sleep 2
</nowiki>}}


* Create the script that QEMU uses to bring down the tap adapter in {{ic|/etc/qemu-ifdown}} with {{ic|root:kvm}} 750 permissions:
=== Link-level address caveat ===
{{hc|/etc/qemu-ifdown|<nowiki>
#!/bin/sh


echo "Executing /etc/qemu-ifdown"
By giving the {{ic|-net nic}} argument to QEMU, it will, by default, assign a virtual machine a network interface with the link-level address {{ic|52:54:00:12:34:56}}. However, when using bridged networking with multiple virtual machines, it is essential that each virtual machine has a unique link-level (MAC) address on the virtual machine side of the tap device. Otherwise, the bridge will not work correctly, because it will receive packets from multiple sources that have the same link-level address. This problem occurs even if the tap devices themselves have unique link-level addresses because the source link-level address is not rewritten as packets pass through the tap device.
sudo /usr/bin/ip link set $1 down
sudo /usr/bin/brctl delif br0 $1
sudo /usr/bin/ip link delete dev $1
</nowiki>}}


* Use {{ic|visudo}} to add the following to your {{ic|sudoers}} file:
Make sure that each virtual machine has a unique link-level address, but it should always start with {{ic|52:54:}}. Use the following option, replacing each ''X'' with an arbitrary hexadecimal digit:


{{bc|<nowiki>
$ qemu-system-x86_64 -net nic,macaddr=52:54:''XX:XX:XX:XX'' -net vde ''disk_image''
Cmnd_Alias      QEMU=/usr/bin/ip,/usr/bin/modprobe,/usr/bin/brctl
%kvm    ALL=NOPASSWD: QEMU
</nowiki>}}


* You launch QEMU using the following {{ic|run-qemu}} script:
Generating unique link-level addresses can be done in several ways:


{{hc|run-qemu|<nowiki>
* Manually specify unique link-level address for each NIC. The benefit is that the DHCP server will assign the same IP address each time the virtual machine is run, but it is unusable for a large number of virtual machines.
#!/bin/bash
* Generate a random link-level address each time the virtual machine is run. Practically zero probability of collisions, but the downside is that the DHCP server will assign a different IP address each time. You can use the following command in a script to generate a random link-level address into a {{ic|macaddr}} variable:
USERID=$(whoami)


# Get name of newly created TAP device; see https://bbs.archlinux.org/viewtopic.php?pid=1285079#p1285079
{{bc|1=
precreation=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)
sudo /usr/bin/ip tuntap add user $USERID mode tap
postcreation=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)
IFACE=$(comm -13 <(echo "$precreation") <(echo "$postcreation"))
 
# This line creates a random MAC address. The downside is the DHCP server will assign a different IP address each time
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))
# Instead, uncomment and edit this line to set a static MAC address. The benefit is that the DHCP server will assign the same IP address.
qemu-system-x86_64 -net nic,macaddr="$macaddr" -net vde ''disk_image''
# macaddr='52:54:be:36:42:a9'
}}


qemu-system-x86_64 -net nic,macaddr=$macaddr -net tap,ifname="$IFACE" "$@"
* Use the following script {{ic|qemu-mac-hasher.py}} to generate the link-level address from the virtual machine name using a hashing function. Given that the names of virtual machines are unique, this method combines the benefits of the aforementioned methods: it generates the same link-level address each time the script is run, yet it preserves the practically zero probability of collisions.


sudo ip link set dev $IFACE down &> /dev/null
{{hc|qemu-mac-hasher.py|2=
sudo ip tuntap del $IFACE mode tap &> /dev/null
#!/usr/bin/env python
</nowiki>}}
# usage: qemu-mac-hasher.py <VMName>


Then to launch a VM, do something like this:
import sys
import zlib


$ run-qemu -hda ''myvm.img'' -m 512
crc = "%08x" % (zlib.crc32(sys.argv[1].encode("utf-8")) & 0xffffffff)  # zero-padded 8 hex digits
print("52:54:%s%s:%s%s:%s%s:%s%s" % tuple(crc))
}}


* It is recommended for performance and security reasons to disable the [http://ebtables.netfilter.org/documentation/bridge-nf.html firewall on the bridge]:
In a script, you can use for example:


{{hc|/etc/sysctl.d/10-disable-firewall-on-bridge.conf|<nowiki>
vm_name="''VM Name''"
net.bridge.bridge-nf-call-ip6tables = 0
qemu-system-x86_64 -name "$vm_name" -net nic,macaddr=$(qemu-mac-hasher.py "$vm_name") -net vde ''disk_image''
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
</nowiki>}}


Run {{ic|sysctl -p /etc/sysctl.d/10-disable-firewall-on-bridge.conf}} to apply the changes immediately.
=== User-mode networking ===


See the [http://wiki.libvirt.org/page/Networking#Creating_network_initscripts libvirt wiki] and [https://bugzilla.redhat.com/show_bug.cgi?id=512206 Fedora bug 512206]. If you get errors by sysctl during boot about non-existing files, make the {{ic|bridge}} module load at boot. See [[Kernel modules#Automatic module loading with systemd]].
By default, without any {{ic|-netdev}} arguments, QEMU will use user-mode networking with a built-in DHCP server. Your virtual machines will be assigned an IP address when they run their DHCP client, and they will be able to access the physical host's network through IP masquerading done by QEMU.


Alternatively, you can configure [[iptables]] to allow all traffic to be forwarded across the bridge by adding a rule like this:
{{Note|ICMPv6 will not work, as support for it is not implemented: {{ic|Slirp: external icmpv6 not supported yet}}. [[Ping]]ing an IPv6 address will not work.}}


-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
This default configuration allows your virtual machines to easily access the Internet, provided that the host is connected to it, but the virtual machines will not be directly visible on the external network, nor will virtual machines be able to talk to each other if you start up more than one concurrently.


==== Network sharing between physical device and a Tap device through iptables ====
QEMU's user-mode networking can offer more capabilities such as built-in TFTP or SMB servers, redirecting host ports to the guest (for example to allow SSH connections to the guest) or attaching guests to VLANs so that they can talk to each other. See the QEMU documentation on the {{ic|-net user}} flag for more details.
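
For example, a sketch that redirects host port 2222 to the guest's SSH port (the host port number is illustrative):

 $ qemu-system-x86_64 -nic user,hostfwd=tcp::2222-:22 ''disk_image''

The guest can then be reached with {{ic|ssh -p 2222 localhost}} once its SSH daemon is running.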


{{Merge|Internet_sharing|Duplication, not specific to QEMU.}}
However, user-mode networking has limitations in both utility and performance. More advanced network configurations require the use of tap devices or other methods.


Bridged networking works fine with a wired interface (e.g. {{ic|eth0}}) and is easy to set up. However, if the host is connected to the network through a wireless device, then bridging is not possible.
{{Note|If the host system uses [[systemd-networkd]], make sure to symlink the {{ic|/etc/resolv.conf}} file as described in [[systemd-networkd#Required services and setup]], otherwise the DNS lookup in the guest system will not work.}}


See [[Network bridge#Wireless interface on a bridge]] as a reference.
{{Tip|
* To use the virtio driver with user-mode networking, the option is: {{ic|1=-nic user,model=virtio-net-pci}}.
* You can isolate user-mode networking from the host and the outside world by adding {{ic|1=restrict=y}}, for example: {{ic|1=-net user,restrict=y}}
}}


One way to overcome that is to set up a tap device with a static IP, letting Linux automatically handle the routing for it, and then forward traffic between the tap interface and the device connected to the network through iptables rules.
=== Tap networking with QEMU ===


See [[Internet sharing]] as a reference.
[[wikipedia:TUN/TAP|Tap devices]] are a Linux kernel feature that allows you to create virtual network interfaces that appear as real network interfaces. Packets sent to a tap interface are delivered to a userspace program, such as QEMU, that has bound itself to the interface.


There you can find what is needed to share the network between devices, including tap and tun ones.  The following just hints further on some of the host configurations required.  As indicated in the reference above, the client needs to be configured for a static IP, using the IP assigned to the tap interface as the gateway. The caveat is that the DNS servers on the client might need to be manually edited if they change when changing from one host device connected to the network to another.
QEMU can use tap networking for a virtual machine so that packets sent to the tap interface will be sent to the virtual machine and appear as coming from a network interface (usually an Ethernet interface) in the virtual machine. Conversely, everything that the virtual machine sends through its network interface will appear on the tap interface.


To allow IP forwarding on every boot, one needs to add the following lines to a sysctl configuration file inside {{ic|/etc/sysctl.d}}:
Tap devices are supported by the Linux bridge drivers, so it is possible to bridge together tap devices with each other and possibly with other host interfaces such as {{ic|eth0}}. This is desirable if you want your virtual machines to be able to talk to each other, or if you want other machines on your LAN to be able to talk to the virtual machines.


net.ipv4.ip_forward = 1
{{Warning|If you bridge together a tap device and some host interface, such as {{ic|eth0}}, your virtual machines will appear directly on the external network, which will expose them to possible attack. Depending on what resources your virtual machines have access to, you may need to take all the [[Firewalls|precautions]] you normally would take in securing a computer to secure your virtual machines. If the risk is too great, if the virtual machines have few resources, or if you set up multiple virtual machines, a better solution might be to use [[#Host-only networking|host-only networking]] and set up NAT. In this case you only need one firewall on the host instead of multiple firewalls for each guest.}}
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1


The iptables rules can look like:
As indicated in the user-mode networking section, tap devices offer higher networking performance than user-mode. If the guest OS supports the virtio network driver, then the networking performance will be increased considerably as well. Supposing the use of the tap0 device, that the virtio driver is used in the guest, and that no scripts are used to help start/stop networking, the relevant part of the QEMU command would be:


  # Forwarding from/to outside
  -device virtio-net,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no
iptables -A FORWARD -i ${INT} -o ${EXT_0} -j ACCEPT
iptables -A FORWARD -i ${INT} -o ${EXT_1} -j ACCEPT
iptables -A FORWARD -i ${INT} -o ${EXT_2} -j ACCEPT
iptables -A FORWARD -i ${EXT_0} -o ${INT} -j ACCEPT
iptables -A FORWARD -i ${EXT_1} -o ${INT} -j ACCEPT
iptables -A FORWARD -i ${EXT_2} -o ${INT} -j ACCEPT
# NAT/Masquerade (network address translation)
iptables -t nat -A POSTROUTING -o ${EXT_0} -j MASQUERADE
iptables -t nat -A POSTROUTING -o ${EXT_1} -j MASQUERADE
iptables -t nat -A POSTROUTING -o ${EXT_2} -j MASQUERADE


The above supposes there are three devices connected to the network sharing traffic with one internal device, for example:
If you are already using a tap device with the virtio networking driver, you can boost the networking performance further by enabling vhost:


  INT=tap0
  -device virtio-net,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no,vhost=on
EXT_0=eth0
EXT_1=wlan0
EXT_2=tun0


The above shows forwarding that allows sharing both wired and wireless connections with the tap device.
See [https://web.archive.org/web/20160222161955/http://www.linux-kvm.com:80/content/how-maximize-virtio-net-performance-vhost-net] for more information.


The forwarding rules shown are stateless and do pure forwarding. One could restrict specific traffic, putting a firewall in place to protect the guest and others; however, that would decrease the networking performance, while a simple bridge does not include any of it.
==== Host-only networking ====


Bonus: whether the connection is wired or wireless, if one gets connected through a VPN to a remote site with a tun device (supposing the tun device opened for that connection is tun0) and the prior iptables rules are applied, then the remote connection also gets shared with the guest. This avoids the need for the guest to open its own VPN connection. Again, as the guest networking needs to be static, if connecting the host remotely this way, one will most probably need to edit the DNS servers on the guest.
If the bridge is given an IP address and traffic destined for it is allowed, but no real interface (e.g. {{ic|eth0}}) is connected to the bridge, then the virtual machines will be able to talk to each other and the host system. However, they will not be able to talk to anything on the external network, provided that you do not set up IP masquerading on the physical host. This configuration is called ''host-only networking'' by other virtualization software such as [[VirtualBox]].


=== Networking with VDE2 ===
{{Tip|
* If you want to set up IP masquerading, e.g. NAT for virtual machines, see the [[Internet sharing#Enable NAT]] page.
* See [[Network bridge]] for information on creating a bridge.
* You may want to have a DHCP server running on the bridge interface to service the virtual network. For example, to use the {{ic|172.20.0.1/16}} subnet with [[dnsmasq]] as the DHCP server:


{{Style|This section needs serious cleanup and may contain out-of-date information.}}
{{bc|1=
# ip addr add 172.20.0.1/16 dev br0
# ip link set br0 up
# dnsmasq --interface=br0 --bind-interfaces --dhcp-range=172.20.0.2,172.20.255.254
}}
}}


==== What is VDE? ====
==== Internal networking ====


VDE stands for Virtual Distributed Ethernet. It started as an enhancement of [[User-mode Linux|uml]]_switch. It is a toolbox to manage virtual networks.
If you do not give the bridge an IP address and add an [[iptables]] rule to drop all traffic to the bridge in the INPUT chain, then the virtual machines will be able to talk to each other, but not to the physical host or to the outside network. This configuration is called ''internal networking'' by other virtualization software such as [[VirtualBox]]. You will need to either assign static IP addresses to the virtual machines or run a DHCP server on one of them.


The idea is to create virtual switches, which are basically sockets, and to "plug" both physical and virtual machines into them. The configuration we show here is quite simple; however, VDE is much more powerful than this: it can plug virtual switches together, run them on different hosts and monitor the traffic in the switches. You are invited to read [https://wiki.virtualsquare.org/ the documentation of the project].
By default, iptables drops packets in the bridge network. You may need to use an iptables rule like the following to allow packets on a bridged network:


The advantage of this method is that you do not have to grant sudo privileges to your users. Regular users should not be allowed to run modprobe.
# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT


==== Basics ====
==== Bridged networking using qemu-bridge-helper ====


VDE support can be [[pacman|installed]] via the {{Pkg|vde2}} package.
This method does not require a start-up script and readily accommodates multiple taps and multiple bridges. It uses the {{ic|/usr/lib/qemu/qemu-bridge-helper}} binary, which allows creating tap devices on an existing bridge.


In our config, we use tun/tap to create a virtual interface on the host. Load the {{ic|tun}} module (see [[Kernel modules]] for details):
{{Tip|
* See [[Network bridge]] for information on creating a bridge.
* See https://wiki.qemu.org/Features/HelperNetworking for more information on QEMU's network helper.
}}


# modprobe tun
First, create a configuration file containing the names of all bridges to be used by QEMU:


Now create the virtual switch:
{{hc|/etc/qemu/bridge.conf|
allow ''br0''
allow ''br1''
...}}


# vde_switch -tap tap0 -daemon -mod 660 -group users
Make sure {{ic|/etc/qemu/}} has {{ic|755}} [[permissions]]. [https://gitlab.com/qemu-project/qemu/-/issues/515 QEMU issues] and [https://www.gns3.com/community/discussions/gns3-cannot-work-with-qemu GNS3 issues] may arise if this is not the case.
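
If necessary, this can be corrected with:

 # chmod 755 /etc/qemu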


This line creates the switch, creates {{ic|tap0}}, "plugs" it, and allows the users of the group {{ic|users}} to use it.
Now start the virtual machine. The most basic usage, running QEMU with the default network helper and the default bridge {{ic|br0}}, is:


The interface is plugged in but not configured yet. To configure it, run this command:
$ qemu-system-x86_64 -nic bridge ''[...]''


# ip addr add 192.168.100.254/24 dev tap0
Using the bridge {{ic|br1}} and the virtio driver:


Now, you just have to run KVM with these {{ic|-net}} options as a normal user:
$ qemu-system-x86_64 -nic bridge,br=''br1'',model=virtio-net-pci ''[...]''


$ qemu-system-x86_64 -net nic -net vde -hda ''[...]''
==== Creating bridge manually ====


Configure networking for your guest as you would in a physical network.
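
For example, a sketch of a static configuration inside the guest matching the {{ic|tap0}} address used above (the guest interface name and address are illustrative):

 # ip addr add 192.168.100.50/24 dev ens3
 # ip route add default via 192.168.100.254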
{{Style|This section needs serious cleanup and may contain out-of-date information.}}


{{Tip|You might want to set up NAT on tap device to access the internet from the virtual machine. See [[Internet sharing#Enable NAT]] for more information.}}
{{Tip|Since QEMU 1.1, the [https://wiki.qemu.org/Features/HelperNetworking network bridge helper] can set tun/tap up for you without the need for additional scripting. See [[#Bridged networking using qemu-bridge-helper]].}}


==== Startup scripts ====
The following describes how to bridge a virtual machine to a host interface such as {{ic|eth0}}, which is probably the most common configuration. This configuration makes it appear that the virtual machine is located directly on the external network, on the same Ethernet segment as the physical host machine.


Example of main script starting VDE:
We will replace the normal Ethernet adapter with a bridge adapter and bind the normal Ethernet adapter to it.


{{hc|/etc/systemd/scripts/qemu-network-env|<nowiki>
* Install {{Pkg|bridge-utils}}, which provides {{ic|brctl}} to manipulate bridges.
#!/bin/sh
# QEMU/VDE network environment preparation script


# The IP configuration for the tap device that will be used for
* Enable IPv4 forwarding:
# the virtual machine network:


TAP_DEV=tap0
# sysctl -w net.ipv4.ip_forward=1
TAP_IP=192.168.100.254
TAP_MASK=24
TAP_NETWORK=192.168.100.0


# Host interface
To make the change permanent, change {{ic|1=net.ipv4.ip_forward = 0}} to {{ic|1=net.ipv4.ip_forward = 1}} in {{ic|/etc/sysctl.d/99-sysctl.conf}}.
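
That is, after the change, the relevant line should read:

{{hc|/etc/sysctl.d/99-sysctl.conf|2=
net.ipv4.ip_forward = 1
}}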
NIC=eth0


case "$1" in
* Load the {{ic|tun}} module and configure it to be loaded on boot. See [[Kernel modules]] for details.
  start)
        echo -n "Starting VDE network for QEMU: "


        # If you want tun kernel module to be loaded by script uncomment here
* Optionally create the bridge. See [[Bridge with netctl]] for details. Remember to name your bridge as {{ic|br0}}, or change the scripts below to your bridge's name. In the {{ic|run-qemu}} script below, {{ic|br0}} is set up if it does not exist, as it is assumed that by default the host is not accessing the network via the bridge.
#modprobe tun 2>/dev/null
## Wait for the module to be loaded
#while ! lsmod | grep -q "^tun"; do echo "Waiting for tun device"; sleep 1; done


        # Start tap switch
* Create the script that QEMU uses to bring up the tap adapter with {{ic|root:kvm}} 750 permissions:
        vde_switch -tap "$TAP_DEV" -daemon -mod 660 -group users


        # Bring tap interface up
{{hc|/etc/qemu-ifup|<nowiki>
        ip address add "$TAP_IP"/"$TAP_MASK" dev "$TAP_DEV"
#!/bin/sh
        ip link set "$TAP_DEV" up


        # Start IP Forwarding
echo "Executing /etc/qemu-ifup"
        echo "1" > /proc/sys/net/ipv4/ip_forward
echo "Bringing up $1 for bridged mode..."
        iptables -t nat -A POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE
sudo /usr/bin/ip link set $1 up promisc on
        ;;
echo "Adding $1 to br0..."
  stop)
sudo /usr/bin/brctl addif br0 $1
        echo -n "Stopping VDE network for QEMU: "
sleep 2
        # Delete the NAT rules
</nowiki>}}
        iptables -t nat -D POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE


        # Bring tap interface down
* Create the script that QEMU uses to bring down the tap adapter in {{ic|/etc/qemu-ifdown}} with {{ic|root:kvm}} 750 permissions:
        ip link set "$TAP_DEV" down
{{hc|/etc/qemu-ifdown|<nowiki>
#!/bin/sh


        # Kill VDE switch
echo "Executing /etc/qemu-ifdown"
        pgrep vde_switch | xargs kill -TERM
sudo /usr/bin/ip link set $1 down
        ;;
sudo /usr/bin/brctl delif br0 $1
  restart|reload)
sudo /usr/bin/ip link delete dev $1
        $0 stop
        sleep 1
        $0 start
        ;;
  *)
        echo "Usage: $0 {start|stop|restart|reload}"
        exit 1
esac
exit 0
</nowiki>}}
</nowiki>}}


Example of systemd service using the above script:
* Use {{ic|visudo}} to add the following to your {{ic|sudoers}} file:


{{hc|/etc/systemd/system/qemu-network-env.service|<nowiki>
{{bc|<nowiki>
[Unit]
Cmnd_Alias      QEMU=/usr/bin/ip,/usr/bin/modprobe,/usr/bin/brctl
Description=Manage VDE Switch
%kvm    ALL=NOPASSWD: QEMU
 
[Service]
Type=oneshot
ExecStart=/etc/systemd/scripts/qemu-network-env start
ExecStop=/etc/systemd/scripts/qemu-network-env stop
RemainAfterExit=yes
 
[Install]
WantedBy=multi-user.target
</nowiki>}}
</nowiki>}}


Make {{ic|qemu-network-env}} executable:
* You launch QEMU using the following {{ic|run-qemu}} script:


# chmod u+x /etc/systemd/scripts/qemu-network-env
{{hc|run-qemu|<nowiki>
#!/bin/bash
: '
e.g. with img created via:
qemu-img create -f qcow2 example.img 90G
run-qemu -cdrom archlinux-x86_64.iso -boot order=d -drive file=example.img,format=qcow2 -m 4G -enable-kvm -cpu host -smp 4
run-qemu -drive file=example.img,format=qcow2 -m 4G -enable-kvm -cpu host -smp 4
'


You can [[start]] {{ic|qemu-network-env.service}} as usual.
nicbr0() {
    sudo ip link set dev $1 promisc on up &> /dev/null
    sudo ip addr flush dev $1 scope host &>/dev/null
    sudo ip addr flush dev $1 scope site &>/dev/null
    sudo ip addr flush dev $1 scope global &>/dev/null
    sudo ip link set dev $1 master br0 &> /dev/null
}
_nicbr0() {
    sudo ip link set $1 promisc off down &> /dev/null
    sudo ip link set dev $1 nomaster &> /dev/null
}


==== Alternative method ====
HASBR0="$( ip link show | grep br0 )"
if [ -z "$HASBR0" ] ; then
    ROUTER="192.168.1.1"
    SUBNET="192.168.1."
    NIC=$(ip link show | grep en | grep 'state UP' | head -n 1 | cut -d":" -f 2 | xargs)
    IPADDR=$(ip addr show | grep -o "inet $SUBNET\([0-9]*\)" | cut -d ' ' -f2)
    sudo ip link add name br0 type bridge &> /dev/null
    sudo ip link set dev br0 up
    sudo ip addr add $IPADDR/24 brd + dev br0
    sudo ip route del default &> /dev/null
    sudo ip route add default via $ROUTER dev br0 onlink
    nicbr0 $NIC
    sudo iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
fi


If the above method does not work or you do not want to mess with kernel configs, TUN, dnsmasq, and iptables, you can do the following for the same result.
USERID=$(whoami)
precreation=$(ip tuntap list | cut -d: -f1 | sort)
sudo ip tuntap add user $USERID mode tap
postcreation=$(ip tuntap list | cut -d: -f1 | sort)
TAP=$(comm -13 <(echo "$precreation") <(echo "$postcreation"))
nicbr0 $TAP


# vde_switch -daemon -mod 660 -group users
printf -v MACADDR "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))
# slirpvde --dhcp --daemon
qemu-system-x86_64 -net nic,macaddr=$MACADDR,model=virtio \
    -net tap,ifname=$TAP,script=no,downscript=no,vhost=on \
    "$@"


Then, to start the VM with a connection to the network of the host:
_nicbr0 $TAP
sudo ip link set dev $TAP down &> /dev/null
sudo ip tuntap del $TAP mode tap


$ qemu-system-x86_64 -net nic,macaddr=52:54:00:00:EE:03 -net vde ''disk_image''
if [ -z "$HASBR0" ] ; then
    _nicbr0 $NIC
    sudo ip addr del dev br0 $IPADDR/24 &> /dev/null
    sudo ip link set dev br0 down
    sudo ip link delete br0 type bridge &> /dev/null
    sudo ip route del default &> /dev/null
    sudo ip link set dev $NIC up
    sudo ip route add default via $ROUTER dev $NIC onlink &> /dev/null
fi
</nowiki>}}


=== VDE2 Bridge ===
Then to launch a virtual machine, do something like this:


This setup is based on the diagram in [https://selamatpagicikgu.wordpress.com/2011/06/08/quickhowto-qemu-networking-using-vde-tuntap-and-bridge/ quickhowto: qemu networking using vde, tun/tap, and bridge]. Any virtual machine connected to vde is externally exposed. For example, each virtual machine can receive DHCP configuration directly from your ADSL router.
$ run-qemu -hda ''myvm.img'' -m 512


==== Basics ====
* It is recommended for performance and security reasons to disable the [https://ebtables.netfilter.org/documentation/bridge-nf.html firewall on the bridge]:


Remember that you need the {{ic|tun}} module and the {{Pkg|bridge-utils}} package.
{{hc|/etc/sysctl.d/10-disable-firewall-on-bridge.conf|<nowiki>
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
</nowiki>}}


Create the vde2/tap device:
In order to apply the parameters described above on boot, you will also need to load the {{ic|br_netfilter}} module on boot. Otherwise, the parameters will not exist when sysctl tries to modify them.
 
{{hc|/etc/modules-load.d/br_netfilter.conf|<nowiki>
br_netfilter
</nowiki>}}


# vde_switch -tap tap0 -daemon -mod 660 -group users
Run {{ic|sysctl -p /etc/sysctl.d/10-disable-firewall-on-bridge.conf}} to apply the changes immediately.
# ip link set tap0 up


Create bridge:
See the [https://wiki.libvirt.org/page/Networking#Creating_network_initscripts libvirt wiki] and [https://bugzilla.redhat.com/show_bug.cgi?id=512206 Fedora bug 512206]. If you get errors by sysctl during boot about non-existing files, make the {{ic|bridge}} module load at boot. See [[Kernel module#systemd]].


# brctl addbr br0
Alternatively, you can configure [[iptables]] to allow all traffic to be forwarded across the bridge by adding a rule like this:


Add devices:
-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT


# brctl addif br0 eth0
==== Network sharing between physical device and a Tap device through iptables ====
# brctl addif br0 tap0


And configure bridge interface:
{{Merge|Internet_sharing|Duplication, not specific to QEMU.}}


# dhcpcd br0
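
With the switch bridged to the physical network, a guest started with the VDE network backend (as earlier) can obtain its configuration, e.g. via DHCP, directly from the external network:

 $ qemu-system-x86_64 -net nic -net vde ''disk_image''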
Bridged networking works fine with a wired interface (e.g. {{ic|eth0}}) and is easy to set up. However, if the host is connected to the network through a wireless device, then bridging is not possible.


==== Startup scripts ====
See [[Network bridge#Wireless interface on a bridge]] as a reference.


All devices must be set up. And only the bridge needs an IP address. For physical devices on the bridge (e.g. {{ic|eth0}}), this can be done with [[netctl]] using a custom Ethernet profile with:
One way to overcome that is to setup a tap device with a static IP, making linux automatically handle the routing for it, and then forward traffic between the tap interface and the device connected to the network through iptables rules.


{{hc|/etc/netctl/ethernet-noip|2=
See [[Internet sharing]] as a reference.
Description='A more versatile static Ethernet connection'
Interface=eth0
Connection=ethernet
IP=no
}}


The following custom systemd service can be used to create and activate a VDE2 tap interface for users in the {{ic|users}} user group.
There you can find what is needed to share the network between devices, included tap and tun ones. The following just hints further on some of the host configurations required. As indicated in the reference above, the client needs to be configured for a static IP, using the IP assigned to the tap interface as the gateway. The caveat is that the DNS servers on the client might need to be manually edited if they change when changing from one host device connected to the network to another.


{{hc|/etc/systemd/system/vde2@.service|2=
To allow IP forwarding on every boot, one need to add the following lines to sysctl configuration file inside {{ic|/etc/sysctl.d}}:
[Unit]
Description=Network Connectivity for %i
Wants=network.target
Before=network.target


 net.ipv4.ip_forward = 1
 net.ipv6.conf.default.forwarding = 1
 net.ipv6.conf.all.forwarding = 1

The iptables rules can look like:

 # Forwarding from/to outside
 iptables -A FORWARD -i ${INT} -o ${EXT_0} -j ACCEPT
 iptables -A FORWARD -i ${INT} -o ${EXT_1} -j ACCEPT
 iptables -A FORWARD -i ${INT} -o ${EXT_2} -j ACCEPT
 iptables -A FORWARD -i ${EXT_0} -o ${INT} -j ACCEPT
 iptables -A FORWARD -i ${EXT_1} -o ${INT} -j ACCEPT
 iptables -A FORWARD -i ${EXT_2} -o ${INT} -j ACCEPT
 # NAT/Masquerade (network address translation)
 iptables -t nat -A POSTROUTING -o ${EXT_0} -j MASQUERADE
 iptables -t nat -A POSTROUTING -o ${EXT_1} -j MASQUERADE
 iptables -t nat -A POSTROUTING -o ${EXT_2} -j MASQUERADE


The above assumes there are 3 devices connected to the network sharing traffic with one internal device, where for example:

 INT=tap0
 EXT_0=eth0
 EXT_1=wlan0
 EXT_2=tun0

This shows a forwarding setup that allows sharing wired and wireless connections with the tap device.

The forwarding rules shown are stateless and for pure forwarding. One could think of restricting specific traffic, putting a firewall in place to protect the guest and others. However, that would decrease the networking performance, while a simple bridge does not include any of it.

Bonus: Whether the connection is wired or wireless, if one gets connected through a VPN to a remote site with a tun device, supposing the tun device opened for that connection is {{ic|tun0}} and the prior iptables rules are applied, then the remote connection gets shared with the guest as well. This avoids the need for the guest to also open a VPN connection. Again, as the guest networking needs to be static, if connecting the host remotely this way, one most probably will need to edit the DNS servers on the guest.
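As a sketch of the corresponding guest-side setup (the interface name {{ic|ens3}} and the addresses are only illustrative; use the IP actually assigned to the tap interface as the gateway, and set the guest's DNS servers manually as noted above):

 # ip addr add 192.168.100.10/24 dev ens3
 # ip route add default via 192.168.100.254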


=== Networking with VDE2 ===

{{Style|This section needs serious cleanup and may contain out-of-date information.}}


==== What is VDE? ====

VDE stands for Virtual Distributed Ethernet. It started as an enhancement of [[User-mode Linux|uml]]_switch. It is a toolbox to manage virtual networks.

The idea is to create virtual switches, which are basically sockets, and to "plug" both physical and virtual machines into them. The configuration we show here is quite simple; however, VDE is much more powerful than this: it can plug virtual switches together, run them on different hosts and monitor the traffic in the switches. You are invited to read [https://wiki.virtualsquare.org/ the documentation of the project].

The advantage of this method is that you do not have to add sudo privileges to your users. Regular users should not be allowed to run modprobe.


==== Basics ====

VDE support can be [[install]]ed via the {{Pkg|vde2}} package.

In our config, we use tun/tap to create a virtual interface on the host. Load the {{ic|tun}} module (see [[Kernel modules]] for details):

 # modprobe tun

Now create the virtual switch:

 # vde_switch -tap tap0 -daemon -mod 660 -group users

This line creates the switch, creates {{ic|tap0}}, "plugs" it, and allows the users of the group {{ic|users}} to use it.

The interface is plugged in but not configured yet. To configure it, run this command:

 # ip addr add 192.168.100.254/24 dev tap0

Now, you just have to run KVM with these {{ic|-net}} options as a normal user:

 $ qemu-system-x86_64 -net nic -net vde -hda ''[...]''

Configure networking for your guest as you would do in a physical network.

{{Tip|You might want to set up NAT on the tap device to access the internet from the virtual machine. See [[Internet sharing#Enable NAT]] for more information.}}


==== Startup scripts ====

Example of main script starting VDE:

{{hc|/etc/systemd/scripts/qemu-network-env|<nowiki>
#!/bin/sh
# QEMU/VDE network environment preparation script

# The IP configuration for the tap device that will be used for
# the virtual machine network:

TAP_DEV=tap0
TAP_IP=192.168.100.254
TAP_MASK=24
TAP_NETWORK=192.168.100.0

# Host interface
NIC=eth0

case "$1" in
  start)
        echo -n "Starting VDE network for QEMU: "

        # If you want tun kernel module to be loaded by script uncomment here
        #modprobe tun 2>/dev/null
        ## Wait for the module to be loaded
        #while ! lsmod | grep -q "^tun"; do echo "Waiting for tun device"; sleep 1; done

        # Start tap switch
        vde_switch -tap "$TAP_DEV" -daemon -mod 660 -group users

        # Bring tap interface up
        ip address add "$TAP_IP"/"$TAP_MASK" dev "$TAP_DEV"
        ip link set "$TAP_DEV" up

        # Start IP Forwarding
        echo "1" > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE
        ;;
  stop)
        echo -n "Stopping VDE network for QEMU: "
        # Delete the NAT rules
        iptables -t nat -D POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE

        # Bring tap interface down
        ip link set "$TAP_DEV" down

        # Kill VDE switch
        pgrep vde_switch | xargs kill -TERM
        ;;
  restart|reload)
        $0 stop
        sleep 1
        $0 start
        ;;
  *)
        echo "Usage: $0 {start|stop|restart|reload}"
        exit 1
esac
exit 0
</nowiki>}}


Example of systemd service using the above script:

{{hc|/etc/systemd/system/qemu-network-env.service|<nowiki>
[Unit]
Description=Manage VDE Switch

[Service]
Type=oneshot
ExecStart=/etc/systemd/scripts/qemu-network-env start
ExecStop=/etc/systemd/scripts/qemu-network-env stop
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
</nowiki>}}


Change permissions for {{ic|qemu-network-env}} to be [[executable]].

You can [[start]] {{ic|qemu-network-env.service}} as usual.
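For example, to start it now and enable it on every boot (a standard ''systemctl'' invocation, shown here for completeness):

 # systemctl enable --now qemu-network-env.service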


==== Alternative method ====

If the above method does not work or you do not want to mess with kernel configs, TUN, dnsmasq, and iptables, you can do the following for the same result.

 # vde_switch -daemon -mod 660 -group users
 # slirpvde --dhcp --daemon

Then, to start the virtual machine with a connection to the network of the host:

 $ qemu-system-x86_64 -net nic,macaddr=52:54:00:00:EE:03 -net vde ''disk_image''


=== VDE2 Bridge ===

Based on the diagram in [https://selamatpagicikgu.wordpress.com/2011/06/08/quickhowto-qemu-networking-using-vde-tuntap-and-bridge/ quickhowto: qemu networking using vde, tun/tap, and bridge]. Any virtual machine connected to vde is externally exposed. For example, each virtual machine can receive DHCP configuration directly from your ADSL router.

==== Basics ====

Remember that you need the {{ic|tun}} module and the {{Pkg|bridge-utils}} package.

Create the vde2/tap device:

 # vde_switch -tap tap0 -daemon -mod 660 -group users
 # ip link set tap0 up

Create the bridge:

 # brctl addbr br0

Add devices:

 # brctl addif br0 eth0
 # brctl addif br0 tap0

And configure the bridge interface:

 # dhcpcd br0


==== Startup scripts ====

All devices must be set up, and only the bridge needs an IP address. For physical devices on the bridge (e.g. {{ic|eth0}}), this can be done with [[netctl]] using a custom Ethernet profile with:

{{hc|/etc/netctl/ethernet-noip|2=
Description='A more versatile static Ethernet connection'
Interface=eth0
Connection=ethernet
IP=no
}}

The following custom systemd service can be used to create and activate a VDE2 tap interface for users in the {{ic|users}} user group.

{{hc|/etc/systemd/system/vde2@.service|2=
[Unit]
Description=Network Connectivity for %i
Wants=network.target
Before=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/vde_switch -tap %i -daemon -mod 660 -group users
ExecStart=/usr/bin/ip link set dev %i up
ExecStop=/usr/bin/ip addr flush dev %i
ExecStop=/usr/bin/ip link set dev %i down

[Install]
WantedBy=multi-user.target
}}

And finally, you can create the [[Bridge with netctl|bridge interface with netctl]].


=== Shorthand configuration ===

If you are using QEMU with various networking options a lot, you have probably created a lot of {{ic|-netdev}} and {{ic|-device}} argument pairs, which gets quite repetitive. You can instead use the {{ic|-nic}} argument to combine {{ic|-netdev}} and {{ic|-device}} together, so that, for example, these arguments:

 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no,vhost=on -device virtio-net-pci,netdev=network0

become:

 -nic tap,script=no,downscript=no,vhost=on,model=virtio-net-pci

Notice the lack of network IDs, and that the device was created with {{ic|1=model=}}. The first half of the {{ic|-nic}} parameters are {{ic|-netdev}} parameters, whereas the second half (after {{ic|1=model=}}) are related to the device. The same parameters (for example, {{ic|1=smb=}}) are used. To completely disable networking, use {{ic|-nic none}}.

See [https://qemu.weilnetz.de/doc/6.0/system/net.html QEMU networking documentation] for more information on parameters you can use.
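For instance, a full invocation using the shorthand together with user-mode networking (the {{ic|1=smb=}} share path is just an illustration) could look like:

 $ qemu-system-x86_64 -nic user,model=virtio-net-pci,smb=''/path/to/shared_dir'' ''disk_image''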


== Graphic card ==

QEMU can emulate a standard graphic card text mode using the {{ic|-display curses}} command line option. This allows you to type text and see text output directly inside a text terminal. Alternatively, {{ic|-nographic}} serves a similar purpose.

QEMU can emulate several types of VGA card. The card type is passed in the {{ic|-vga ''type''}} command line option and can be {{ic|std}}, {{ic|qxl}}, {{ic|vmware}}, {{ic|virtio}}, {{ic|cirrus}} or {{ic|none}}.

=== std ===

With {{ic|-vga std}} you can get a resolution of up to 2560 x 1600 pixels without requiring guest drivers. This is the default since QEMU 2.2.

=== qxl ===

QXL is a paravirtual graphics driver with 2D support. To use it, pass the {{ic|-vga qxl}} option and install drivers in the guest. You may want to use [[#SPICE]] for improved graphical performance when using QXL.

On Linux guests, the {{ic|qxl}} and {{ic|bochs_drm}} kernel modules must be loaded in order to gain decent performance.

Default VGA memory size for QXL devices is 16M, which is sufficient to drive resolutions up to approximately QHD (2560x1440). To enable higher resolutions, [[#Multi-monitor support|increase vga_memmb]].

=== vmware ===

Although it is a bit buggy, it performs better than std and cirrus. Install the VMware drivers {{Pkg|xf86-video-vmware}} and {{Pkg|xf86-input-vmmouse}} for Arch Linux guests.

=== virtio ===

{{ic|virtio-vga}} / {{ic|virtio-gpu}} is a paravirtual 3D graphics driver based on [https://virgil3d.github.io/ virgl]. It is mature, currently supporting only Linux guests with {{Pkg|mesa}} compiled with the option {{ic|1=gallium-drivers=virgl}}.

To enable 3D acceleration on the guest system, select this vga with {{ic|-device virtio-vga-gl}} and enable the OpenGL context in the display device with {{ic|1=-display sdl,gl=on}} or {{ic|1=-display gtk,gl=on}} for the SDL and GTK display output respectively. Successful configuration can be confirmed by looking at the kernel log in the guest:

{{hc|# dmesg {{!}} grep drm |
[drm] pci: virtio-vga detected
[drm] virgl 3d acceleration enabled
}}
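Putting it together, a minimal sketch of an accelerated guest launch (assuming KVM is available on the host) might be:

 $ qemu-system-x86_64 -enable-kvm -device virtio-vga-gl -display gtk,gl=on ''disk_image''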


=== cirrus ===

The cirrus graphical adapter was the default [https://wiki.qemu.org/ChangeLog/2.2#VGA before 2.2]. It [https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/ should not] be used on modern systems.

=== none ===

This is like a PC that has no VGA card at all. You would not even be able to access it with the {{ic|-vnc}} option. Also, this is different from the {{ic|-nographic}} option, which lets QEMU emulate a VGA card but disables the SDL display.

== SPICE ==

The [https://www.spice-space.org/ SPICE project] aims to provide a complete open source solution for remote access to virtual machines in a seamless way.

=== Enabling SPICE support on the host ===

The following is an example of booting with SPICE as the remote desktop protocol, including the support for copy and paste from the host:

 $ qemu-system-x86_64 -vga qxl -device virtio-serial-pci -spice port=5930,disable-ticketing=on -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent

The parameters have the following meaning:

# {{ic|-device virtio-serial-pci}} adds a virtio-serial device
# {{ic|1=-spice port=5930,disable-ticketing=on}} sets TCP port {{ic|5930}} for spice channels listening and allows clients to connect without authentication{{Tip|Using [[wikipedia:Unix_socket|Unix sockets]] instead of TCP ports does not involve using the network stack on the host system. It does not imply that packets are encapsulated and decapsulated to use the network and the related protocol. The sockets are identified solely by the inodes on the hard drive. It is therefore considered better for performance. Use instead {{ic|1=-spice unix=on,addr=/tmp/vm_spice.socket,disable-ticketing=on}}.}}
# {{ic|1=-device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0}} opens a port for spice vdagent in the virtio-serial device,
# {{ic|1=-chardev spicevmc,id=spicechannel0,name=vdagent}} adds a spicevmc chardev for that port. It is important that the {{ic|1=chardev=}} option of the {{ic|virtserialport}} device matches the {{ic|1=id=}} option given to the {{ic|chardev}} option ({{ic|spicechannel0}} in this example). It is also important that the port name is {{ic|com.redhat.spice.0}}, because that is the namespace where vdagent is looking for in the guest. And finally, specify {{ic|1=name=vdagent}} so that spice knows what this channel is for.


=== Connecting to the guest with a SPICE client ===

A SPICE client is necessary to connect to the guest. In Arch, the following clients are available:

* {{App|virt-viewer|SPICE client recommended by the protocol developers, a subset of the virt-manager project.|https://virt-manager.org/|{{Pkg|virt-viewer}}}}
* {{App|spice-gtk|SPICE GTK client, a subset of the SPICE project. Embedded into other applications as a widget.|https://www.spice-space.org/|{{Pkg|spice-gtk}}}}

For clients that run on smartphone or on other platforms, refer to the ''Other clients'' section in [https://www.spice-space.org/download.html spice-space download].

==== Manually running a SPICE client ====

One way of connecting to a guest listening on Unix socket {{ic|/tmp/vm_spice.socket}} is to manually run the SPICE client using {{ic|$ remote-viewer spice+unix:///tmp/vm_spice.socket}} or {{ic|1=$ spicy --uri="spice+unix:///tmp/vm_spice.socket"}}, depending on the desired client. Since QEMU in SPICE mode acts similarly to a remote desktop server, it may be more convenient to run QEMU in daemon mode with the {{ic|-daemonize}} parameter.

{{Tip|
To connect to the guest through SSH tunneling, the following type of command can be used: {{bc|$ ssh -fL 5999:localhost:5930 ''my.domain.org'' sleep 10; spicy -h 127.0.0.1 -p 5999}}
This example connects ''spicy'' to the local port {{ic|5999}} which is forwarded through SSH to the guest's SPICE server located at the address ''my.domain.org'', port {{ic|5930}}.
Note the {{ic|-f}} option that requests ssh to execute the command {{ic|sleep 10}} in the background. This way, the ssh session runs while the client is active and auto-closes once the client ends.
}}
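For example, a complete Unix-socket round trip, combining the options shown above (the socket path is illustrative):

 $ qemu-system-x86_64 -daemonize -vga qxl -spice unix=on,addr=/tmp/vm_spice.socket,disable-ticketing=on ''disk_image''
 $ remote-viewer spice+unix:///tmp/vm_spice.socket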


==== Running a SPICE client with QEMU ====

QEMU can automatically start a SPICE client with an appropriate socket, if the display is set to SPICE with the {{ic|-display spice-app}} parameter. This will use the system's default SPICE client as the viewer, determined by your [[XDG MIME Applications#mimeapps.list|mimeapps.list]] files.

=== Enabling SPICE support on the guest ===

For '''Arch Linux guests''', for improved support for multiple monitors or clipboard sharing, the following packages should be installed:

* {{Pkg|spice-vdagent}}: Spice agent xorg client that enables copy and paste between client and X-session and more. (Refer to this [https://github.com/systemd/systemd/issues/18791 issue], until fixed, for workarounds to get this to work on non-GNOME desktops.)
* {{Pkg|xf86-video-qxl}}: Xorg X11 qxl video driver
* {{AUR|x-resize}}: Desktop environments other than GNOME do not react automatically when the SPICE client window is resized. This package uses a [[udev]] rule and [[xrandr]] to implement auto-resizing for all X11-based desktop environments and window managers.

For guests under '''other operating systems''', refer to the ''Guest'' section in spice-space [https://www.spice-space.org/download.html download].


=== Password authentication with SPICE ===

If you want to enable password authentication with SPICE, you need to remove {{ic|disable-ticketing}} from the {{ic|-spice}} argument and instead add {{ic|1=password=''yourpassword''}}. For example:

 $ qemu-system-x86_64 -vga qxl -spice port=5900,password=''yourpassword'' -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent

Your SPICE client should now ask for the password to be able to connect to the SPICE server.

=== TLS encrypted communication with SPICE ===

You can also configure TLS encryption for communicating with the SPICE server. First, you need to have a directory which contains the following files (the names must be exactly as indicated):

* {{ic|ca-cert.pem}}: the CA master certificate.
* {{ic|server-cert.pem}}: the server certificate signed with {{ic|ca-cert.pem}}.
* {{ic|server-key.pem}}: the server private key.

An example of generation of self-signed certificates with your own generated CA for your server is shown in the [https://www.spice-space.org/spice-user-manual.html#_generating_self_signed_certificates Spice User Manual].

Afterwards, you can run QEMU with SPICE as explained above but using the following {{ic|-spice}} argument: {{ic|1=-spice tls-port=5901,password=''yourpassword'',x509-dir=''/path/to/pki_certs''}}, where {{ic|''/path/to/pki_certs''}} is the directory path that contains the three needed files shown earlier.

It is now possible to connect to the server using {{Pkg|virt-viewer}}:

 $ remote-viewer spice://''hostname''?tls-port=5901 --spice-ca-file=''/path/to/ca-cert.pem'' --spice-host-subject="C=''XX'',L=''city'',O=''organization'',CN=''hostname''" --spice-secure-channels=all

Keep in mind that the {{ic|--spice-host-subject}} parameter needs to be set according to your {{ic|server-cert.pem}} subject. You also need to copy {{ic|ca-cert.pem}} to every client to verify the server certificate.

{{Tip|You can get the subject line of the server certificate in the correct format for {{ic|--spice-host-subject}} (with entries separated by commas) using the following command: {{bc|<nowiki>$ openssl x509 -noout -subject -in server-cert.pem | cut -d' ' -f2- | sed 's/\///' | sed 's/\//,/g'</nowiki>}}
}}

The equivalent {{Pkg|spice-gtk}} command is:

 $ spicy -h ''hostname'' -s 5901 --spice-ca-file=ca-cert.pem --spice-host-subject="C=''XX'',L=''city'',O=''organization'',CN=''hostname''" --spice-secure-channels=all


== VNC ==

One can add the {{ic|-vnc :''X''}} option to have QEMU redirect the VGA display to the VNC session. Substitute {{ic|''X''}} for the number of the display (0 will then listen on 5900, 1 on 5901...).

 $ qemu-system-x86_64 -vnc :0

An example is also provided in the [[#Starting QEMU virtual machines on boot]] section.

{{Warning|The default VNC server setup does not use any form of authentication. Any user can connect from any host.}}

=== Basic password authentication ===

An access password can be set up easily by using the {{ic|password}} option. The password must be indicated in the QEMU monitor, and connection is only possible once the password is provided.

 $ qemu-system-x86_64 -vnc :0,password -monitor stdio

In the QEMU monitor, the password is set using the command {{ic|change vnc password}} and then indicating the password.

The following command line directly runs vnc with a password:

 $ printf "change vnc password\n%s\n" MYPASSWORD | qemu-system-x86_64 -vnc :0,password -monitor stdio

{{Note|The password is limited to 8 characters and can be guessed through brute force attack. More elaborate protection is strongly recommended for a public network.}}
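You can then connect with any VNC client, for example ''vncviewer'' from {{Pkg|tigervnc}} (assuming the guest runs on the local host on display {{ic|:0}}, i.e. port 5900):

 $ vncviewer 127.0.0.1:0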


== Audio ==

=== Creating an audio backend ===

The {{ic|-audiodev}} flag sets the audio backend driver on the host and its options.

To list the available audio backend drivers:

 $ qemu-system-x86_64 -audiodev help

Their optional settings are detailed in the {{man|1|qemu}} man page.

At the bare minimum, one needs to choose an audio backend and set an id, for [[PulseAudio]] for example:

 -audiodev pa,id=snd0

=== Using the audio backend ===

==== Intel HD Audio ====

For Intel HD Audio emulation, add both controller and codec devices. To list the available Intel HDA Audio devices:

 $ qemu-system-x86_64 -device help | grep hda

Add the audio controller:

 -device ich9-intel-hda

Also, add the audio codec and map it to a host audio backend id:

 -device hda-output,audiodev=snd0
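Putting the pieces together, a complete sketch of a guest launched with PulseAudio output and Intel HD Audio emulation would be:

 $ qemu-system-x86_64 -audiodev pa,id=snd0 -device ich9-intel-hda -device hda-output,audiodev=snd0 ''disk_image''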


==== Intel 82801AA AC97 ====

For AC97 emulation, just add the audio card device and map it to a host audio backend id:

 -device AC97,audiodev=snd0

{{Note|
* If the audiodev backend is not provided, QEMU looks it up and adds it automatically; this only works for a single audiodev. For example, {{ic|-device intel-hda -device hda-duplex}} will emulate {{ic|intel-hda}} on the guest using the default audiodev backend.
* Video graphic card emulated drivers for the guest machine may also cause a problem with the sound quality. Test one by one to make it work. You can list possible options with {{ic|<nowiki>qemu-system-x86_64 -h | grep vga</nowiki>}}.
}}

==== VirtIO sound ====

VirtIO sound is also available since QEMU 8.2.0. The usage is:

 -device virtio-sound-pci,audiodev=my_audiodev -audiodev alsa,id=my_audiodev

More information can be found in the [https://qemu-project.gitlab.io/qemu/system/devices/virtio-snd.html QEMU documentation].


== Installing virtio drivers ==

QEMU offers guests the ability to use paravirtualized block and network devices using the [https://wiki.libvirt.org/page/Virtio virtio] drivers, which provide better performance and lower overhead.

* A virtio block device requires the option {{ic|-drive}} for passing a disk image, with parameter {{ic|1=if=virtio}}:

 $ qemu-system-x86_64 -drive file=''disk_image'',if='''virtio'''

* Almost the same goes for the network:

 $ qemu-system-x86_64 -nic user,model='''virtio-net-pci'''

{{Note|This will only work if the guest machine has drivers for virtio devices. Linux does, and the required drivers are included in Arch Linux, but there is no guarantee that virtio devices will work with other operating systems.}}

=== Preparing an Arch Linux guest ===

To use virtio devices after an Arch Linux guest has been installed, the following modules must be loaded in the guest: {{ic|virtio}}, {{ic|virtio_pci}}, {{ic|virtio_blk}}, {{ic|virtio_net}}, and {{ic|virtio_ring}}. For 32-bit guests, the specific "virtio" module is not necessary.

If you want to boot from a virtio disk, the initial ramdisk must contain the necessary modules. By default, this is handled by [[mkinitcpio]]'s {{ic|autodetect}} hook. Otherwise use the {{ic|MODULES}} array in {{ic|/etc/mkinitcpio.conf}} to include the necessary modules and rebuild the initial ramdisk.

{{hc|/etc/mkinitcpio.conf|2=
MODULES=(virtio virtio_blk virtio_pci virtio_net)}}

Virtio disks are recognized with the prefix {{ic|'''v'''}} (e.g. {{ic|'''v'''da}}, {{ic|'''v'''db}}, etc.); therefore, changes must be made in at least {{ic|/etc/fstab}} and {{ic|/boot/grub/grub.cfg}} when booting from a virtio disk.
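As an illustration (the filesystem and mount options here are hypothetical), an {{ic|/etc/fstab}} root entry that previously referenced {{ic|/dev/sda1}} would become:

 /dev/vda1  /  ext4  defaults  0  1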


{{Tip|When referencing disks by [[UUID]] in both {{ic|/etc/fstab}} and boot loader, nothing has to be done.}}

Further information on paravirtualization with KVM can be found [https://www.linux-kvm.org/page/Boot_from_virtio_block_device here].

You might also want to install {{Pkg|qemu-guest-agent}} to implement support for QMP commands that will enhance the hypervisor management capabilities. After installing the package, you can enable and start {{ic|qemu-guest-agent.service}}.
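For example, on the guest (a standard ''systemctl'' call, shown for completeness):

 # systemctl enable --now qemu-guest-agent.service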


=== Preparing a Windows guest ===

==== Virtio drivers for Windows ====

Windows does not come with the virtio drivers. The latest and stable versions of the drivers are regularly built by Fedora; details on downloading the drivers are given on [https://github.com/virtio-win/virtio-win-pkg-scripts/blob/master/README.md virtio-win on GitHub]. In the following sections we will mostly use the stable ISO file provided here: [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso virtio-win.iso]. Alternatively, use {{AUR|virtio-win}}.

==== Block device drivers ====

===== New Install of Windows =====

The drivers need to be loaded during installation. The procedure is to load the ISO image with the virtio drivers in a cdrom device along with the primary disk device and the Windows ISO install media:

 $ qemu-system-x86_64 ... \
 -drive file=''disk_image'',index=0,media=disk,if=virtio \
 -drive file=''windows.iso'',index=2,media=cdrom \
 -drive file=''virtio-win.iso'',index=3,media=cdrom \
 ...

During the installation, at some stage, the Windows installer will ask "Where do you want to install Windows?" and warn that no disks are found. Follow the example instructions below (based on Windows Server 2012 R2 with Update).

* Select the option ''Load Drivers''.
* Uncheck the box for ''Hide drivers that are not compatible with this computer's hardware''.
* Click the browse button and open the CDROM for the virtio iso, usually named "virtio-win-XX".
* Now browse to {{ic|E:\viostor\[your-os]\amd64}}, select it, and confirm.

You should now see your virtio disk(s) listed here, ready to be selected, formatted and installed to.


===== Change existing Windows virtual machine to use virtio =====

Modifying an existing Windows guest to boot from a virtio disk requires that the virtio driver is loaded by the guest at boot time. We therefore need to teach Windows to load the virtio driver at boot time before a disk image can be booted in virtio mode.

To achieve that, first create a new disk image that will be attached in virtio mode and trigger the search for the driver:

 $ qemu-img create -f qcow2 ''dummy.qcow2'' 1G

Run the original Windows guest with the boot disk still in IDE mode, the fake disk in virtio mode and the driver ISO image:

 $ qemu-system-x86_64 -m 4G -drive file=''disk_image'',if=ide -drive file=''dummy.qcow2'',if=virtio -cdrom virtio-win.iso

Windows will detect the fake disk and look for a suitable driver. If it fails, go to ''Device Manager'', locate the SCSI drive with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not navigate to the driver folder within the CD-ROM, simply select the CD-ROM drive and Windows will find the appropriate driver automatically (tested for Windows 7 SP1).

Request Windows to boot in safe mode next time it starts up. This can be done using the ''msconfig.exe'' tool in Windows. In safe mode all the drivers will be loaded at boot time, including the new virtio driver. Once Windows knows that the virtio driver is required at boot, it will memorize it for future boots.

Once instructed to boot in safe mode, you can turn off the virtual machine and launch it again, now with the boot disk attached in virtio mode:

 $ qemu-system-x86_64 -m 4G -drive file=''disk_image'',if=virtio

You should boot into safe mode with the virtio driver loaded. You can now return to ''msconfig.exe'', disable safe mode boot and restart Windows.

{{Note|If you encounter the blue screen of death using the {{ic|1=if=virtio}} parameter, it probably means the virtio disk driver is not installed or not loaded at boot time; reboot in safe mode and check your driver configuration.}}


==== Network drivers ====

Installing virtio network drivers is a bit easier: simply add a {{ic|-nic}} argument with a virtio model.

 $ qemu-system-x86_64 -m 4G -drive file=''windows_disk_image'',if=virtio -nic user,model=virtio-net-pci -cdrom virtio-win.iso

Windows will detect the network adapter and try to find a driver for it. If it fails, go to the ''Device Manager'', locate the network adapter with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not forget to select the checkbox which says to search for directories recursively.


==== Balloon driver ====

If you want to track the guest memory state (for example via the {{ic|virsh}} command {{ic|dommemstat}}) or change the guest's memory size at runtime (you still will not be able to change the memory size itself, but you can limit memory usage by inflating the balloon driver), you will need to install the guest balloon driver.

For this, go to ''Device Manager'', locate ''PCI standard RAM Controller'' in ''System devices'' (or an unrecognized PCI controller in ''Other devices'') and choose ''Update driver''. In the opened window, choose ''Browse my computer...'' and select the CD-ROM (do not forget the ''Include subdirectories'' checkbox). Reboot after installation. This will install the driver and you will be able to inflate the balloon (for example via the hmp command {{ic|balloon ''memory_size''}}, which will cause the balloon to take as much memory as possible in order to shrink the guest's available memory size to ''memory_size''). However, you still will not be able to track the guest memory state. In order to do this you need to install the ''Balloon'' service properly: open a command line as administrator, go to the CD-ROM, then to the ''Balloon'' directory and deeper, depending on your system and architecture. Once you are in the ''amd64'' (''x86'') directory, run {{ic|blnsrv.exe -i}}, which will perform the installation. After that, the {{ic|virsh}} command {{ic|dommemstat}} should output all supported values.
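
With the driver and service installed, the balloon can be exercised from the [[#QEMU monitor]]. A minimal sketch, assuming the guest was started with {{ic|-device virtio-balloon}} and 4096 MiB of memory; the target size passed to {{ic|balloon}} is in MiB:

 (qemu) info balloon
 balloon: actual=4096
 (qemu) balloon 2048
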
=== Preparing a FreeBSD guest ===

Install the {{ic|emulators/virtio-kmod}} port if you are using FreeBSD 8.3 or later, up until 10.0-CURRENT, where the drivers are included in the kernel. After installation, add the following to your {{ic|/boot/loader.conf}} file:

{{bc|1=
virtio_load="YES"
virtio_pci_load="YES"
virtio_blk_load="YES"
if_vtnet_load="YES"
virtio_balloon_load="YES"
}}

Then modify your {{ic|/etc/fstab}} by doing the following:

 # sed -ibak "s/ada/vtbd/g" /etc/fstab

And verify that {{ic|/etc/fstab}} is consistent. If anything goes wrong, just boot into a rescue CD and copy {{ic|/etc/fstab.bak}} back to {{ic|/etc/fstab}}.


== QEMU monitor ==

While QEMU is running, a monitor console is provided that offers several ways to interact with the running virtual machine. The QEMU monitor offers interesting capabilities such as obtaining information about the current virtual machine, hotplugging devices, creating snapshots of the current state of the virtual machine, etc. To see the list of all commands, run {{ic|help}} or {{ic|?}} in the QEMU monitor console or review the relevant section of the [https://www.qemu.org/docs/master/system/monitor.html official QEMU documentation].

=== Accessing the monitor console ===

==== Graphical view ====

When using the {{ic|std}} default graphics option, one can access the QEMU monitor by pressing {{ic|Ctrl+Alt+2}} or by clicking ''View > compatmonitor0'' in the QEMU window. To return to the virtual machine graphical view, either press {{ic|Ctrl+Alt+1}} or click ''View > VGA''.

However, the standard method of accessing the monitor is not always convenient and does not work in all graphic outputs QEMU supports.

args="-hda /dev/vg0/vm1 -serial telnet:localhost:7000,server,nowait,nodelay \
 
-monitor telnet:localhost:7100,server,nowait,nodelay -vnc :0"
==== Telnet ====


haltcmd="echo 'system_powerdown' {{!}} nc localhost 7100" # or netcat/ncat}}
To enable [[telnet]], run QEMU with the {{ic|-monitor telnet:127.0.0.1:''port'',server,nowait}} parameter. When the virtual machine is started you will be able to access the monitor via telnet:


{{hc|/etc/conf.d/qemu.d/two|2=
$ telnet 127.0.0.1 ''port''
args="-hda /srv/kvm/vm2 -serial telnet:localhost:7001,server,nowait,nodelay -vnc :1"


haltcmd="ssh powermanager@vm2 sudo poweroff"}}
{{Note|If {{ic|127.0.0.1}} is specified as the IP to listen it will be only possible to connect to the monitor from the same host QEMU is running on. If connecting from remote hosts is desired, QEMU must be told to listen {{ic|0.0.0.0}} as follows: {{ic|-monitor telnet:0.0.0.0:''port'',server,nowait}}. Keep in mind that it is recommended to have a [[firewall]] configured in this case or make sure your local network is completely trustworthy since this connection is completely unauthenticated and unencrypted.}}


==== UNIX socket ====

Run QEMU with the {{ic|-monitor unix:''socketfile'',server,nowait}} parameter. Then you can connect with either {{Pkg|socat}}, {{Pkg|nmap}} or {{Pkg|openbsd-netcat}}.

For example, if QEMU is run via:

 $ qemu-system-x86_64 -monitor unix:/tmp/monitor.sock,server,nowait ''[...]''

It is possible to connect to the monitor with:

 $ socat - UNIX-CONNECT:/tmp/monitor.sock

Or with:

 $ nc -U /tmp/monitor.sock

Alternatively with {{Pkg|nmap}}:

 $ ncat -U /tmp/monitor.sock


==== TCP ====

You can expose the monitor over TCP with the argument {{ic|-monitor tcp:127.0.0.1:''port'',server,nowait}}. Then connect with netcat, either {{Pkg|openbsd-netcat}} or {{Pkg|gnu-netcat}}, by running:

 $ nc 127.0.0.1 ''port''

{{Note|In order to connect to the TCP socket from devices other than the host QEMU is running on, you need to listen on {{ic|0.0.0.0}}, as explained in the telnet case. The same security warnings apply in this case as well.}}

==== Standard I/O ====

It is possible to access the monitor from the same terminal QEMU is run in by launching it with the argument {{ic|-monitor stdio}}.
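
For example:

 $ qemu-system-x86_64 ''[...]'' -monitor stdio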


=== Sending keyboard presses to the virtual machine using the monitor console ===

Some combinations of keys may be difficult to perform on virtual machines because the host intercepts them in some configurations (a notable example is the {{ic|Ctrl+Alt+F*}} key combinations, which change the active tty). To avoid this problem, the problematic combination of keys may be sent via the monitor console instead. Switch to the monitor and use the {{ic|sendkey}} command to forward the necessary keypresses to the virtual machine. For example:

 (qemu) sendkey ctrl-alt-f2


=== Creating and managing snapshots via the monitor console ===

{{Note|This feature will '''only''' work when the virtual machine disk image is in ''qcow2'' format. It does not work with ''raw'' images.}}

It is sometimes desirable to save the current state of a virtual machine and to be able to revert the state of the virtual machine to that of a previously saved snapshot at any time. The QEMU monitor console provides the user with the necessary utilities to create snapshots, manage them, and revert the machine state to a saved snapshot.

* Use {{ic|savevm ''name''}} in order to create a snapshot with the tag ''name''.
* Use {{ic|loadvm ''name''}} to revert the virtual machine to the state of the snapshot ''name''.
* Use {{ic|delvm ''name''}} to delete the snapshot tagged as ''name''.
* Use {{ic|info snapshots}} to see a list of saved snapshots. Snapshots are identified by both an auto-incremented ID number and a text tag (set by the user on snapshot creation).
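
For example, a typical session from the monitor console (the tag ''before_update'' is an arbitrary example):

 (qemu) savevm before_update
 (qemu) info snapshots
 (qemu) loadvm before_update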


=== Running the virtual machine in immutable mode ===

It is possible to run a virtual machine in a frozen state, so that all changes are discarded when the virtual machine is powered off, simply by running QEMU with the {{ic|-snapshot}} parameter. When the disk image is written to by the guest, changes are saved in a temporary file in {{ic|/tmp}} and discarded when QEMU halts.

However, if a machine is running in frozen mode, it is still possible to save the changes to the disk image afterwards, if desired, by using the monitor console and running the following command:

 (qemu) commit all

If snapshots are created when running in frozen mode, they will likewise be discarded as soon as QEMU exits unless changes are explicitly committed to disk.


=== Pause and power options via the monitor console ===

Some operations of a physical machine can be emulated by QEMU using monitor commands:

* {{ic|system_powerdown}} will send an ACPI shutdown request to the virtual machine. This effect is similar to the power button in a physical machine.
* {{ic|system_reset}} will reset the virtual machine similarly to a reset button in a physical machine. This operation can cause data loss and file system corruption since the virtual machine is not cleanly restarted.
* {{ic|stop}} will pause the virtual machine.
* {{ic|cont}} will resume a virtual machine previously paused.
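
For example, to pause the guest and later resume it from the monitor console:

 (qemu) stop
 (qemu) cont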


=== Taking screenshots of the virtual machine ===

Screenshots of the virtual machine graphic display can be obtained in the PPM format by running the following command in the monitor console:

 (qemu) screendump ''file.ppm''


== QEMU machine protocol ==

The QEMU machine protocol (QMP) is a JSON-based protocol which allows applications to control a QEMU instance. Similarly to the [[#QEMU monitor]], it offers ways to interact with a running machine, and the JSON protocol makes it possible to do so programmatically. The description of all the QMP commands can be found in [https://raw.githubusercontent.com/coreos/qemu/master/qmp-commands.hx qmp-commands].

=== Start QMP ===

The usual way to control the guest using the QMP protocol is to open a TCP socket when launching the machine using the {{ic|-qmp}} option. Here it is using, for example, TCP port 4444:

 $ qemu-system-x86_64 ''[...]'' -qmp tcp:localhost:4444,server,nowait

Then one way to communicate with the QMP agent is to use [[netcat]]:

{{hc|nc localhost 4444|{"QMP": {"version": {"qemu": {"micro": 0, "minor": 1, "major": 3}, "package": ""}, "capabilities": []} } }}

At this stage, the only command that can be recognized is {{ic|qmp_capabilities}}, so that QMP enters command mode. Type:

 {"execute": "qmp_capabilities"}

Now QMP is ready to receive commands. To retrieve the list of recognized commands, use:

 {"execute": "query-commands"}


=== Live merging of child image into parent image ===

It is possible to merge a running snapshot into its parent by issuing a {{ic|block-commit}} command. In its simplest form, the following line will commit the child into its parent:

 {"execute": "block-commit", "arguments": {"device": "''devicename''"}}

Upon reception of this command, the handler looks for the base image, converts it from read-only to read-write mode, and then runs the commit job.

Once the ''block-commit'' operation has completed, the event {{ic|BLOCK_JOB_READY}} will be emitted, signalling that the synchronization has finished. The job can then be gracefully completed by issuing the command {{ic|block-job-complete}}:

 {"execute": "block-job-complete", "arguments": {"device": "''devicename''"}}

Until such a command is issued, the ''commit'' operation remains active.
After successful completion, the base image remains in read-write mode and becomes the new active layer. On the other hand, the child image becomes invalid and it is the responsibility of the user to clean it up.

{{Tip|The list of devices and their names can be retrieved by executing the command {{ic|query-block}} and parsing the results. The device name is in the {{ic|device}} field, for example {{ic|ide0-hd0}} for the hard disk in this example: {{hc|{"execute": "query-block"}|{"return": [{"io-status": "ok", "device": "'''ide0-hd0'''", "locked": false, "removable": false, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"backing-image": {"virtual-size": 27074281472, "filename": "parent.qcow2", ... } }} }}

=== Live creation of a new snapshot ===

To create a new snapshot out of a running image, run the command:

 {"execute": "blockdev-snapshot-sync", "arguments": {"device": "''devicename''","snapshot-file": "''new_snapshot_name''.qcow2"}}

This creates an overlay file named {{ic|''new_snapshot_name''.qcow2}}, which then becomes the new active layer.
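
Putting the two operations together, a minimal sketch of a snapshot-then-merge cycle, assuming the device name {{ic|ide0-hd0}} from the tip above:

 {"execute": "blockdev-snapshot-sync", "arguments": {"device": "ide0-hd0", "snapshot-file": "/tmp/overlay.qcow2"}}
 {"execute": "block-commit", "arguments": {"device": "ide0-hd0"}}
 {"execute": "block-job-complete", "arguments": {"device": "ide0-hd0"}}

The first command makes {{ic|/tmp/overlay.qcow2}} the active layer, the second merges it back into its parent, and the last one finalizes the job once {{ic|BLOCK_JOB_READY}} has been emitted.
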
== Tips and tricks ==

=== Improve virtual machine performance ===

There are a number of techniques that you can use to improve the performance of the virtual machine. For example:

* Apply [[#Enabling KVM]] for full virtualization.
* Use the {{ic|-cpu host}} option to make QEMU emulate the host's exact CPU rather than a more generic CPU.
* Especially for Windows guests, enable [https://blog.wikichoon.com/2014/07/enabling-hyper-v-enlightenments-with-kvm.html Hyper-V enlightenments]: {{ic|1=-cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time}}. See the [https://www.qemu.org/docs/master/system/i386/hyperv.html QEMU documentation] for more information and flags.
* Multiple cores can be assigned to the guest using the {{ic|-smp cores{{=}}x,threads{{=}}y,sockets{{=}}1,maxcpus{{=}}z}} option. The threads parameter is used to assign [https://www.tomshardware.com/reviews/simultaneous-multithreading-definition,5762.html SMT cores]. Leaving a physical core for QEMU, the hypervisor and the host system to operate unimpeded is highly beneficial.
* Make sure you have assigned the virtual machine enough memory. By default, QEMU only assigns 128 MiB of memory to each virtual machine. Use the {{ic|-m}} option to assign more memory. For example, {{ic|-m 1024}} runs a virtual machine with 1024 MiB of memory.
* If supported by drivers in the guest operating system, use virtio for network and/or block devices, see [[#Installing virtio drivers]].
* Use TAP devices instead of user-mode networking, see [[#Tap networking with QEMU]].
* If the guest OS is doing heavy writing to its disk, you may benefit from certain mount options on the host's file system. For example, you can mount an [[Ext4|ext4 file system]] with the option {{ic|1=barrier=0}}. You should read the documentation for any options that you change because sometimes performance-enhancing options for file systems come at the cost of data integrity.
* If you have a raw disk or partition, you may want to disable the cache: {{bc|1=$ qemu-system-x86_64 -drive file=/dev/''disk'',if=virtio,'''cache=none'''}}
* Use the native Linux AIO: {{bc|1=$ qemu-system-x86_64 -drive file=''disk_image'',if=virtio''',aio=native,cache.direct=on'''}}
* If you are running multiple virtual machines concurrently that all have the same operating system installed, you can save memory by enabling [[wikipedia:Kernel_SamePage_Merging_(KSM)|kernel same-page merging]]. See [[#Enabling KSM]].
* In some cases, memory can be reclaimed from running virtual machines by running a memory ballooning driver in the guest operating system and launching QEMU with {{ic|-device virtio-balloon}}.
* It is possible to use an emulation layer for an ICH-9 AHCI controller (although it may be unstable). The AHCI emulation supports [[Wikipedia:Native_Command_Queuing|NCQ]], so multiple read or write requests can be outstanding at the same time: {{bc|1=$ qemu-system-x86_64 -drive id=disk,file=''disk_image'',if=none -device ich9-ahci,id=ahci -device ide-drive,drive=disk,bus=ahci.0}}

See https://www.linux-kvm.org/page/Tuning_KVM for more information.
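
As an illustrative sketch combining several of the options above (KVM, host CPU model, multiple cores, more memory, and virtio disk and network; the exact values are examples, adjust them to your hardware):

 $ qemu-system-x86_64 -enable-kvm -cpu host -smp cores=4,threads=1,sockets=1,maxcpus=4 -m 4096 \
     -drive file=''disk_image'',if=virtio,aio=native,cache.direct=on \
     -nic user,model=virtio-net-pci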


=== Starting QEMU virtual machines on boot ===

==== With libvirt ====

If a virtual machine is set up with [[libvirt]], it can be configured with {{ic|virsh autostart}} or through the ''virt-manager'' GUI to start at host boot by going to the Boot Options for the virtual machine and selecting "Start virtual machine on host boot up".

==== With systemd service ====

To run QEMU virtual machines on boot, you can use the following systemd unit and config.

{{hc|/etc/systemd/system/qemu@.service|2=
[Unit]
Description=QEMU virtual machine

[Service]
Environment="haltcmd=kill -INT $MAINPID"
EnvironmentFile=/etc/conf.d/qemu.d/%i
ExecStart=/usr/bin/qemu-system-x86_64 -name %i -enable-kvm -m 512 -nographic $args
ExecStop=/usr/bin/bash -c ${haltcmd}
ExecStop=/usr/bin/bash -c 'while nc localhost 7100; do sleep 1; done'

[Install]
WantedBy=multi-user.target
}}

{{Note|This service will wait for the console port to be released, which means that the virtual machine has been shut down, before ending gracefully.}}

Then create per-VM configuration files, named {{ic|/etc/conf.d/qemu.d/''vm_name''}}, with the variables {{ic|args}} and {{ic|haltcmd}} set. Example configs:

{{hc|/etc/conf.d/qemu.d/one|2=
args="-hda /dev/vg0/vm1 -serial telnet:localhost:7000,server,nowait,nodelay \
 -monitor telnet:localhost:7100,server,nowait,nodelay -vnc :0"

haltcmd="echo 'system_powerdown' {{!}} nc localhost 7100" # or netcat/ncat}}

{{hc|/etc/conf.d/qemu.d/two|2=
args="-hda /srv/kvm/vm2 -serial telnet:localhost:7001,server,nowait,nodelay -vnc :1"

haltcmd="ssh powermanager@vm2 sudo poweroff"}}

The description of the variables is the following:

* {{ic|args}} - QEMU command line arguments to be used.
* {{ic|haltcmd}} - Command to shut down a virtual machine safely. In the first example, the QEMU monitor is exposed via telnet using {{ic|-monitor telnet:..}} and the virtual machines are powered off via ACPI by sending {{ic|system_powerdown}} to the monitor with the {{ic|nc}} command. In the other example, SSH is used.

To set which virtual machines will start on boot-up, [[enable]] the {{ic|qemu@''vm_name''.service}} systemd unit.
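
For example, to start the virtual machine defined in {{ic|/etc/conf.d/qemu.d/one}} at boot:

 # systemctl enable qemu@one.service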


=== Mouse integration ===

To prevent the mouse from being grabbed when clicking on the guest operating system's window, add the options {{ic|-usb -device usb-tablet}}. This means QEMU is able to report the mouse position without having to grab the mouse. This also overrides PS/2 mouse emulation when activated. For example:

 $ qemu-system-x86_64 -hda ''disk_image'' -m 512 -usb -device usb-tablet

If that does not work, try using the {{ic|-vga qxl}} parameter, and also look at the instructions in [[#Mouse cursor is jittery or erratic]].

=== Pass-through host USB device ===

It is possible to access a physical device connected to a USB port of the host from the guest. The first step is to identify where the device is connected; this can be found by running the {{ic|lsusb}} command. For example:

{{hc|$ lsusb|
...
Bus '''003''' Device '''007''': ID '''0781''':'''5406''' SanDisk Corp. Cruzer Micro U3
}}

The outputs in bold above will be useful to identify respectively the ''host_bus'' and ''host_addr'' or the ''vendor_id'' and ''product_id''.

In QEMU, the idea is to emulate an EHCI (USB 2) or XHCI (USB 1.1, USB 2 and USB 3) controller with the option {{ic|1=-device usb-ehci,id=ehci}} or {{ic|1=-device qemu-xhci,id=xhci}} respectively, and then attach the physical device to it with the option {{ic|1=-device usb-host,..}}. We will consider that ''controller_id'' is either {{ic|ehci}} or {{ic|xhci}} for the rest of this section.

Then, there are two ways to connect to the USB device of the host with QEMU:

# Identify the device and connect to it on any bus and address it is attached to on the host, the generic syntax is: {{bc|1=-device usb-host,bus=''controller_id''.0,vendorid=0x''vendor_id'',productid=0x''product_id''}}Applied to the device used in the example above, it becomes:{{bc|1=-device usb-ehci,id=ehci -device usb-host,bus=ehci.0,vendorid=0x'''0781''',productid=0x'''5406'''}}One can also add the {{ic|1=...,port=''port_number''}} setting to the previous option to specify in which physical port of the virtual controller the device should be attached, useful in the case one wants to add multiple USB devices to the virtual machine. Another option is to use the new {{ic|hostdevice}} property of {{ic|usb-host}} which is available since QEMU 5.1.0, the syntax is: {{bc|1=-device qemu-xhci,id=xhci -device usb-host,hostdevice=/dev/bus/usb/003/007}}
# Attach whatever is connected to a given USB bus and address, the syntax is:{{bc|1=-device usb-host,bus=''controller_id''.0,hostbus=''host_bus'',host_addr=''host_addr''}}Applied to the bus and the address in the example above, it becomes:{{bc|1=-device usb-ehci,id=ehci -device usb-host,bus=ehci.0,hostbus='''3''',hostaddr='''7'''}}

See [https://www.qemu.org/docs/master/system/devices/usb.html QEMU/USB emulation] for more information.

{{Note|If you encounter permission errors when running QEMU, see [[udev#About udev rules]] for information on how to set permissions of the device.}}


=== USB redirection with SPICE ===

When using [[#SPICE]] it is possible to redirect USB devices from the client to the virtual machine without needing to specify them in the QEMU command. It is possible to configure the number of USB slots available for redirected devices (the number of slots will determine the maximum number of devices which can be redirected simultaneously). The main advantages of using SPICE for redirection compared to the previously-mentioned {{ic|-usbdevice}} method is the possibility of hot-swapping USB devices after the virtual machine has started, without needing to halt it in order to remove USB devices from the redirection or to add new ones. This method of USB redirection also allows us to redirect USB devices over the network, from the client to the server. In summary, it is the most flexible method of using USB devices in a QEMU virtual machine.

We need to add one EHCI/UHCI controller per available USB redirection slot desired, as well as one SPICE redirection channel per slot. For example, adding the following arguments to the QEMU command you use for starting the virtual machine in SPICE mode will start the virtual machine with three available USB slots for redirection:

{{bc|1=
-device ich9-usb-ehci1,id=usb \
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,multifunction=on \
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2 \
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4 \
-chardev spicevmc,name=usbredir,id=usbredirchardev1 -device usb-redir,chardev=usbredirchardev1,id=usbredirdev1 \
-chardev spicevmc,name=usbredir,id=usbredirchardev2 -device usb-redir,chardev=usbredirchardev2,id=usbredirdev2 \
-chardev spicevmc,name=usbredir,id=usbredirchardev3 -device usb-redir,chardev=usbredirchardev3,id=usbredirdev3
}}

See [https://www.spice-space.org/usbredir.html SPICE/usbredir] for more information.

Both {{ic|spicy}} from {{Pkg|spice-gtk}} (''Input > Select USB Devices for redirection'') and {{ic|remote-viewer}} from {{Pkg|virt-viewer}} (''File > USB device selection'') support this feature. Please make sure that you have installed the necessary SPICE Guest Tools on the virtual machine for this functionality to work as expected (see the [[#SPICE]] section for more information).

{{Warning|Keep in mind that when a USB device is redirected from the client, it will not be usable from the client operating system itself until the redirection is stopped. It is especially important to never redirect the input devices (namely mouse and keyboard), since it will then be difficult to access the SPICE client menus to revert the situation, because the client will not respond to the input devices after they are redirected to the virtual machine.}}


==== Automatic USB forwarding with udev ====

Normally, forwarded devices must be available at the boot time of the virtual machine to be forwarded. If that device is disconnected, it will not be forwarded anymore.

You can use [[udev rule]]s to automatically attach a device when it comes online. Create a {{ic|hostdev}} entry somewhere on disk. [[chown]] it to root to prevent other users modifying it.

{{hc|/usr/local/hostdev-mydevice.xml|2=
<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x03f0'/>
    <product id='0x4217'/>
  </source>
</hostdev>
}}

Then create a ''udev'' rule which will attach/detach the device:

{{hc|/usr/lib/udev/rules.d/90-libvirt-mydevice|2=
ACTION=="add", \
    SUBSYSTEM=="usb", \
    ENV{ID_VENDOR_ID}=="03f0", \
    ENV{ID_MODEL_ID}=="4217", \
    RUN+="/usr/bin/virsh attach-device GUESTNAME /usr/local/hostdev-mydevice.xml"
ACTION=="remove", \
    SUBSYSTEM=="usb", \
    ENV{ID_VENDOR_ID}=="03f0", \
    ENV{ID_MODEL_ID}=="4217", \
    RUN+="/usr/bin/virsh detach-device GUESTNAME /usr/local/hostdev-mydevice.xml"
}}

[https://rolandtapken.de/blog/2011-04/how-auto-hotplug-usb-devices-libvirt-vms-update-1 Source and further reading].


=== Enabling KSM ===

Kernel Samepage Merging (KSM) is a feature of the Linux kernel that allows for an application to register with the kernel to have its pages merged with other processes that also register to have their pages merged. The KSM mechanism allows for guest virtual machines to share pages with each other. In an environment where many of the guest operating systems are similar, this can result in significant memory savings.

{{Note|Although KSM may reduce memory usage, it may increase CPU usage. Also note some security issues may occur, see [[Wikipedia:Kernel same-page merging]].}}

To enable KSM:

 # echo 1 > /sys/kernel/mm/ksm/run

To make it permanent, use [[systemd#systemd-tmpfiles - temporary files|systemd's temporary files]]:

{{hc|/etc/tmpfiles.d/ksm.conf|
w /sys/kernel/mm/ksm/run - - - - 1
}}

If KSM is running, and there are pages to be merged (i.e. at least two similar virtual machines are running), then {{ic|/sys/kernel/mm/ksm/pages_shared}} should be non-zero. See https://docs.kernel.org/admin-guide/mm/ksm.html for more information.

{{Tip|An easy way to see how well KSM is performing is to simply print the contents of all the files in that directory:

$ grep -r . /sys/kernel/mm/ksm/

}}


=== Multi-monitor support ===

The Linux QXL driver supports four heads (virtual screens) by default. This can be changed via the {{ic|1=qxl.heads=N}} kernel parameter.

The default VGA memory size for QXL devices is 16M (VRAM size is 64M). This is not sufficient if you would like to enable two 1920x1200 monitors, since that requires 2 × 1920 × 4 (color depth) × 1200 = 17.6 MiB of VGA memory. This can be changed by replacing {{ic|-vga qxl}} with {{ic|<nowiki>-vga none -device qxl-vga,vgamem_mb=32</nowiki>}}. If you ever increase vgamem_mb beyond 64M, then you also have to increase the {{ic|vram_size_mb}} option.
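
For example, a hypothetical invocation with increased VGA memory and VRAM for two large monitors might look like the following; the exact sizes here are assumptions, tune them to your setup:

 $ qemu-system-x86_64 -vga none -device qxl-vga,vgamem_mb=32,vram_size_mb=128 ''[...]''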


=== Custom display resolution ===

A custom display resolution can be set with {{ic|1=-device VGA,edid=on,xres=1280,yres=720}} (see [[wikipedia:Extended_Display_Identification_Data|EDID]] and [[wikipedia:Display_resolution|display resolution]]).

=== Copy and paste ===

==== SPICE ====

One way to share the clipboard between the host and the guest is to enable the SPICE remote desktop protocol and access the client with a SPICE client. One needs to follow the steps described in [[#SPICE]]. A guest run this way will support copy and paste with the host.

==== qemu-vdagent ====

QEMU provides its own implementation of the spice vdagent chardev called {{ic|qemu-vdagent}}. It interfaces with the spice-vdagent guest service and allows the guest and host to share a clipboard.
 
To access this shared clipboard with QEMU's GTK display, you will need to compile QEMU [[Arch build system|from source]] with the {{ic|--enable-gtk-clipboard}} configure parameter. It is sufficient to replace the installed {{ic|qemu-ui-gtk}} package.
 
{{Note|
* Feature request {{Bug|79716}} submitted to enable the functionality in the official package.
* The shared clipboard in qemu-ui-gtk has been pushed back to experimental as it can [https://gitlab.com/qemu-project/qemu/-/issues/1150 freeze guests under certain circumstances]. A fix has been proposed to solve the issue upstream.
}}
 
Add the following QEMU command line arguments:
 
-device virtio-serial,packed=on,ioeventfd=on
-device virtserialport,name=com.redhat.spice.0,chardev=vdagent0
-chardev qemu-vdagent,id=vdagent0,name=vdagent,clipboard=on,mouse=off
 
These arguments are also valid if converted to [[Libvirt#QEMU command line arguments|libvirt form]].
 
{{Note|While the spicevmc chardev will start the spice-vdagent service of the guest automatically, the qemu-vdagent chardev may not.}}
 
On Linux guests, you may [[start]] the {{ic|spice-vdagent.service}} [[user unit]] manually. On Windows guests, set the spice-agent service's startup type to automatic.
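
Putting it together, a minimal sketch of a GTK-display invocation with the vdagent clipboard enabled, assuming a {{ic|qemu-ui-gtk}} build with {{ic|--enable-gtk-clipboard}} as described above:

 $ qemu-system-x86_64 -display gtk ''[...]'' \
     -device virtio-serial,packed=on,ioeventfd=on \
     -device virtserialport,name=com.redhat.spice.0,chardev=vdagent0 \
     -chardev qemu-vdagent,id=vdagent0,name=vdagent,clipboard=on,mouse=off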
 
=== Windows-specific notes ===
 
QEMU can run any version of Windows from Windows 95 through Windows 11.
 
It is possible to run [[Windows PE]] in QEMU.
 
==== Fast startup ====
 
{{Note|An administrator account is required to change power settings.}}
 
For Windows 8 (or later) guests it is better to disable "Turn on fast startup (recommended)" from the Power Options of the Control Panel as explained in the following [https://www.tenforums.com/tutorials/4189-turn-off-fast-startup-windows-10-a.html forum page], as it causes the guest to hang during every other boot.
 
Fast Startup may also need to be disabled for changes to the {{ic|-smp}} option to be properly applied.
 
==== Remote Desktop Protocol ====
 
If you use an MS Windows guest, you might want to use RDP to connect to your guest virtual machine. If you are using a VLAN or are not in the same network as the guest, use:
 
$ qemu-system-x86_64 -nographic -nic user,hostfwd=tcp::5555-:3389
 
Then connect with either {{Pkg|rdesktop}} or {{Pkg|freerdp}} to the guest. For example:
 
$ xfreerdp -g 2048x1152 localhost:5555 -z -x lan
 
=== Clone Linux system installed on physical equipment ===
 
A Linux system installed on physical hardware can be cloned to run in a QEMU virtual machine. See [https://coffeebirthday.wordpress.com/2018/09/14/clone-linux-system-for-qemu-virtual-machine/ Clone Linux system from hardware for QEMU virtual machine].
 
=== Chrooting into arm/arm64 environment from x86_64 ===
 
Sometimes it is easier to work directly on a disk image instead of the real ARM based device. This can be achieved by mounting an SD card/storage containing the ''root'' partition and chrooting into it.
 
Another use case for an ARM chroot is building ARM packages on an x86_64 machine. Here, the chroot environment can be created from an image tarball from [https://archlinuxarm.org Arch Linux ARM] - see [https://nerdstuff.org/posts/2020/2020-003_simplest_way_to_create_an_arm_chroot/] for a detailed description of this approach.
 
Either way, from the chroot it should be possible to run ''pacman'' and install more packages, compile large libraries etc. Since the executables are for the ARM architecture, the translation to x86 needs to be performed by [[QEMU]].
 
Install {{Pkg|qemu-user-static}} on the x86_64 machine/host, and {{Pkg|qemu-user-static-binfmt}} to register the qemu binaries with the binfmt service.
 
''qemu-user-static'' is used to allow the execution of compiled programs from other architectures. This is similar to what is provided by {{Pkg|qemu-emulators-full}}, but the "static" variant is required for chroot. Examples:
 
qemu-arm-static path_to_sdcard/usr/bin/ls
qemu-aarch64-static path_to_sdcard/usr/bin/ls
 
These two lines execute the {{ic|ls}} command compiled for 32-bit ARM and 64-bit ARM respectively. Note that this will not work without chrooting, because it will look for libraries not present in the host system.
 
{{Pkg|qemu-user-static}} allows automatically prefixing the ARM executable with {{ic|qemu-arm-static}} or {{ic|qemu-aarch64-static}}.
 
Make sure that the ARM executable support is active:
 
{{hc|$ ls /proc/sys/fs/binfmt_misc|
qemu-aarch64  qemu-arm   qemu-cris  qemu-microblaze  qemu-mipsel  qemu-ppc64     qemu-riscv64  qemu-sh4    qemu-sparc qemu-sparc64  status
qemu-alpha    qemu-armeb  qemu-m68k  qemu-mips       qemu-ppc   qemu-ppc64abi32  qemu-s390x   qemu-sh4eb  qemu-sparc32plus register
}}
 
Each executable must be listed.
 
If it is not active, [[restart]] {{ic|systemd-binfmt.service}}.
 
Mount the SD card to {{ic|/mnt/sdcard}} (the device name may be different).
 
# mount --mkdir /dev/mmcblk0p2 /mnt/sdcard
 
Mount boot partition if needed (again, use the suitable device name):
 
# mount /dev/mmcblk0p1 /mnt/sdcard/boot
 
Finally ''chroot'' into the SD card root as described in [[Change root#Using chroot]]:
 
# chroot /mnt/sdcard /bin/bash
 
Alternatively, you can use ''arch-chroot'' from {{Pkg|arch-install-scripts}}, as it will provide an easier way to get network support:
 
# arch-chroot /mnt/sdcard /bin/bash
 
You can also use [[systemd-nspawn]] to chroot into the ARM environment:
 
# systemd-nspawn -D /mnt/sdcard -M myARMMachine --bind-ro=/etc/resolv.conf
 
{{ic|1=--bind-ro=/etc/resolv.conf}} is optional and gives working DNS resolution inside the chroot.
 
==== sudo in chroot ====
 
If you install [[sudo]] in the chroot and receive the following error when trying to use it:
 
sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?
 
then you may need to modify the binfmt flags, for example for {{ic|aarch64}}:
 
# cp /usr/lib/binfmt.d/qemu-aarch64-static.conf /etc/binfmt.d/
# vi /etc/binfmt.d/qemu-aarch64-static.conf
 
and add a {{ic|C}} at the end of this file:
 
:qemu-aarch64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-aarch64-static:FPC
 
Then [[restart]] {{ic|systemd-binfmt.service}} and check that the changes have taken effect (note the {{ic|C}} on the {{ic|flags}} line):
 
{{hc|# cat /proc/sys/fs/binfmt_misc/qemu-aarch64|
enabled
interpreter /usr/bin/qemu-aarch64-static
flags: POCF
offset 0
magic 7f454c460201010000000000000000000200b700
mask ffffffffffffff00fffffffffffffffffeffffff
}}
 
See the "flags" section of the [https://docs.kernel.org/admin-guide/binfmt-misc.html kernel binfmt documentation] for more information.
 
=== Not grabbing mouse input ===
 
Emulating a USB tablet device avoids the need for QEMU to grab mouse input: with an absolute pointing device, the host cursor position maps directly onto the guest, so the QEMU window does not capture the mouse:

-usb -device usb-tablet

This works with several {{ic|-vga}} backends, one of which is virtio.
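
A minimal sketch of a full invocation ({{ic|''disk_image''}} is a placeholder):

$ qemu-system-x86_64 -enable-kvm -vga virtio -usb -device usb-tablet ''disk_image''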
 
== Troubleshooting ==
 
{{Merge|QEMU/Troubleshooting|This section is long enough to be split into a dedicated subpage.}}
 
=== Mouse cursor is jittery or erratic ===
 
If the cursor jumps around the screen uncontrollably, entering this on the terminal before starting QEMU might help:
 
$ export SDL_VIDEO_X11_DGAMOUSE=0
 
If this helps, you can add this to your {{ic|~/.bashrc}} file.
 
=== No visible cursor ===
 
Add {{ic|1=-display default,show-cursor=on}} to QEMU's options to see a mouse cursor.
 
If that still does not work, make sure you have set your display device appropriately, for example: {{ic|-vga qxl}}.
 
Another option to try is {{ic|-usb -device usb-tablet}} as mentioned in [[#Mouse integration]]. This overrides the default PS/2 mouse emulation and synchronizes pointer location between host and guest as an added bonus.
 
=== Two different mouse cursors are visible ===
 
Apply the tip from [[#Mouse integration]].
 
=== Keyboard issues when using VNC ===
 
When using VNC, you might experience keyboard problems described (in gory details) [https://www.berrange.com/posts/2010/07/04/more-than-you-or-i-ever-wanted-to-know-about-virtual-keyboard-handling/ here]. The solution is ''not'' to use the {{ic|-k}} option on QEMU, and to use {{ic|gvncviewer}} from {{Pkg|gtk-vnc}}. See also [https://www.mail-archive.com/libvir-list@redhat.com/msg13340.html this] message posted on libvirt's mailing list.
 
=== Keyboard seems broken or the arrow keys do not work ===
 
Should you find that some of your keys do not work or "press" the wrong key (in particular, the arrow keys), you likely need to specify your keyboard layout as an option. The keyboard layouts can be found in {{ic|/usr/share/qemu/keymaps/}}.
 
$ qemu-system-x86_64 -k ''keymap'' ''disk_image''
 
=== Could not read keymap file ===
 
qemu-system-x86_64: -display vnc=0.0.0.0:0: could not read keymap file: 'en'
 
is caused by an invalid ''keymap'' passed to the {{ic|-k}} argument. For example, {{ic|en}} is invalid, but {{ic|en-us}} is valid - see {{ic|/usr/share/qemu/keymaps/}}.
 
=== Guest display stretches on window resize ===
 
To restore default window size, press {{ic|Ctrl+Alt+u}}.
 
=== ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy ===
 
If an error message like this is printed when starting QEMU with {{ic|-enable-kvm}} option:
 
ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy
failed to initialize KVM: Device or resource busy
 
that means another [[hypervisor]] is currently running; running several hypervisors in parallel is not recommended and generally not possible.
 
=== libgfapi error message ===
 
The error message displayed at startup:
 
Failed to open module: libgfapi.so.0: cannot open shared object file: No such file or directory
 
[[Install]] {{Pkg|glusterfs}} or ignore the error message, as GlusterFS is an optional dependency.
 
=== Kernel panic on LIVE-environments ===
 
If you boot a live environment (or, more generally, boot any system), you may encounter this:

[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown block(0,0)

or some other boot-hindering error (e.g. failure to unpack the initramfs, or failure to start some service).
Try starting the virtual machine with the {{ic|-m ''VALUE''}} switch and an appropriate amount of RAM; if the RAM is too low, you will probably encounter similar issues to the above.
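
For example, a minimal sketch booting an ISO with 1 GiB of RAM ({{ic|archlinux.iso}} is a placeholder path):

$ qemu-system-x86_64 -enable-kvm -m 1G -cdrom archlinux.iso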
 
=== Windows 7 guest suffers low-quality sound ===
 
Using the {{ic|hda}} audio driver for Windows 7 guest may result in low-quality sound. Changing the audio driver to {{ic|ac97}} by passing the {{ic|-soundhw ac97}} arguments to QEMU and installing the AC97 driver from [https://www.realtek.com/en/component/zoo/category/pc-audio-codecs-ac-97-audio-codecs-software Realtek AC'97 Audio Codecs] in the guest may solve the problem. See [https://bugzilla.redhat.com/show_bug.cgi?id=1176761#c16 Red Hat Bugzilla – Bug 1176761] for more information.
 
=== Could not access KVM kernel module: Permission denied ===
 
If you encounter the following error:
 
libvirtError: internal error: process exited while connecting to monitor: Could not access KVM kernel module: Permission denied failed to initialize KVM: Permission denied
 
Systemd 234 assigns a dynamic ID to the {{ic|kvm}} group (see {{Bug|54943}}). To avoid this error, you need to edit the file {{ic|/etc/libvirt/qemu.conf}} and change the line with {{ic|1=group = "78"}} to {{ic|1=group = "kvm"}}.
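
That is, the relevant part of the file should read:

{{hc|/etc/libvirt/qemu.conf|2=
group = "kvm"
}}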
 
=== "System Thread Exception Not Handled" when booting a Windows virtual machine ===
 
Windows 8 or Windows 10 guests may raise a generic compatibility exception at boot, namely "System Thread Exception Not Handled", which tends to be caused by legacy drivers acting strangely on real machines. On KVM machines this issue can generally be solved by setting the CPU model to {{ic|core2duo}}.
 
=== Certain Windows games/applications crashing/causing a bluescreen ===
 
Occasionally, applications running in the virtual machine may crash unexpectedly, whereas they would run normally on a physical machine. If, while running {{ic|dmesg -wH}} as root, you encounter an error mentioning {{ic|MSR}}, the reason for those crashes is that KVM injects a [[wikipedia:General protection fault|General protection fault]] (GPF) when the guest tries to access unsupported [[wikipedia:Model-specific register|Model-specific registers]] (MSRs) - this often results in guest applications/OS crashing. A number of those issues can be solved by passing the {{ic|1=ignore_msrs=1}} option to the KVM module, which will ignore unimplemented MSRs.
 
{{hc|/etc/modprobe.d/kvm.conf|2=
...
options kvm ignore_msrs=1
...
}}
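
The setting in {{ic|kvm.conf}} takes effect the next time the module is loaded. To apply it immediately at runtime (assuming the {{ic|kvm}} module is already loaded), the parameter can also be written directly:

# echo 1 > /sys/module/kvm/parameters/ignore_msrs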
 
Cases where adding this option might help:
 
* GeForce Experience complaining about an unsupported CPU being present.
* StarCraft 2 and L.A. Noire reliably blue-screening Windows 10 with {{ic|KMODE_EXCEPTION_NOT_HANDLED}}. The blue screen information does not identify a driver file in these cases.
 
{{Warning|While this is normally safe and some applications might not work without this, silently ignoring unknown MSR accesses could potentially break other software within the virtual machine or other virtual machines.}}
 
=== Applications in the virtual machine experience long delays or take a long time to start ===
 
{{Out of date|No longer true since kernel 5.6}}
 
This may be caused by insufficient available entropy in the virtual machine. Consider allowing the guest to access the host's entropy pool by adding a [https://wiki.qemu.org/Features/VirtIORNG VirtIO RNG device] to the virtual machine, or by installing an entropy generating daemon such as [[Haveged]].
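
For example, a minimal sketch of attaching a VirtIO RNG device fed from the host's {{ic|/dev/urandom}}:

-object rng-random,filename=/dev/urandom,id=rng0 \
-device virtio-rng-pci,rng=rng0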
 
Anecdotally, OpenSSH takes a while to start accepting connections under insufficient entropy, without the logs revealing why.


=== High interrupt latency and microstuttering ===

This problem manifests itself as small pauses (stutters) and is particularly noticeable in graphics-intensive applications, such as games.

* One of the causes is CPU power saving features, which are controlled by [[CPU frequency scaling]]. Change this to {{ic|performance}} for all processor cores.
* Another possible cause is PS/2 inputs. Switch from PS/2 to Virtio inputs, see [[PCI passthrough via OVMF#Passing keyboard/mouse via Evdev]].

=== QXL video causes low resolution ===

QEMU 4.1.0 introduced a regression where QXL video can fall back to low resolutions, when being displayed through spice. [https://bugs.launchpad.net/qemu/+bug/1843151] For example, when KMS starts, text resolution may become as low as 4x10 characters. When trying to increase GUI resolution, it may go to the lowest supported resolution.

As a workaround, create your device in this form:

 -device qxl-vga,max_outputs=1...

=== Virtual machine not booting when using a Secure Boot enabled OVMF ===

{{ic|OVMF_CODE.secboot.4m.fd}} and {{ic|OVMF_CODE.secboot.fd}} files from {{Pkg|edk2-ovmf}} are built with [[Wikipedia:System Management Mode|SMM]] support. If S3 support is not disabled in the virtual machine, then the virtual machine might not boot at all.

Add the {{ic|1=-global ICH9-LPC.disable_s3=1}} option to the ''qemu'' command.

See {{Bug|59465}} and https://github.com/tianocore/edk2/blob/master/OvmfPkg/README for more details and the required options to use Secure Boot in QEMU.

=== Virtual machine not booting into Arch ISO ===

When trying to boot the virtual machine for the first time from an Arch ISO image, the boot process may hang. By adding {{ic|1=console=ttyS0}} to the kernel boot options (press {{ic|e}} in the boot menu), you will get more boot messages and the following error:

:: Mounting '/dev/disk/by-label/ARCH_202204' to '/run/archiso/bootmnt'
Waiting 30 seconds for device /dev/disk/by-label/ARCH_202204 ...
ERROR: '/dev/disk/by-label/ARCH_202204' device did not show up after 30 seconds...
    Falling back to interactive prompt
    You can try to fix the problem manually, log out when you are finished
sh: can't access tty; job control turned off

The error message does not give a good clue as to what the real issue is. The problem is with the default 128 MB of RAM that QEMU allocates to the virtual machine. Increasing the limit to 1024 MB with {{ic|-m 1024}} solves the issue and lets the system boot. You can continue installing Arch Linux as usual after that. Once the installation is complete, the memory allocation for the virtual machine can be decreased. The need for 1024 MB is due to RAM disk requirements and the size of the installation media. See [https://lists.archlinux.org/archives/list/arch-releng@lists.archlinux.org/message/D5HSGOFTPGYI6IZUEB3ZNAX4D3F3ID37/ this message on the arch-releng mailing list] and [https://bbs.archlinux.org/viewtopic.php?id=204023 this forum thread].
 
=== Guest CPU interrupts are not firing ===

If you are writing your own operating system by following the [https://wiki.osdev.org/ OSDev wiki], or are simply stepping through the guest architecture assembly code using QEMU's {{ic|gdb}} interface with the {{ic|-s}} flag, it is useful to know that many emulators, QEMU included, usually implement some CPU interrupts, leaving many hardware interrupts unimplemented. One way to know if your code is firing an interrupt is by using:

-d int

to enable showing interrupts/exceptions on stdout.

To see what other guest debugging features QEMU has to offer, see:

qemu-system-x86_64 -d help

or replace {{ic|x86_64}} with your chosen guest architecture.

=== KDE with sddm does not start spice-vdagent at login automatically ===

Remove or comment out {{ic|X-GNOME-Autostart-Phase{{=}}WindowManager}} from {{ic|/etc/xdg/autostart/spice-vdagent.desktop}}. [https://github.com/systemd/systemd/issues/18791]

=== Error starting domain: Requested operation is not valid: network 'default' is not active ===

If for any reason the default network is deactivated, you will not be able to start any guest virtual machines which are configured to use the network. Your first attempt can be simply trying to start the network with ''virsh'':

# virsh net-start default

For additional troubleshooting steps, see [https://www.xmodulo.com/network-default-is-not-active.html].

== See also ==

* [https://qemu.org Official QEMU website]
* [https://www.linux-kvm.org Official KVM website]
* [https://qemu.weilnetz.de/doc/6.0/ QEMU Emulator User Documentation]
* [[Wikibooks:QEMU|QEMU Wikibook]]
* [http://alien.slackbook.org/dokuwiki/doku.php?id=slackware:qemu Hardware virtualization with QEMU] by AlienBOB (last updated in 2008)
* [http://blog.falconindy.com/articles/build-a-virtual-army.html Building a Virtual Army] by Falconindy
* [https://git.qemu.org/?p=qemu.git;a=tree;f=docs Latest docs]
* [https://qemu.weilnetz.de/ QEMU on Windows]
* [[wikipedia:Qemu|Wikipedia]]
* [[debian:QEMU|Debian Wiki - QEMU]]
* [https://people.gnome.org/~markmc/qemu-networking.html QEMU Networking on gnome.org]{{Dead link|2022|09|22|status=404}}
* [http://bsdwiki.reedmedia.net/wiki/networking_qemu_virtual_bsd_systems.html Networking QEMU Virtual BSD Systems]
* [https://www.gnu.org/software/hurd/hurd/running/qemu.html QEMU on gnu.org]
* [https://wiki.freebsd.org/qemu QEMU on FreeBSD as host]
* [https://wiki.mikejung.biz/KVM_/_Xen KVM/QEMU Virtio Tuning and SSD VM Optimization Guide]{{Dead link|2022|09|22|status=404}}
* [https://doc.opensuse.org/documentation/leap/virtualization/html/book-virt/part-virt-qemu.html Managing Virtual Machines with QEMU - openSUSE documentation]
* [https://www.ibm.com/support/knowledgecenter/en/linuxonibm/liaat/liaatkvm.htm KVM on IBM Knowledge Center]

QEMU variants

QEMU is offered in several variants suited for different use cases.

As a first classification, QEMU is offered in full-system and usermode emulation modes:

Full-system emulation
In this mode, QEMU emulates a full system, including one or several processors and various peripherals. It is more accurate but slower, and does not require the emulated OS to be Linux.
QEMU commands for full-system emulation are named qemu-system-target_architecture, e.g. qemu-system-x86_64 for emulating x86_64 CPUs, qemu-system-i386 for Intel 32-bit x86 CPUs, qemu-system-arm for ARM (32 bits), qemu-system-aarch64 for ARM64, etc.
If the target architecture matches the host CPU, this mode may still benefit from a significant speedup by using a hypervisor like KVM or Xen.
Usermode emulation
In this mode, QEMU is able to invoke a Linux executable compiled for a (potentially) different architecture by leveraging the host system resources. There may be compatibility issues, e.g. some features may not be implemented, dynamically linked executables will not work out of the box (see #Chrooting into arm/arm64 environment from x86_64 to address this) and only Linux is supported (although Wine may be used for running Windows executables).
QEMU commands for usermode emulation are named qemu-target_architecture, e.g. qemu-x86_64 for emulating 64-bit CPUs.

QEMU is offered in dynamically-linked and statically-linked variants:

Dynamically-linked (default)
qemu-* commands depend on the host OS libraries, so executables are smaller.
Statically-linked
qemu-* commands can be copied to any Linux system with the same architecture.

In the case of Arch Linux, full-system emulation is offered as:

Non-headless (default)
This variant enables GUI features that require additional dependencies (like SDL or GTK).
Headless
This is a slimmer variant that does not require GUI (this is suitable e.g. for servers).

Note that headless and non-headless versions install commands with the same name (e.g. qemu-system-x86_64) and thus cannot be both installed at the same time.

Details on packages available in Arch Linux

  • The qemu-desktop package provides the x86_64 architecture emulators for full-system emulation (qemu-system-x86_64). The qemu-emulators-full package provides the x86_64 usermode variant (qemu-x86_64) and also for the rest of supported architectures it includes both full-system and usermode variants (e.g. qemu-system-arm and qemu-arm).
  • The headless versions of these packages (only applicable to full-system emulation) are qemu-base (x86_64-only) and qemu-emulators-full (rest of architectures).
  • Full-system emulation can be expanded with some QEMU modules present in separate packages: qemu-block-gluster, qemu-block-iscsi and qemu-guest-agent.
  • qemu-user-static provides a usermode and static variant for all target architectures supported by QEMU. The installed QEMU commands are named qemu-target_architecture-static, for example, qemu-x86_64-static for x86_64 CPUs.
Note: At present, Arch does not offer a full-system mode and statically linked variant (neither officially nor via AUR), as this is usually not needed.

Graphical front-ends for QEMU

Unlike other virtualization programs such as VirtualBox and VMware, QEMU does not provide a GUI to manage virtual machines (other than the window that appears when running a virtual machine), nor does it provide a way to create persistent virtual machines with saved settings. All parameters to run a virtual machine must be specified on the command line at every launch, unless you have created a custom script to start your virtual machine(s).

Libvirt provides a convenient way to manage QEMU virtual machines. See list of libvirt clients for available front-ends.

Creating new virtualized system

Creating a hard disk image

The factual accuracy of this article or section is disputed.

Reason: If I get the man page right the raw format only allocates the full size if the filesystem does not support "holes" or it is explicitly told to preallocate. See qemu-img(1) § NOTES. (Discuss in Talk:QEMU)
Tip: See Wikibooks:QEMU/Images for more information on QEMU images.

To run QEMU you will need a hard disk image, unless you are booting a live system from CD-ROM or the network (and not doing so to install an operating system to a hard disk image). A hard disk image is a file which stores the contents of the emulated hard disk.

A hard disk image can be raw, so that it is literally byte-by-byte the same as what the guest sees, and will always use the full capacity of the guest hard drive on the host. This method provides the least I/O overhead, but can waste a lot of space, as not-used space on the guest cannot be used on the host.

Alternatively, the hard disk image can be in a format such as qcow2 which only allocates space to the image file when the guest operating system actually writes to those sectors on its virtual hard disk. The image appears as the full size to the guest operating system, even though it may take up only a very small amount of space on the host system. This image format also supports QEMU snapshotting functionality (see #Creating and managing snapshots via the monitor console for details). However, using this format instead of raw will likely affect performance.

QEMU provides the qemu-img command to create hard disk images. For example to create a 4 GiB image in the raw format:

$ qemu-img create -f raw image_file 4G

You may use -f qcow2 to create a qcow2 disk instead.

Note: You can also simply create a raw image by creating a file of the needed size using dd or fallocate.
Warning: If you store the hard disk images on a Btrfs file system, you should consider disabling Copy-on-Write for the directory before creating any images. For the qcow2 format, the nocow option can be specified when creating the image:
$ qemu-img create -f qcow2 image_file -o nocow=on 4G

Overlay storage images

You can create a storage image once (the 'backing' image) and have QEMU keep mutations to this image in an overlay image. This allows you to revert to a previous state of this storage image. You could revert by creating a new overlay image at the time you wish to revert, based on the original backing image.

To create an overlay image, issue a command like:

$ qemu-img create -o backing_file=img1.raw,backing_fmt=raw -f qcow2 img1.cow

After that you can run your QEMU virtual machine as usual (see #Running virtualized system):

$ qemu-system-x86_64 img1.cow

The backing image will then be left intact and mutations to this storage will be recorded in the overlay image file.
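
If you later want to merge the overlay's changes back into the backing image, the qemu-img commit subcommand can do so (a minimal sketch; note that this modifies img1.raw):

$ qemu-img commit img1.cow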

When the path to the backing image changes, repair is required.

Warning: The backing image's absolute filesystem path is stored in the (binary) overlay image file. Changing the backing image's path requires some effort.

Make sure that the original backing image's path still leads to this image. If necessary, make a symbolic link at the original path to the new path. Then issue a command like:

$ qemu-img rebase -b /new/img1.raw /new/img1.cow

At your discretion, you may alternatively perform an 'unsafe' rebase where the old path to the backing image is not checked:

$ qemu-img rebase -u -b /new/img1.raw /new/img1.cow

Resizing an image

Warning: Resizing an image containing an NTFS boot file system could make the operating system installed on it unbootable. It is recommended to create a backup first.

The qemu-img executable has the resize option, which enables easy resizing of a hard drive image. It works for both raw and qcow2. For example, to increase image space by 10 GiB, run:

$ qemu-img resize disk_image +10G

After enlarging the disk image, you must use file system and partitioning tools inside the virtual machine to actually begin using the new space.
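
You can confirm the new virtual size from the host with qemu-img info:

$ qemu-img info disk_image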

Shrinking an image

When shrinking a disk image, you must first reduce the allocated file systems and partition sizes using the file system and partitioning tools inside the virtual machine and then shrink the disk image accordingly. For a Windows guest, this can be performed from the "create and format hard disk partitions" control panel.

Warning: Proceeding to shrink the disk image without reducing the guest partition sizes will result in data loss.

Then, to decrease image space by 10 GiB, run:

$ qemu-img resize --shrink disk_image -10G

Converting an image

You can convert an image to other formats using qemu-img convert. This example shows how to convert a raw image to qcow2:

$ qemu-img convert -f raw -O qcow2 input.img output.qcow2

This will not remove the original input file.

Preparing the installation media

To install an operating system into your disk image, you need the installation medium (e.g. optical disc, USB-drive, or ISO image) for the operating system. The installation medium should not be mounted because QEMU accesses the media directly.

Tip: If using an optical disc, it is a good idea to first dump the media to a file because this both improves performance and does not require you to have direct access to the devices (that is, you can run QEMU as a regular user without having to change access permissions on the media's device file). For example, if the CD-ROM device node is named /dev/cdrom, you can dump it to a file with the command:
$ dd if=/dev/cdrom of=cd_image.iso bs=4k

Installing the operating system

This is the first time you will need to start the emulator. To install the operating system on the disk image, you must attach both the disk image and the installation media to the virtual machine, and have it boot from the installation media.

For example, on x86_64 guests, to install from a bootable ISO file as CD-ROM and a raw disk image:

$ qemu-system-x86_64 -cdrom iso_image -boot order=d -drive file=disk_image,format=raw

See qemu(1) for more information about loading other media types (such as floppy, disk images or physical drives) and #Running virtualized system for other useful options.

After the operating system has finished installing, the QEMU image can be booted directly (see #Running virtualized system).

Note: By default only 128 MiB of memory is assigned to the machine. The amount of memory can be adjusted with the -m switch, for example -m 512M or -m 2G.
Tip:
  • Instead of specifying -boot order=x, some users may feel more comfortable using a boot menu: -boot menu=on, at least during configuration and experimentation.
  • When running QEMU in headless mode, it starts a local VNC server on port 5900 by default. You can use TigerVNC to connect to the guest OS: vncviewer :5900
  • If you need to replace floppies or CDs as part of the installation process, you can use the QEMU machine monitor (press Ctrl+Alt+2 in the virtual machine's window) to remove and attach storage devices to a virtual machine. Type info block to see the block devices, and use the change command to swap out a device. Press Ctrl+Alt+1 to go back to the virtual machine.
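
A hypothetical monitor session for swapping the CD image (the device name ide1-cd0 is an assumption; check the info block output for the actual name):

(qemu) info block
(qemu) change ide1-cd0 /path/to/next_cd.iso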

Running virtualized system

qemu-system-* binaries (for example qemu-system-i386 or qemu-system-x86_64, depending on guest's architecture) are used to run the virtualized guest. The usage is:

$ qemu-system-x86_64 options disk_image

Options are the same for all qemu-system-* binaries, see qemu(1) for documentation of all options.

Usually, if an option has many possible values, you can use

$ qemu-system-x86_64 option help

to list all possible values. If it supports properties, you can use

$ qemu-system-x86_64 option value,help

to list all available properties.

For example:

$ qemu-system-x86_64 -machine help
$ qemu-system-x86_64 -machine q35,help
$ qemu-system-x86_64 -device help
$ qemu-system-x86_64 -device qxl,help

You can use these methods and the qemu(1) documentation to understand the options used in the following sections.

By default, QEMU will show the virtual machine's video output in a window. One thing to keep in mind: when you click inside the QEMU window, the mouse pointer is grabbed. To release it, press Ctrl+Alt+g.

Warning: QEMU should never be run as root. If you must launch it in a script as root, you should use the -runas option to make QEMU drop root privileges.

Enabling KVM

KVM (Kernel-based Virtual Machine) full virtualization must be supported by your Linux kernel and your hardware, and necessary kernel modules must be loaded. See KVM for more information.

To start QEMU in KVM mode, append -accel kvm to the additional start options. To check if KVM is enabled for a running virtual machine, enter the #QEMU monitor and type info kvm.
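
For example, a minimal sketch starting a guest with KVM acceleration and the host CPU model (disk_image is a placeholder):

$ qemu-system-x86_64 -accel kvm -cpu host -m 2G disk_image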

Note:
  • The argument accel=kvm of the -machine option is equivalent to the -enable-kvm or the -accel kvm option.
  • CPU model host requires KVM.
  • If you start your virtual machine with a GUI tool and experience very bad performance, you should check for proper KVM support, as QEMU may be falling back to software emulation.
  • KVM needs to be enabled in order to start Windows 7 or Windows 8 properly without a blue screen.

Enabling IOMMU (Intel VT-d/AMD-Vi) support

First enable IOMMU, see PCI passthrough via OVMF#Setting up IOMMU.

Add -device intel-iommu to create the IOMMU device:

$ qemu-system-x86_64 -enable-kvm -machine q35 -device intel-iommu -cpu host ..
Note: On Intel CPU based systems creating an IOMMU device in a QEMU guest with -device intel-iommu will disable PCI passthrough with an error like:
Device at bus pcie.0 addr 09.0 requires iommu notifier which is currently not supported by intel-iommu emulation
While adding the kernel parameter intel_iommu=on is still needed for remapping IO (e.g. PCI passthrough with vfio-pci), -device intel-iommu should not be set if PCI passthrough is required.

Booting in UEFI mode

The default firmware used by QEMU is SeaBIOS, which is a Legacy BIOS implementation. QEMU uses /usr/share/qemu/bios-256k.bin (provided by the seabios package) as a default read-only (ROM) image. You can use the -bios argument to select another firmware file. However, UEFI requires writable memory to work properly, so you need to emulate PC System Flash instead.

OVMF is a TianoCore project to enable UEFI support for Virtual Machines. It can be installed with the edk2-ovmf package.

There are two ways to use OVMF as a firmware. The first is to copy /usr/share/edk2/x64/OVMF.4m.fd, make it writable and use as a pflash drive:

-drive if=pflash,format=raw,file=/copy/of/OVMF.4m.fd

All changes to the UEFI settings will be saved directly to this file.

Another, preferable way is to split OVMF into two files. The first one will be read-only and store the firmware executable, and the second one will be used as a writable variable store. The advantage is that you can use the firmware file directly without copying, so it will be updated automatically by pacman.

Use /usr/share/edk2/x64/OVMF_CODE.4m.fd as a first read-only pflash drive. Copy /usr/share/edk2/x64/OVMF_VARS.4m.fd, make it writable and use as a second writable pflash drive:

-drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2/x64/OVMF_CODE.4m.fd \
-drive if=pflash,format=raw,file=/copy/of/OVMF_VARS.4m.fd

If secure boot is wanted, use q35 machine type and replace /usr/share/edk2/x64/OVMF_CODE.4m.fd with /usr/share/edk2/x64/OVMF_CODE.secboot.4m.fd.

Trusted Platform Module emulation

QEMU can emulate Trusted Platform Module, which is required by some systems such as Windows 11 (which requires TPM 2.0).

Install the swtpm package, which provides a software TPM implementation. Create some directory for storing TPM data (/path/to/mytpm will be used as an example). Run this command to start the emulator:

$ swtpm socket --tpm2 --tpmstate dir=/path/to/mytpm --ctrl type=unixio,path=/path/to/mytpm/swtpm-sock

/path/to/mytpm/swtpm-sock will be created by swtpm: this is a UNIX socket to which QEMU will connect. You can put it in any directory.

By default, swtpm starts a TPM version 1.2 emulator. The --tpm2 option enables TPM 2.0 emulation.

Finally, add the following options to QEMU:

-chardev socket,id=chrtpm,path=/path/to/mytpm/swtpm-sock \
-tpmdev emulator,id=tpm0,chardev=chrtpm \
-device tpm-tis,tpmdev=tpm0

and TPM will be available inside the virtual machine. After shutting down the virtual machine, swtpm will be automatically terminated.

See the QEMU documentation for more information.

If the guest OS still does not recognize the TPM device, try adjusting the CPU model and topology options, as these can cause problems.

Sharing data between host and guest

Network

Data can be shared between the host and guest OS using any network protocol that can transfer files, such as NFS, SMB, NBD, HTTP, FTP, or SSH, provided that you have set up the network appropriately and enabled the appropriate services.

The default user-mode networking allows the guest to access the host OS at the IP address 10.0.2.2. Any servers that you are running on your host OS, such as a SSH server or SMB server, will be accessible at this IP address. So on the guests, you can mount directories exported on the host via SMB or NFS, or you can access the host's HTTP server, etc. It will not be possible for the host OS to access servers running on the guest OS, but this can be done with other network configurations (see #Tap networking with QEMU).

QEMU's port forwarding

Note: QEMU's port forwarding is IPv4-only. IPv6 port forwarding is not implemented and the last patches were proposed in 2018.[1]

QEMU can forward ports from the host to the guest to enable e.g. connecting from the host to an SSH server running on the guest.

For example, to bind port 60022 on the host with port 22 (SSH) on the guest, start QEMU with a command like:

$ qemu-system-x86_64 disk_image -nic user,hostfwd=tcp::60022-:22

Make sure sshd is running on the guest and connect with:

$ ssh guest-user@127.0.0.1 -p 60022

You can use SSHFS to mount the guest's file system at the host for shared read and write access.

To forward several ports, you just repeat the hostfwd in the -nic argument, e.g. for VNC's port:

$ qemu-system-x86_64 disk_image -nic user,hostfwd=tcp::60022-:22,hostfwd=tcp::5900-:5900

QEMU's built-in SMB server

QEMU's documentation says it has a "built-in" SMB server, but actually it just starts up Samba on the host with an automatically generated smb.conf file located in /tmp/qemu-smb.random_string and makes it accessible to the guest at a different IP address (10.0.2.4 by default). This only works for user networking, and is useful when you do not want to start the normal Samba service on the host, which the guest can also access if you have set up shares on it.

Only a single directory can be set as shared with the option smb=, but adding more directories (even while the virtual machine is running) could be as easy as creating symbolic links in the shared directory if QEMU configured SMB to follow symbolic links. It does not do so, but the configuration of the running SMB server can be changed as described below.

Samba must be installed on the host. To enable this feature, start QEMU with a command like:

$ qemu-system-x86_64 -nic user,id=nic0,smb=shared_dir_path disk_image

where shared_dir_path is a directory that you want to share between the guest and host.

Then, in the guest, you will be able to access the shared directory on the host 10.0.2.4 with the share name "qemu". For example, in Windows Explorer you would go to \\10.0.2.4\qemu.
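
On a Linux guest, the same share can be mounted with, for example (a sketch assuming cifs-utils is installed in the guest):

# mount -t cifs //10.0.2.4/qemu /mnt -o guest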

Note:
  • If you are using sharing options multiple times like -net user,smb=shared_dir_path1 -net user,smb=shared_dir_path2 or -net user,smb=shared_dir_path1,smb=shared_dir_path2 then it will share only the last defined one.
  • If you cannot access the shared folder and the guest system is Windows, check that the NetBIOS protocol is enabled[dead link 2023-05-06 ⓘ] and that a firewall does not block ports used by the NetBIOS protocol.
  • If you cannot access the shared folder and the guest system is Windows 10 Enterprise or Education or Windows Server 2016, enable guest access.
  • If you use #Tap networking with QEMU, use -device virtio-net,netdev=vmnic -netdev user,id=vmnic,smb=shared_dir_path to get SMB.

One way to share multiple directories and to add or remove them while the virtual machine is running, is to share an empty directory and create/remove symbolic links to the directories in the shared directory. For this to work, the configuration of the running SMB server can be changed with the following script, which also allows the execution of files on the guest that are not set executable on the host:

#!/bin/sh
# Find the smbd instance started by QEMU and extract its PID and the
# path of its generated configuration file (/tmp/qemu-smb.random_string).
eval $(ps h -C smbd -o pid,args | grep /tmp/qemu-smb | gawk '{print "pid="$1";conf="$6}')
# Append settings that enable following (wide) symbolic links and
# executing files that are not marked executable on the host.
echo "[global]
allow insecure wide links = yes
[qemu]
follow symlinks = yes
wide links = yes
acl allow execute always = yes" >> "$conf"
# in case the change is not detected automatically:
smbcontrol --configfile="$conf" "$pid" reload-config

This can be applied to the running server started by qemu only after the guest has connected to the network drive the first time. An alternative to this method is to add additional shares to the configuration file like so:

echo "[myshare]
path=another_path
read only=no
guest ok=yes
force user=username" >> $conf

This share will be available on the guest as \\10.0.2.4\myshare.

Using filesystem passthrough and VirtFS

See the QEMU documentation.

Host file sharing with virtiofsd

This article or section needs language, wiki syntax or style improvements. See Help:Style for reference.

virtiofsd is shipped with the QEMU package. Documentation is available online[dead link 2023-05-06 ⓘ] or in /usr/share/doc/qemu/qemu/tools/virtiofsd.html on the local file system with qemu-docs installed.

Add the user that runs qemu to the 'kvm' user group, because it needs to access the virtiofsd socket. You might have to log out for the change to take effect.

The factual accuracy of this article or section is disputed.

Reason: Running services as root is not secure. Also the process should be wrapped in a systemd service. (Discuss in Talk:QEMU)

Start virtiofsd as root:

# /usr/lib/virtiofsd --socket-path=/var/run/qemu-vm-001.sock --shared-dir /tmp/vm-001 --cache always

where

  • /var/run/qemu-vm-001.sock is a socket file,
  • /tmp/vm-001 is a shared directory between the host and the guest virtual machine.

The created socket file has root-only access permissions. Give the group kvm access to it with:

# chgrp kvm qemu-vm-001.sock; chmod g+rxw qemu-vm-001.sock

Add the following configuration options when starting the virtual machine:

-object memory-backend-memfd,id=mem,size=4G,share=on \
-numa node,memdev=mem \
-chardev socket,id=char0,path=/var/run/qemu-vm-001.sock \
-device vhost-user-fs-pci,chardev=char0,tag=myfs

where

This article or section needs expansion.

Reason: Explain the remaining options (or remove them if they are not necessary). (Discuss in Talk:QEMU)
  • size=4G must match the size specified with the -m 4G option,
  • /var/run/qemu-vm-001.sock points to the socket file created earlier,
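
On a Linux guest, the shared directory can then be mounted using the tag defined above (myfs):

# mount -t virtiofs myfs /mnt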

This article or section needs language, wiki syntax or style improvements. See Help:Style for reference.

Reason: The section should not be specific to Windows. (Discuss in Talk:QEMU)

Remember that the guest must be configured to enable sharing. For Windows, there are instructions. Once configured, Windows will have the Z: drive automatically mapped to the shared directory content.

Your Windows 10 guest system is properly configured if it has:

  • VirtioFSSService windows service,
  • WinFsp.Launcher windows service,
  • VirtIO FS Device driver under "System devices" in Windows "Device Manager".

If the above are installed and the Z: drive is still not listed, try repairing "Virtio-win-guest-tools" in Windows Add/Remove programs.

Mounting a partition of the guest on the host

It can be useful to mount a drive image under the host system, as a way to transfer files in and out of the guest. This should be done when the virtual machine is not running.

The procedure to mount the drive on the host depends on the type of qemu image, raw or qcow2. We detail thereafter the steps to mount a drive in the two formats in #Mounting a partition from a raw image and #Mounting a partition from a qcow2 image. For the full documentation see Wikibooks:QEMU/Images#Mounting an image on the host.

Warning: You must unmount the partitions before running the virtual machine again. Otherwise, data corruption is very likely to occur.

Mounting a partition from a raw image

It is possible to mount partitions that are inside a raw disk image file by setting them up as loopback devices.

With manually specifying byte offset

One way to mount a disk image partition is to mount the disk image at a certain offset using a command like the following:

# mount -o loop,offset=32256 disk_image mountpoint

The offset=32256 option is actually passed to the losetup program to set up a loopback device that starts at byte offset 32256 of the file and continues to the end. This loopback device is then mounted. You may also use the sizelimit option to specify the exact size of the partition, but this is usually unnecessary.

Depending on your disk image, the needed partition may not start at offset 32256. Run fdisk -l disk_image to see the partitions in the image. fdisk gives the start and end offsets in 512-byte sectors, so multiply by 512 to get the correct offset to pass to mount.
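
For example, if fdisk reports a partition starting at sector 2048, the byte offset is 2048 * 512 = 1048576:

# mount -o loop,offset=1048576 disk_image mountpoint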

With loop module autodetecting partitions

The Linux loop driver actually supports partitions in loopback devices, but it is disabled by default. To enable it, do the following:

  • Get rid of all your loopback devices (unmount all mounted images, etc.).
  • Unload the loop kernel module, and load it with the max_part=15 parameter set. Additionally, the maximum number of loop devices can be controlled with the max_loop parameter.
Tip: You can put an entry in /etc/modprobe.d to load the loop module with max_part=15 every time, or you can put loop.max_part=15 on the kernel command-line, depending on whether you have the loop.ko module built into your kernel or not.
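
A sketch of such a modprobe.d entry (the file name loop.conf is arbitrary):

# /etc/modprobe.d/loop.conf
options loop max_part=15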

Set up your image as a loopback device:

# losetup -f -P disk_image

Then, if the device created was /dev/loop0, additional devices /dev/loop0pX will have been automatically created, where X is the number of the partition. These partition loopback devices can be mounted directly. For example:

# mount /dev/loop0p1 mountpoint

To mount the disk image with udisksctl, see Udisks#Mount loop devices.

With kpartx

kpartx from the multipath-tools package can read a partition table on a device and create a new device for each partition. For example:

# kpartx -a disk_image

This will set up the loopback device and create the necessary partition device(s) in /dev/mapper/.
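
For example, if the image was attached to loop0, the first partition typically appears as /dev/mapper/loop0p1 (the exact name is an assumption; check the kpartx output):

# mount /dev/mapper/loop0p1 mountpoint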

Mounting a partition from a qcow2 image

We will use qemu-nbd, which lets us use the NBD (network block device) protocol to share the disk image.

First, we need the nbd module loaded:

# modprobe nbd max_part=16

Then, we can share the disk and create the device entries:

# qemu-nbd -c /dev/nbd0 /path/to/image.qcow2

Discover the partitions:

# partprobe /dev/nbd0

fdisk can be used to get information regarding the different partitions in nbd0:

# fdisk -l /dev/nbd0
Disk /dev/nbd0: 25.2 GiB, 27074281472 bytes, 52879456 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xa6a4d542

Device      Boot   Start      End  Sectors  Size Id Type
/dev/nbd0p1 *       2048  1026047  1024000  500M  7 HPFS/NTFS/exFAT
/dev/nbd0p2      1026048 52877311 51851264 24.7G  7 HPFS/NTFS/exFAT

Then mount any partition of the drive image, for example the partition 2:

# mount /dev/nbd0p2 mountpoint

After the usage, it is important to unmount the image and reverse previous steps, i.e. unmount the partition and disconnect the nbd device:

# umount mountpoint
# qemu-nbd -d /dev/nbd0

Using any real partition as the single primary partition of a hard disk image

Sometimes, you may wish to use one of your system partitions from within QEMU. Using a raw partition for a virtual machine will improve performance, as the read and write operations do not go through the file system layer on the physical host. Such a partition also provides a way to share data between the host and guest.

In Arch Linux, device files for raw partitions are, by default, owned by root and the disk group. If you would like to have a non-root user be able to read and write to a raw partition, you must either change the owner of the partition's device file to that user, add that user to the disk group, or use ACL for more fine-grained access control.

Warning:
  • Although it is possible, it is not recommended to allow virtual machines to alter critical data on the host system, such as the root partition.
  • You must not mount a file system on a partition read-write on both the host and the guest at the same time. Otherwise, data corruption will result.

After doing so, you can attach the partition to a QEMU virtual machine as a virtual disk.

However, things are a little more complicated if you want to have the entire virtual machine contained in a partition. In that case, there would be no disk image file to actually boot the virtual machine since you cannot install a boot loader to a partition that is itself formatted as a file system and not as a partitioned device with an MBR. Such a virtual machine can be booted either by: #Specifying kernel and initrd manually, #Simulating a virtual disk with MBR, #Using the device-mapper, #Using a linear RAID or #Using a Network Block Device.

Specifying kernel and initrd manually

QEMU supports loading Linux kernels and init ramdisks directly, thereby circumventing boot loaders such as GRUB. It then can be launched with the physical partition containing the root file system as the virtual disk, which will not appear to be partitioned. This is done by issuing a command similar to the following:

Note: In this example, it is the host's images that are being used, not the guest's. If you wish to use the guest's images, either mount /dev/sda3 read-only (to protect the file system from the host) and specify the /full/path/to/images or use some kexec hackery in the guest to reload the guest's kernel (extends boot time).
$ qemu-system-x86_64 -kernel /boot/vmlinuz-linux -initrd /boot/initramfs-linux.img -append root=/dev/sda /dev/sda3

In the above example, the physical partition being used for the guest's root file system is /dev/sda3 on the host, but it shows up as /dev/sda on the guest.

You may, of course, specify any kernel and initrd that you want, and not just the ones that come with Arch Linux.

When there are multiple kernel parameters to be passed to the -append option, they need to be quoted using single or double quotes. For example:

... -append 'root=/dev/sda1 console=ttyS0'

Simulating a virtual disk with MBR

A more complicated way to have a virtual machine use a physical partition, while keeping that partition formatted as a file system and not just having the guest partition the partition as if it were a disk, is to simulate an MBR for it so that it can boot using a boot loader such as GRUB.

For the following, suppose you have a plain, unmounted /dev/hdaN partition with some file system on it you wish to make part of a QEMU disk image. The trick is to dynamically prepend a master boot record (MBR) to the real partition you wish to embed in a QEMU raw disk image. More generally, the partition can be any part of a larger simulated disk, in particular a block device that simulates the original physical disk but only exposes /dev/hdaN to the virtual machine.

A virtual disk of this type can be represented by a VMDK file that contains references to (a copy of) the MBR and the partition, but QEMU does not support this VMDK format. For instance, a virtual disk created by

$ VBoxManage internalcommands createrawvmdk -filename /path/to/file.vmdk -rawdisk /dev/hda

will be rejected by QEMU with the error message

Unsupported image type 'partitionedDevice'

Note that VBoxManage creates two files, file.vmdk and file-pt.vmdk, the latter being a copy of the MBR, to which the text file file.vmdk points. Read operations outside the target partition or the MBR would give zeros, while written data would be discarded.

Using the device-mapper

A method that is similar to the use of a VMDK descriptor file uses the device-mapper to prepend a loop device attached to the MBR file to the target partition. In case we do not need our virtual disk to have the same size as the original, we first create a file to hold the MBR:

$ dd if=/dev/zero of=/path/to/mbr count=2048

Here, a 1 MiB (2048 * 512 bytes) file is created in accordance with partition alignment policies used by modern disk partitioning tools. For compatibility with older partitioning software, 63 sectors instead of 2048 might be required. The MBR only needs a single 512 bytes block, the additional free space can be used for a BIOS boot partition and, in the case of a hybrid partitioning scheme, for a GUID Partition Table. Then, we attach a loop device to the MBR file:

# losetup --show -f /path/to/mbr
/dev/loop0

In this example, the resulting device is /dev/loop0. The device mapper is now used to join the MBR and the partition:

# echo "0 2048 linear /dev/loop0 0
2048 `blockdev --getsz /dev/hdaN` linear /dev/hdaN 0" | dmsetup create qemu

The resulting /dev/mapper/qemu is what we will use as a QEMU raw disk image. Additional steps are required to create a partition table (see the section that describes the use of a linear RAID for an example) and boot loader code on the virtual disk (which will be stored in /path/to/mbr).

The following setup is an example where the position of /dev/hdaN on the virtual disk is to be the same as on the physical disk and the rest of the disk is hidden, except for the MBR, which is provided as a copy:

# dd if=/dev/hda count=1 of=/path/to/mbr
# loop=`losetup --show -f /path/to/mbr`
# start=`blockdev --report /dev/hdaN | tail -1 | awk '{print $5}'`
# size=`blockdev --getsz /dev/hdaN`
# disksize=`blockdev --getsz /dev/hda`
# echo "0 1 linear $loop 0
1 $((start-1)) zero
$start $size linear /dev/hdaN 0
$((start+size)) $((disksize-start-size)) zero" | dmsetup create qemu

The table provided as standard input to dmsetup has a similar format as the table in a VMDK descriptor file produced by VBoxManage and can alternatively be loaded from a file with dmsetup create qemu --table table_file. To the virtual machine, only /dev/hdaN is accessible, while the rest of the hard disk reads as zeros and discards written data, except for the first sector. We can print the table for /dev/mapper/qemu with dmsetup table qemu (use udevadm info -rq name /sys/dev/block/major:minor to translate major:minor to the corresponding /dev/blockdevice name). Use dmsetup remove qemu and losetup -d $loop to delete the created devices.

A situation where this example would be useful is an existing Windows XP installation in a multi-boot configuration and maybe a hybrid partitioning scheme (on the physical hardware, Windows XP could be the only operating system that uses the MBR partition table, while more modern operating systems installed on the same computer could use the GUID Partition Table). Windows XP supports hardware profiles, so that the same installation can be used with different hardware configurations alternately (in this case bare metal vs. virtual) with Windows needing to install drivers for newly detected hardware only once for every profile. Note that in this example the boot loader code in the copied MBR needs to be updated to directly load Windows XP from /dev/hdaN instead of trying to start the multi-boot capable boot loader (like GRUB) present in the original system. Alternatively, a copy of the boot partition containing the boot loader installation can be included in the virtual disk the same way as the MBR.

Using a linear RAID

This article or section is out of date.

Reason: Linear RAID (CONFIG_MD_LINEAR) has been deprecated since 2021 and was removed in kernel version 6.8. (Discuss in Talk:QEMU)

You can also do this using software RAID in linear mode (you need the linear.ko kernel driver) and a loopback device:

First, you create some small file to hold the MBR:

$ dd if=/dev/zero of=/path/to/mbr count=32

Here, a 16 KiB (32 * 512 bytes) file is created. It is important not to make it too small (even if the MBR only needs a single 512-byte block), since the smaller it is, the smaller the chunk size of the software RAID device will have to be, which could have an impact on performance. Then, you set up a loopback device to the MBR file:

# losetup -f /path/to/mbr

Let us assume the resulting device is /dev/loop0, because no other loopback devices were already in use. The next step is to create the "merged" MBR + /dev/hdaN disk image using software RAID:

# modprobe linear
# mdadm --build --verbose /dev/md0 --chunk=16 --level=linear --raid-devices=2 /dev/loop0 /dev/hdaN

The resulting /dev/md0 is what you will use as a QEMU raw disk image (do not forget to set the permissions so that the emulator can access it). The last (and somewhat tricky) step is to set the disk configuration (disk geometry and partition table) so that the primary partition start point in the MBR matches the one of /dev/hdaN inside /dev/md0 (an offset of exactly 16 * 512 = 16384 bytes in this example). Do this using fdisk on the host machine, not in the emulator: the default raw disk detection routine from QEMU often results in offsets that are not roundable to kibibytes (such as 31.5 KiB, as in the previous section) and that cannot be managed by the software RAID code. Hence, from the host:

# fdisk /dev/md0

Press X to enter the expert menu. Set the number of 's'ectors per track so that the size of one cylinder matches the size of your MBR file. For two heads and a sector size of 512, the number of sectors per track should be 16, so we get cylinders of size 2 x 16 x 512 = 16 KiB.

Now, press R to return to the main menu.

Press P and check that the cylinder size is now 16 KiB.

Now, create a single primary partition corresponding to /dev/hdaN. It should start at cylinder 2 and end at the end of the disk (note that the number of cylinders now differs from what it was when you entered fdisk).

Finally, 'w'rite the result to the file: you are done. You now have a partition you can mount directly from your host, as well as part of a QEMU disk image:

$ qemu-system-x86_64 -hdc /dev/md0 [...]

You can, of course, safely set any boot loader on this disk image using QEMU, provided the original /dev/hdaN partition contains the necessary tools.

Using a Network Block Device

With Network Block Device, Linux can use a remote server as one of its block devices. You may use nbd-server (from the nbd package) to create an MBR wrapper for QEMU.

Assuming you have already set up your MBR wrapper file like above, rename it to wrapper.img.0. Then create a symbolic link named wrapper.img.1 in the same directory, pointing to your partition. Then put the following script in the same directory:

#!/bin/sh
dir="$(realpath "$(dirname "$0")")"
cat >wrapper.conf <<EOF
[generic]
allowlist = true
listenaddr = 127.713705
port = 10809

[wrap]
exportname = $dir/wrapper.img
multifile = true
EOF

nbd-server \
    -C wrapper.conf \
    -p wrapper.pid \
    "$@"

The .0 and .1 suffixes are essential; the rest can be changed. After running the above script (which you may need to do as root to make sure nbd-server is able to access the partition), you can launch QEMU with:

qemu-system-x86_64 -drive file=nbd:127.713705:10809:exportname=wrap [...]

Using an entire physical disk device inside the virtual machine

This article or section needs language, wiki syntax or style improvements. See Help:Style for reference.

Reason: Duplicates #Using any real partition as the single primary partition of a hard disk image, libvirt instructions do not belong to this page. (Discuss in Talk:QEMU)

You may have a second disk with a different OS (like Windows) on it and may want to gain the ability to also boot it inside a virtual machine. Since the disk access is raw, the disk will perform quite well inside the virtual machine.

Windows virtual machine boot prerequisites

Be sure to install the virtio drivers inside the OS on that disk before trying to boot it in the virtual machine. For Windows 7, use version 0.1.173-4. Some individual drivers from newer virtio builds may be used on Windows 7, but you will have to install them manually via the Device Manager. For Windows 10, you can use the latest virtio build.

Set up the Windows disk interface drivers

You may get a 0x0000007B bluescreen when trying to boot the virtual machine. This means Windows cannot access the drive during the early boot stage because the disk interface driver it would need for that is not loaded or is set to start manually.

The solution is to enable these drivers to start at boot.

In HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services, find the folders aliide, amdide, atapi, cmdide, iastor (may not exist), iastorV, intelide, LSI_SAS, msahci, pciide and viaide. Inside each of those, set all their "start" values to 0 in order to enable them at boot. If your drive is a PCIe NVMe drive, also enable that driver (should it exist).

Find the unique path of your disk

Run ls /dev/disk/by-id/: there, pick out the ID of the drive you want to insert into the virtual machine, for example ata-TS512GMTS930L_C199211383. Now prepend /dev/disk/by-id/ to that ID to get /dev/disk/by-id/ata-TS512GMTS930L_C199211383. That is the unique path to that disk.
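
For example, the listing might contain entries like the following (hypothetical output, reusing the example ID above; your device IDs will differ):

$ ls /dev/disk/by-id/
ata-TS512GMTS930L_C199211383
ata-TS512GMTS930L_C199211383-part1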

Add the disk in QEMU CLI

In QEMU CLI that would probably be:

-drive file=/dev/disk/by-id/ata-TS512GMTS930L_C199211383,format=raw,media=disk

Just modify file= to be the unique path of your drive.

Add the disk in libvirt

In libvirt XML that translates to

$ virsh edit vmname
...
    <disk type="block" device="disk">
      <driver name="qemu" type="raw" cache="none" io="native"/>
      <source dev="/dev/disk/by-id/ata-TS512GMTS930L_C199211383"/>
      <target dev="sda" bus="sata"/>
      <address type="drive" controller="0" bus="0" target="0" unit="0"/>
    </disk>
...

Just modify "source dev" to be the unique path of your drive.

Add the disk in virt-manager

When creating a virtual machine, select "import existing drive" and just paste that unique path. If you already have the virtual machine, add a device, storage, then select or create custom storage. Now paste the unique path.

Networking

This article or section needs language, wiki syntax or style improvements. See Help:Style for reference.

Reason: Network topologies (sections #Host-only networking, #Internal networking and info spread out across other sections) should not be described alongside the various virtual interfaces implementations, such as #User-mode networking, #Tap networking with QEMU, #Networking with VDE2. (Discuss in Talk:QEMU)

The performance of virtual networking should be better with tap devices and bridges than with user-mode networking or vde because tap devices and bridges are implemented in-kernel.

In addition, networking performance can be improved by assigning virtual machines a virtio network device rather than the default emulation of an e1000 NIC. See #Installing virtio drivers for more information.

Link-level address caveat

By giving the -net nic argument to QEMU, it will, by default, assign a virtual machine a network interface with the link-level address 52:54:00:12:34:56. However, when using bridged networking with multiple virtual machines, it is essential that each virtual machine has a unique link-level (MAC) address on the virtual machine side of the tap device. Otherwise, the bridge will not work correctly, because it will receive packets from multiple sources that have the same link-level address. This problem occurs even if the tap devices themselves have unique link-level addresses because the source link-level address is not rewritten as packets pass through the tap device.

Make sure that each virtual machine has a unique link-level address, but it should always start with 52:54:. Use the following option, replacing X with an arbitrary hexadecimal digit:

$ qemu-system-x86_64 -net nic,macaddr=52:54:XX:XX:XX:XX -net vde disk_image

Generating unique link-level addresses can be done in several ways:

  • Manually specify a unique link-level address for each NIC. The benefit is that the DHCP server will assign the same IP address each time the virtual machine is run, but this is unusable for a large number of virtual machines.
  • Generate a random link-level address each time the virtual machine is run. The probability of collisions is practically zero, but the downside is that the DHCP server will assign a different IP address each time. You can use the following command in a script to generate a random link-level address in a macaddr variable:
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))
qemu-system-x86_64 -net nic,macaddr="$macaddr" -net vde disk_image
  • Use the following script qemu-mac-hasher.py to generate the link-level address from the virtual machine name using a hashing function. Given that the names of virtual machines are unique, this method combines the benefits of the aforementioned methods: it generates the same link-level address each time the script is run, yet it preserves the practically zero probability of collisions.
qemu-mac-hasher.py
#!/usr/bin/env python
# usage: qemu-mac-hasher.py <VMName>

import sys
import zlib

# Zero-pad the CRC32 to eight hex digits so short values do not break the format string below
crc = format(zlib.crc32(sys.argv[1].encode("utf-8")), "08x")
print("52:54:%s%s:%s%s:%s%s:%s%s" % tuple(crc))

In a script, you can use for example:

vm_name="VM Name"
qemu-system-x86_64 -name "$vm_name" -net nic,macaddr=$(qemu-mac-hasher.py "$vm_name") -net vde disk_image

User-mode networking

By default, without any -netdev arguments, QEMU will use user-mode networking with a built-in DHCP server. Your virtual machines will be assigned an IP address when they run their DHCP client, and they will be able to access the physical host's network through IP masquerading done by QEMU.

Note: ICMPv6 will not work, as support for it is not implemented: Slirp: external icmpv6 not supported yet. Pinging an IPv6 address will not work.

This default configuration allows your virtual machines to easily access the Internet, provided that the host is connected to it, but the virtual machines will not be directly visible on the external network, nor will virtual machines be able to talk to each other if you start up more than one concurrently.

QEMU's user-mode networking can offer more capabilities such as built-in TFTP or SMB servers, redirecting host ports to the guest (for example to allow SSH connections to the guest) or attaching guests to VLANs so that they can talk to each other. See the QEMU documentation on the -net user flag for more details.
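
For example, to make the guest's SSH port reachable from the host, a minimal sketch using the hostfwd suboption (host port 60022 is an arbitrary choice):

$ qemu-system-x86_64 -nic user,hostfwd=tcp::60022-:22 disk_image

While the guest is running, ssh -p 60022 localhost on the host will then reach the guest's port 22.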

However, user-mode networking has limitations in both utility and performance. More advanced network configurations require the use of tap devices or other methods.

Note: If the host system uses systemd-networkd, make sure to symlink the /etc/resolv.conf file as described in systemd-networkd#Required services and setup, otherwise the DNS lookup in the guest system will not work.
Tip:
  • To use the virtio driver with user-mode networking, the option is: -nic user,model=virtio-net-pci.
  • You can isolate user-mode networking from the host and the outside world by adding restrict=y, for example: -net user,restrict=y

Tap networking with QEMU

Tap devices are a Linux kernel feature that allows you to create virtual network interfaces that appear as real network interfaces. Packets sent to a tap interface are delivered to a userspace program, such as QEMU, that has bound itself to the interface.

QEMU can use tap networking for a virtual machine so that packets sent to the tap interface will be sent to the virtual machine and appear as coming from a network interface (usually an Ethernet interface) in the virtual machine. Conversely, everything that the virtual machine sends through its network interface will appear on the tap interface.

Tap devices are supported by the Linux bridge drivers, so it is possible to bridge together tap devices with each other and possibly with other host interfaces such as eth0. This is desirable if you want your virtual machines to be able to talk to each other, or if you want other machines on your LAN to be able to talk to the virtual machines.

Warning: If you bridge together a tap device and some host interface, such as eth0, your virtual machines will appear directly on the external network, which will expose them to possible attack. Depending on what resources your virtual machines have access to, you may need to take all the precautions you normally would take in securing a computer to secure your virtual machines. If the risk is too great, if the virtual machines have few resources, or if you set up multiple virtual machines, a better solution might be to use host-only networking and set up NAT. In this case you only need one firewall on the host instead of multiple firewalls for each guest.

As indicated in the user-mode networking section, tap devices offer higher networking performance than user-mode. If the guest OS supports the virtio network driver, networking performance will be increased considerably as well. Supposing the use of the tap0 device, that the virtio driver is used on the guest, and that no scripts are used to help start/stop networking, the relevant part of the qemu command would be:

-device virtio-net,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no

If you are already using a tap device with the virtio networking driver, you can boost the networking performance further by enabling vhost, like:

-device virtio-net,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no,vhost=on

See [2] for more information.

Host-only networking

If the bridge is given an IP address and traffic destined for it is allowed, but no real interface (e.g. eth0) is connected to the bridge, then the virtual machines will be able to talk to each other and the host system. However, they will not be able to talk to anything on the external network, unless you set up IP masquerading on the physical host. This configuration is called host-only networking by other virtualization software such as VirtualBox.

Tip:
  • If you want to set up IP masquerading, e.g. NAT for virtual machines, see the Internet sharing#Enable NAT page.
  • See Network bridge for information on creating a bridge.
  • You may want to have a DHCP server running on the bridge interface to service the virtual network. For example, to use the 172.20.0.1/16 subnet with dnsmasq as the DHCP server:
# ip addr add 172.20.0.1/16 dev br0
# ip link set br0 up
# dnsmasq --interface=br0 --bind-interfaces --dhcp-range=172.20.0.2,172.20.255.254

Internal networking

If you do not give the bridge an IP address and add an iptables rule to drop all traffic to the bridge in the INPUT chain, then the virtual machines will be able to talk to each other, but not to the physical host or to the outside network. This configuration is called internal networking by other virtualization software such as VirtualBox. You will need to either assign static IP addresses to the virtual machines or run a DHCP server on one of them.

By default, iptables would drop packets in the bridge network. You may need to use an iptables rule like the following to allow packets in a bridged network:

# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT

Bridged networking using qemu-bridge-helper

This method does not require a start-up script and readily accommodates multiple taps and multiple bridges. It uses the /usr/lib/qemu/qemu-bridge-helper binary, which allows creating tap devices on an existing bridge.

First, create a configuration file containing the names of all bridges to be used by QEMU:

/etc/qemu/bridge.conf
allow br0
allow br1
...

Make sure /etc/qemu/ has 755 permissions. Issues with QEMU and GNS3 may arise if this is not the case.

Now start the virtual machine; the most basic usage to run QEMU with the default network helper and default bridge br0:

$ qemu-system-x86_64 -nic bridge [...]

Using the bridge br1 and the virtio driver:

$ qemu-system-x86_64 -nic bridge,br=br1,model=virtio-net-pci [...]

Creating bridge manually

This article or section needs language, wiki syntax or style improvements. See Help:Style for reference.

Reason: This section needs serious cleanup and may contain out-of-date information. (Discuss in Talk:QEMU)
Tip: Since QEMU 1.1, the network bridge helper can set tun/tap up for you without the need for additional scripting. See #Bridged networking using qemu-bridge-helper.

The following describes how to bridge a virtual machine to a host interface such as eth0, which is probably the most common configuration. This configuration makes it appear that the virtual machine is located directly on the external network, on the same Ethernet segment as the physical host machine.

We will replace the normal Ethernet adapter with a bridge adapter and bind the normal Ethernet adapter to it.

  • Install bridge-utils, which provides brctl to manipulate bridges.
  • Enable IPv4 forwarding:
# sysctl -w net.ipv4.ip_forward=1

To make the change permanent, change net.ipv4.ip_forward = 0 to net.ipv4.ip_forward = 1 in /etc/sysctl.d/99-sysctl.conf.

  • Load the tun module and configure it to be loaded on boot. See Kernel modules for details.
  • Optionally create the bridge. See Bridge with netctl for details. Remember to name your bridge br0, or change the scripts below to your bridge's name. In the run-qemu script below, br0 is set up if it does not exist, as it is assumed that by default the host does not access the network via the bridge.
  • Create the script that QEMU uses to bring up the tap adapter with root:kvm 750 permissions:
/etc/qemu-ifup
#!/bin/sh

echo "Executing /etc/qemu-ifup"
echo "Bringing up $1 for bridged mode..."
sudo /usr/bin/ip link set $1 up promisc on
echo "Adding $1 to br0..."
sudo /usr/bin/brctl addif br0 $1
sleep 2
  • Create the script that QEMU uses to bring down the tap adapter in /etc/qemu-ifdown with root:kvm 750 permissions:
/etc/qemu-ifdown
#!/bin/sh

echo "Executing /etc/qemu-ifdown"
sudo /usr/bin/ip link set $1 down
sudo /usr/bin/brctl delif br0 $1
sudo /usr/bin/ip link delete dev $1
  • Use visudo to add the following to your sudoers file:
Cmnd_Alias      QEMU=/usr/bin/ip,/usr/bin/modprobe,/usr/bin/brctl
%kvm     ALL=NOPASSWD: QEMU
  • You launch QEMU using the following run-qemu script:
run-qemu
#!/bin/bash
: '
e.g. with img created via:
qemu-img create -f qcow2 example.img 90G
run-qemu -cdrom archlinux-x86_64.iso -boot order=d -drive file=example.img,format=qcow2 -m 4G -enable-kvm -cpu host -smp 4
run-qemu -drive file=example.img,format=qcow2 -m 4G -enable-kvm -cpu host -smp 4
'

nicbr0() {
    sudo ip link set dev $1 promisc on up &> /dev/null
    sudo ip addr flush dev $1 scope host &>/dev/null
    sudo ip addr flush dev $1 scope site &>/dev/null
    sudo ip addr flush dev $1 scope global &>/dev/null
    sudo ip link set dev $1 master br0 &> /dev/null
}
_nicbr0() {
    sudo ip link set $1 promisc off down &> /dev/null
    sudo ip link set dev $1 nomaster &> /dev/null
}

HASBR0="$( ip link show | grep br0 )"
if [ -z "$HASBR0" ] ; then
    ROUTER="192.168.1.1"
    SUBNET="192.168.1."
    NIC=$(ip link show | grep en | grep 'state UP' | head -n 1 | cut -d":" -f 2 | xargs)
    IPADDR=$(ip addr show | grep -o "inet $SUBNET\([0-9]*\)" | cut -d ' ' -f2)
    sudo ip link add name br0 type bridge &> /dev/null
    sudo ip link set dev br0 up
    sudo ip addr add $IPADDR/24 brd + dev br0
    sudo ip route del default &> /dev/null
    sudo ip route add default via $ROUTER dev br0 onlink
    nicbr0 $NIC
    sudo iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
fi

USERID=$(whoami)
precreation=$(ip tuntap list | cut -d: -f1 | sort)
sudo ip tuntap add user $USERID mode tap
postcreation=$(ip tuntap list | cut -d: -f1 | sort)
TAP=$(comm -13 <(echo "$precreation") <(echo "$postcreation"))
nicbr0 $TAP

printf -v MACADDR "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))
qemu-system-x86_64 -net nic,macaddr=$MACADDR,model=virtio \
    -net tap,ifname=$TAP,script=no,downscript=no,vhost=on \
    "$@"

_nicbr0 $TAP
sudo ip link set dev $TAP down &> /dev/null
sudo ip tuntap del $TAP mode tap

if [ -z "$HASBR0" ] ; then
    _nicbr0 $NIC
    sudo ip addr del dev br0 $IPADDR/24 &> /dev/null
    sudo ip link set dev br0 down
    sudo ip link delete br0 type bridge &> /dev/null
    sudo ip route del default &> /dev/null
    sudo ip link set dev $NIC up
    sudo ip route add default via $ROUTER dev $NIC onlink &> /dev/null
fi

Then, to launch a virtual machine, do something like this:

$ run-qemu -hda myvm.img -m 512

It is recommended for performance and security reasons to disable the firewall on the bridge:

/etc/sysctl.d/10-disable-firewall-on-bridge.conf
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0

In order to apply the parameters described above on boot, you will also need to load the br_netfilter module on boot. Otherwise, the parameters will not exist when sysctl tries to modify them.

/etc/modules-load.d/br_netfilter.conf
br_netfilter

Run sysctl -p /etc/sysctl.d/10-disable-firewall-on-bridge.conf to apply the changes immediately.

See the libvirt wiki and Fedora bug 512206. If sysctl reports errors during boot about non-existent files, make the br_netfilter module load at boot. See Kernel module#systemd.

Alternatively, you can configure iptables to allow all traffic to be forwarded across the bridge by adding a rule like this:

-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT

Network sharing between physical device and a Tap device through iptables

This article or section is a candidate for merging with Internet_sharing.

Notes: Duplication, not specific to QEMU. (Discuss in Talk:QEMU)

Bridged networking works fine with a wired interface (e.g. eth0) and is easy to set up. However, if the host gets connected to the network through a wireless device, then bridging is not possible.

See Network bridge#Wireless interface on a bridge as a reference.

One way to overcome that is to set up a tap device with a static IP, making Linux automatically handle the routing for it, and then forward traffic between the tap interface and the device connected to the network through iptables rules.

See Internet sharing as a reference.

There you can find what is needed to share the network between devices, including tap and tun ones. The following hints further at some of the host configurations required. As indicated in the reference above, the client needs to be configured with a static IP, using the IP assigned to the tap interface as the gateway. The caveat is that the DNS servers on the client might need to be manually edited if they change when moving from one host device connected to the network to another.

To allow IP forwarding on every boot, add the following lines to a sysctl configuration file inside /etc/sysctl.d:

net.ipv4.ip_forward = 1
net.ipv6.conf.default.forwarding = 1
net.ipv6.conf.all.forwarding = 1

The iptables rules can look like:

# Forwarding from/to outside
iptables -A FORWARD -i ${INT} -o ${EXT_0} -j ACCEPT
iptables -A FORWARD -i ${INT} -o ${EXT_1} -j ACCEPT
iptables -A FORWARD -i ${INT} -o ${EXT_2} -j ACCEPT
iptables -A FORWARD -i ${EXT_0} -o ${INT} -j ACCEPT
iptables -A FORWARD -i ${EXT_1} -o ${INT} -j ACCEPT
iptables -A FORWARD -i ${EXT_2} -o ${INT} -j ACCEPT
# NAT/Masquerade (network address translation)
iptables -t nat -A POSTROUTING -o ${EXT_0} -j MASQUERADE
iptables -t nat -A POSTROUTING -o ${EXT_1} -j MASQUERADE
iptables -t nat -A POSTROUTING -o ${EXT_2} -j MASQUERADE

The above supposes there are 3 devices connected to the network sharing traffic with one internal device, where for example:

INT=tap0
EXT_0=eth0
EXT_1=wlan0
EXT_2=tun0

This shows a forwarding setup that allows sharing wired and wireless connections with the tap device.

The forwarding rules shown are stateless and do pure forwarding. One could restrict specific traffic, putting a firewall in place to protect the guest and others. However, that would decrease networking performance, while a simple bridge does not include any of it.

Bonus: Whether the connection is wired or wireless, if one gets connected through a VPN to a remote site with a tun device, supposing the tun device opened for that connection is tun0 and the prior iptables rules are applied, then the remote connection is also shared with the guest. This avoids the need for the guest to open its own VPN connection. Again, as the guest networking needs to be static, if connecting the host remotely this way, one will most probably need to edit the DNS servers on the guest.

Networking with VDE2

This article or section needs language, wiki syntax or style improvements. See Help:Style for reference.

Reason: This section needs serious cleanup and may contain out-of-date information. (Discuss in Talk:QEMU)

What is VDE?

VDE stands for Virtual Distributed Ethernet. It started as an enhancement of uml_switch. It is a toolbox to manage virtual networks.

The idea is to create virtual switches, which are basically sockets, and to "plug" both physical and virtual machines into them. The configuration we show here is quite simple; however, VDE is much more powerful than this: it can plug virtual switches together, run them on different hosts and monitor the traffic in the switches. You are invited to read the documentation of the project.

The advantage of this method is that you do not have to add sudo privileges to your users. Regular users should not be allowed to run modprobe.

Basics

VDE support can be installed via the vde2 package.

In our configuration, we use tun/tap to create a virtual interface on the host. Load the tun module (see Kernel modules for details):

# modprobe tun

Now create the virtual switch:

# vde_switch -tap tap0 -daemon -mod 660 -group users

This line creates the switch, creates tap0 and "plugs" it in, and allows the users of the group users to use it.

The interface is plugged in but not configured yet. To configure it, run this command:

# ip addr add 192.168.100.254/24 dev tap0

Now, you just have to run KVM with these -net options as a normal user:

$ qemu-system-x86_64 -net nic -net vde -hda [...]

Configure networking for your guest as you would do in a physical network.

Tip: You might want to set up NAT on tap device to access the internet from the virtual machine. See Internet sharing#Enable NAT for more information.

Startup scripts

Example of main script starting VDE:

/etc/systemd/scripts/qemu-network-env
#!/bin/sh
# QEMU/VDE network environment preparation script

# The IP configuration for the tap device that will be used for
# the virtual machine network:

TAP_DEV=tap0
TAP_IP=192.168.100.254
TAP_MASK=24
TAP_NETWORK=192.168.100.0

# Host interface
NIC=eth0

case "$1" in
  start)
        echo -n "Starting VDE network for QEMU: "

        # If you want tun kernel module to be loaded by script uncomment here
	#modprobe tun 2>/dev/null
	## Wait for the module to be loaded
 	#while ! lsmod | grep -q "^tun"; do echo "Waiting for tun device"; sleep 1; done

        # Start tap switch
        vde_switch -tap "$TAP_DEV" -daemon -mod 660 -group users

        # Bring tap interface up
        ip address add "$TAP_IP"/"$TAP_MASK" dev "$TAP_DEV"
        ip link set "$TAP_DEV" up

        # Start IP Forwarding
        echo "1" > /proc/sys/net/ipv4/ip_forward
        iptables -t nat -A POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE
        ;;
  stop)
        echo -n "Stopping VDE network for QEMU: "
        # Delete the NAT rules
        iptables -t nat -D POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE

        # Bring tap interface down
        ip link set "$TAP_DEV" down

        # Kill VDE switch
        pgrep vde_switch | xargs kill -TERM
        ;;
  restart|reload)
        $0 stop
        sleep 1
        $0 start
        ;;
  *)
        echo "Usage: $0 {start|stop|restart|reload}"
        exit 1
esac
exit 0

Example of systemd service using the above script:

/etc/systemd/system/qemu-network-env.service
[Unit]
Description=Manage VDE Switch

[Service]
Type=oneshot
ExecStart=/etc/systemd/scripts/qemu-network-env start
ExecStop=/etc/systemd/scripts/qemu-network-env stop
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

Change the permissions for qemu-network-env so that it is executable.

You can start qemu-network-env.service as usual.

Alternative method

If the above method does not work or you do not want to mess with kernel configs, TUN, dnsmasq, and iptables, you can do the following for the same result.

# vde_switch -daemon -mod 660 -group users
# slirpvde --dhcp --daemon

Then, to start the virtual machine with a connection to the network of the host:

$ qemu-system-x86_64 -net nic,macaddr=52:54:00:00:EE:03 -net vde disk_image

VDE2 Bridge

Based on quickhowto: qemu networking using vde, tun/tap, and bridge graphic. Any virtual machine connected to vde is externally exposed. For example, each virtual machine can receive DHCP configuration directly from your ADSL router.

Basics

Remember that you need the tun module and the bridge-utils package.

Create the vde2/tap device:

# vde_switch -tap tap0 -daemon -mod 660 -group users
# ip link set tap0 up

Create bridge:

# brctl addbr br0

Add devices:

# brctl addif br0 eth0
# brctl addif br0 tap0

And configure bridge interface:

# dhcpcd br0

Startup scripts

All devices must be set up, and only the bridge needs an IP address. For physical devices on the bridge (e.g. eth0), this can be done with netctl using a custom Ethernet profile:

/etc/netctl/ethernet-noip
Description='A more versatile static Ethernet connection'
Interface=eth0
Connection=ethernet
IP=no

The following custom systemd service can be used to create and activate a VDE2 tap interface for users in the users user group.

/etc/systemd/system/vde2@.service
[Unit]
Description=Network Connectivity for %i
Wants=network.target
Before=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/vde_switch -tap %i -daemon -mod 660 -group users
ExecStart=/usr/bin/ip link set dev %i up
ExecStop=/usr/bin/ip addr flush dev %i
ExecStop=/usr/bin/ip link set dev %i down

[Install]
WantedBy=multi-user.target

And finally, you can create the bridge interface with netctl.

Shorthand configuration

If you are using QEMU with various networking options a lot, you have probably created a lot of -netdev and -device argument pairs, which gets quite repetitive. You can instead use the -nic argument to combine -netdev and -device together, so that, for example, these arguments:

-netdev tap,id=network0,ifname=tap0,script=no,downscript=no,vhost=on -device virtio-net-pci,netdev=network0

become:

-nic tap,script=no,downscript=no,vhost=on,model=virtio-net-pci

Notice the lack of network IDs, and that the device was created with model=. The first half of the -nic parameters are -netdev parameters, whereas the second half (after model=) relate to the device. The same parameters (for example, smb=) are used. To completely disable networking, use -nic none.

See QEMU networking documentation for more information on parameters you can use.

Graphic card

QEMU can emulate a standard graphics card in text mode using the -display curses command line option. This allows you to type text and see text output directly inside a text terminal. Alternatively, -nographic serves a similar purpose.
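
For example, to run an existing disk image entirely inside the current terminal:

$ qemu-system-x86_64 -display curses disk_image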

QEMU can emulate several types of VGA card. The card type is passed in the -vga type command line option and can be std, qxl, vmware, virtio, cirrus or none.

std

With -vga std you can get a resolution of up to 2560 x 1600 pixels without requiring guest drivers. This is the default since QEMU 2.2.

qxl

QXL is a paravirtual graphics driver with 2D support. To use it, pass the -vga qxl option and install drivers in the guest. You may want to use #SPICE for improved graphical performance when using QXL.

On Linux guests, the qxl and bochs_drm kernel modules must be loaded in order to get decent performance.

The default VGA memory size for QXL devices is 16 MiB, which is sufficient to drive resolutions up to approximately QHD (2560x1440). To enable higher resolutions, increase vgamem_mb.
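
As a sketch, this can be done by replacing the -vga qxl shortcut with an explicit device whose vgamem_mb property is raised (32 here is an arbitrary value in MiB):

-device qxl-vga,vgamem_mb=32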

vmware

Although it is a bit buggy, it performs better than std and cirrus. Install the VMware drivers xf86-video-vmware and xf86-input-vmmouse for Arch Linux guests.

virtio

virtio-vga / virtio-gpu is a paravirtual 3D graphics driver based on virgl. It's mature, currently supporting only Linux guests with mesa compiled with the option gallium-drivers=virgl.

To enable 3D acceleration on the guest system, select this vga with -device virtio-vga-gl and enable the OpenGL context in the display device with -display sdl,gl=on or -display gtk,gl=on for the SDL and GTK display outputs respectively. Successful configuration can be confirmed by looking at the kernel log in the guest:

# dmesg | grep drm 
[drm] pci: virtio-vga detected
[drm] virgl 3d acceleration enabled

cirrus

The cirrus graphical adapter was the default before 2.2. It should not be used on modern systems.

none

This is like a PC that has no VGA card at all. You would not even be able to access it with the -vnc option. Also, this is different from the -nographic option which lets QEMU emulate a VGA card, but disables the SDL display.

SPICE

The SPICE project aims to provide a complete open source solution for remote access to virtual machines in a seamless way.

Enabling SPICE support on the host

The following is an example of booting with SPICE as the remote desktop protocol, including the support for copy and paste from host:

$ qemu-system-x86_64 -vga qxl -device virtio-serial-pci -spice port=5930,disable-ticketing=on -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent

The parameters have the following meaning:

  1. -device virtio-serial-pci adds a virtio-serial device
  2. -spice port=5930,disable-ticketing=on sets TCP port 5930 for listening to spice channels and allows clients to connect without authentication
    Tip: Using Unix sockets instead of TCP ports does not involve using the network stack on the host system: packets are not encapsulated and decapsulated for the network and the related protocols, and the sockets are identified solely by inodes on the hard drive. It is therefore considered better for performance. Use -spice unix=on,addr=/tmp/vm_spice.socket,disable-ticketing=on instead.
  3. -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 opens a port for spice vdagent in the virtio-serial device,
  4. -chardev spicevmc,id=spicechannel0,name=vdagent adds a spicevmc chardev for that port. It is important that the chardev= option of the virtserialport device matches the id= option given to the chardev option (spicechannel0 in this example). It is also important that the port name is com.redhat.spice.0, because that is the namespace where vdagent looks for it in the guest. And finally, specify name=vdagent so that spice knows what this channel is for.

Connecting to the guest with a SPICE client

A SPICE client is necessary to connect to the guest. In Arch, the following clients are available:

  • virt-viewer — SPICE client recommended by the protocol developers, a subset of the virt-manager project.
https://virt-manager.org/ || virt-viewer
  • spice-gtk — SPICE GTK client, a subset of the SPICE project. Embedded into other applications as a widget.
https://www.spice-space.org/ || spice-gtk

For clients that run on smartphones or other platforms, refer to the Other clients section in spice-space download.

Manually running a SPICE client

One way of connecting to a guest listening on Unix socket /tmp/vm_spice.socket is to manually run the SPICE client using $ remote-viewer spice+unix:///tmp/vm_spice.socket or $ spicy --uri="spice+unix:///tmp/vm_spice.socket", depending on the desired client. Since QEMU in SPICE mode acts similarly to a remote desktop server, it may be more convenient to run QEMU in daemon mode with the -daemonize parameter.

Tip: To connect to the guest through SSH tunneling, the following type of command can be used:
$ ssh -fL 5999:localhost:5930 my.domain.org sleep 10; spicy -h 127.0.0.1 -p 5999

This example connects spicy to the local port 5999 which is forwarded through SSH to the guest's SPICE server located at the address my.domain.org, port 5930. Note the -f option that requests ssh to execute the command sleep 10 in the background. This way, the ssh session runs while the client is active and auto-closes once the client ends.

Running a SPICE client with QEMU

QEMU can automatically start a SPICE client with an appropriate socket, if the display is set to SPICE with the -display spice-app parameter. This will use the system's default SPICE client as the viewer, determined by your mimeapps.list files.
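
For example, assuming a QXL-enabled guest as configured above:

$ qemu-system-x86_64 -vga qxl -display spice-app [...]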

Enabling SPICE support on the guest

For Arch Linux guests, for improved support for multiple monitors or clipboard sharing, the following packages should be installed:

  • spice-vdagent: Spice agent xorg client that enables copy and paste between client and X-session and more. (Refer to this issue, until fixed, for workarounds to get this to work on non-GNOME desktops.)
  • xf86-video-qxl: Xorg X11 qxl video driver
  • x-resizeAUR: Desktop environments other than GNOME do not react automatically when the SPICE client window is resized. This package uses a udev rule and xrandr to implement auto-resizing for all X11-based desktop environments and window managers.

For guests under other operating systems, refer to the Guest section in spice-space download.

Password authentication with SPICE

If you want to enable password authentication with SPICE, you need to remove disable-ticketing from the -spice argument and instead add password=yourpassword. For example:

$ qemu-system-x86_64 -vga qxl -spice port=5900,password=yourpassword -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent

Your SPICE client should now ask for the password to be able to connect to the SPICE server.

TLS encrypted communication with SPICE

You can also configure TLS encryption for communicating with the SPICE server. First, you need to have a directory which contains the following files (the names must be exactly as indicated):

  • ca-cert.pem: the CA master certificate.
  • server-cert.pem: the server certificate signed with ca-cert.pem.
  • server-key.pem: the server private key.

An example of generation of self-signed certificates with your own generated CA for your server is shown in the Spice User Manual.

Afterwards, you can run QEMU with SPICE as explained above but using the following -spice argument: -spice tls-port=5901,password=yourpassword,x509-dir=/path/to/pki_certs, where /path/to/pki_certs is the directory path that contains the three needed files shown earlier.

It is now possible to connect to the server using virt-viewer:

$ remote-viewer spice://hostname?tls-port=5901 --spice-ca-file=/path/to/ca-cert.pem --spice-host-subject="C=XX,L=city,O=organization,CN=hostname" --spice-secure-channels=all

Keep in mind that the --spice-host-subject parameter needs to be set according to your server-cert.pem subject. You also need to copy ca-cert.pem to every client to verify the server certificate.

Tip: You can get the subject line of the server certificate in the correct format for --spice-host-subject (with entries separated by commas) using the following command:
$ openssl x509 -noout -subject -in server-cert.pem | cut -d' ' -f2- | sed 's/\///' | sed 's/\//,/g'

The equivalent spice-gtk command is:

$ spicy -h hostname -s 5901 --spice-ca-file=ca-cert.pem --spice-host-subject="C=XX,L=city,O=organization,CN=hostname" --spice-secure-channels=all

VNC

One can add the -vnc :X option to have QEMU redirect the VGA display to a VNC session. Substitute X with the display number (0 will then listen on 5900, 1 on 5901, ...).

$ qemu-system-x86_64 -vnc :0

An example is also provided in the #Starting QEMU virtual machines on boot section.

Warning: The default VNC server setup does not use any form of authentication. Any user can connect from any host.

Basic password authentication

An access password can be set up easily by using the password option. The password must be indicated in the QEMU monitor, and connection is only possible once the password is provided.

$ qemu-system-x86_64 -vnc :0,password -monitor stdio

In the QEMU monitor, the password is set using the command change vnc password and then indicating the password.

The following command line directly runs vnc with a password:

$ printf "change vnc password\n%s\n" MYPASSWORD | qemu-system-x86_64 -vnc :0,password -monitor stdio
Note: The password is limited to 8 characters and can be guessed through a brute force attack. More elaborate protection is strongly recommended on a public network.

Audio

Creating an audio backend

The -audiodev flag sets the audio backend driver on the host and its options.

To list available audio backend drivers:

$ qemu-system-x86_64 -audiodev help

Their optional settings are detailed in the qemu(1) man page.

At the bare minimum, one needs to choose an audio backend and set an id; for PulseAudio, for example:

-audiodev pa,id=snd0

Using the audio backend

Intel HD Audio

For Intel HD Audio emulation, add both controller and codec devices. To list the available Intel HDA Audio devices:

$ qemu-system-x86_64 -device help | grep hda

Add the audio controller:

-device ich9-intel-hda

Also, add the audio codec and map it to a host audio backend id:

-device hda-output,audiodev=snd0
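
Putting the pieces together, a minimal sketch combining the PulseAudio backend from #Creating an audio backend with the Intel HDA controller and codec:

$ qemu-system-x86_64 [...] -audiodev pa,id=snd0 -device ich9-intel-hda -device hda-output,audiodev=snd0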

Intel 82801AA AC97

For AC97 emulation just add the audio card device and map it to a host audio backend id:

-device AC97,audiodev=snd0
Note:
  • If the audiodev backend is not provided, QEMU looks it up and adds it automatically; this only works for a single audiodev. For example -device intel-hda -device hda-duplex will emulate intel-hda on the guest using the default audiodev backend.
  • Emulated video graphics card drivers for the guest machine may also cause problems with sound quality. Test them one by one to make it work. You can list possible options with qemu-system-x86_64 -h | grep vga.

VirtIO sound

VirtIO sound is also available since QEMU 8.2.0. The usage is:

-device virtio-sound-pci,audiodev=my_audiodev -audiodev alsa,id=my_audiodev

More information can be found in QEMU documentation.

Installing virtio drivers

QEMU offers guests the ability to use paravirtualized block and network devices using the virtio drivers, which provide better performance and lower overhead.

  • A virtio block device requires the option -drive for passing a disk image, with parameter if=virtio:
$ qemu-system-x86_64 -drive file=disk_image,if=virtio
  • Almost the same goes for the network:
$ qemu-system-x86_64 -nic user,model=virtio-net-pci
Note: This will only work if the guest machine has drivers for virtio devices. Linux does, and the required drivers are included in Arch Linux, but there is no guarantee that virtio devices will work with other operating systems.

Preparing an Arch Linux guest

To use virtio devices after an Arch Linux guest has been installed, the following modules must be loaded in the guest: virtio, virtio_pci, virtio_blk, virtio_net, and virtio_ring. For 32-bit guests, the specific "virtio" module is not necessary.

If you want to boot from a virtio disk, the initial ramdisk must contain the necessary modules. By default, this is handled by mkinitcpio's autodetect hook. Otherwise use the MODULES array in /etc/mkinitcpio.conf to include the necessary modules and rebuild the initial ramdisk.

/etc/mkinitcpio.conf
MODULES=(virtio virtio_blk virtio_pci virtio_net)

Virtio disks are recognized with the prefix vd (e.g. vda, vdb, etc.); therefore, changes must be made in at least /etc/fstab and /boot/grub/grub.cfg when booting from a virtio disk.
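
For example, a root filesystem entry that previously referred to /dev/sda1 would become something like the following (a hypothetical example; adjust the filesystem type and options to your layout):

/etc/fstab
/dev/vda1  /  ext4  defaults  0  1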

Tip: When referencing disks by UUID in both /etc/fstab and the boot loader, nothing has to be done.

Further information on paravirtualization with KVM can be found here.

You might also want to install qemu-guest-agent to implement support for QMP commands that will enhance the hypervisor management capabilities.

Preparing a Windows guest

Virtio drivers for Windows

Windows does not come with the virtio drivers. The latest and stable versions of the drivers are regularly built by Fedora, details on downloading the drivers are given on virtio-win on GitHub. In the following sections we will mostly use the stable ISO file provided here: virtio-win.iso. Alternatively, use virtio-winAUR.

Block device drivers

New Install of Windows

The drivers need to be loaded during installation; the procedure is to load the ISO image with the virtio drivers in a cdrom device along with the primary disk device and the Windows ISO install media:

$ qemu-system-x86_64 ... \
-drive file=disk_image,index=0,media=disk,if=virtio \
-drive file=windows.iso,index=2,media=cdrom \
-drive file=virtio-win.iso,index=3,media=cdrom \
...

During the installation, at some stage, the Windows installer will ask "Where do you want to install Windows?" and give a warning that no disks are found. Follow the example instructions below (based on Windows Server 2012 R2 with Update).

  • Select the option Load Drivers.
  • Uncheck the box for Hide drivers that are not compatible with this computer's hardware.
  • Click the browse button and open the CDROM for the virtio iso, usually named "virtio-win-XX".
  • Now browse to E:\viostor\[your-os]\amd64, select it, and confirm.

You should now see your virtio disk(s) listed here, ready to be selected, formatted and installed to.

Change existing Windows virtual machine to use virtio

Modifying an existing Windows guest for booting from a virtio disk requires that the virtio driver is loaded by the guest at boot time. We will therefore need to teach Windows to load the virtio driver at boot time before being able to boot a disk image in virtio mode.

To achieve that, first create a new disk image that will be attached in virtio mode and trigger the search for the driver:

$ qemu-img create -f qcow2 dummy.qcow2 1G

Run the original Windows guest with the boot disk still in IDE mode, the fake disk in virtio mode and the driver ISO image.

$ qemu-system-x86_64 -m 4G -drive file=disk_image,if=ide -drive file=dummy.qcow2,if=virtio -cdrom virtio-win.iso

Windows will detect the fake disk and look for a suitable driver. If it fails, go to Device Manager, locate the SCSI drive with an exclamation mark icon (should be open), click Update driver and select the virtual CD-ROM. Do not navigate to the driver folder within the CD-ROM, simply select the CD-ROM drive and Windows will find the appropriate driver automatically (tested for Windows 7 SP1).

Request Windows to boot in safe mode the next time it starts up. This can be done using the msconfig.exe tool in Windows. In safe mode, all the drivers will be loaded at boot time, including the new virtio driver. Once Windows knows that the virtio driver is required at boot, it will memorize it for future boots.

Once instructed to boot in safe mode, you can turn off the virtual machine and launch it again, now with the boot disk attached in virtio mode:

$ qemu-system-x86_64 -m 4G -drive file=disk_image,if=virtio

You should boot in safe mode with the virtio driver loaded; you can now return to msconfig.exe, disable safe mode boot and restart Windows.

Note: If you encounter the blue screen of death using the if=virtio parameter, it probably means the virtio disk driver is not installed or not loaded at boot time; reboot in safe mode and check your driver configuration.

Network drivers

Installing virtio network drivers is a bit easier: simply add the -nic argument.

$ qemu-system-x86_64 -m 4G -drive file=windows_disk_image,if=virtio -nic user,model=virtio-net-pci -cdrom virtio-win.iso

Windows will detect the network adapter and try to find a driver for it. If it fails, go to the Device Manager, locate the network adapter with an exclamation mark icon (should be open), click Update driver and select the virtual CD-ROM. Do not forget to select the checkbox which says to search for directories recursively.

Balloon driver

If you want to track your guest's memory state (for example via the virsh command dommemstat) or change the guest's memory size at runtime (you still will not be able to change the memory size, but you can limit memory usage by inflating the balloon driver), you will need to install the guest balloon driver.

For this, you will need to go to the Device Manager, locate PCI standard RAM Controller in System devices (or an unrecognized PCI controller in Other devices) and choose Update driver. In the window that opens, choose Browse my computer... and select the CD-ROM (do not forget the Include subdirectories checkbox). Reboot after installation. This will install the driver and you will be able to inflate the balloon (for example via the hmp command balloon memory_size, which will cause the balloon to take as much memory as possible in order to shrink the guest's available memory size to memory_size). However, you still will not be able to track the guest memory state. In order to do this, you will need to install the Balloon service properly. For that, open a command line as administrator, go to the CD-ROM, then the Balloon directory and deeper, depending on your system and architecture. Once you are in the amd64 (x86) directory, run blnsrv.exe -i, which will do the installation. After that, the virsh command dommemstat should output all supported values.

Preparing a FreeBSD guest

Install the emulators/virtio-kmod port if you are using FreeBSD 8.3 or later, up until 10.0-CURRENT, where they are included in the kernel. After installation, add the following to your /boot/loader.conf file:

virtio_load="YES"
virtio_pci_load="YES"
virtio_blk_load="YES"
if_vtnet_load="YES"
virtio_balloon_load="YES"

Then modify your /etc/fstab by doing the following:

# sed -i .bak "s/ada/vtbd/g" /etc/fstab

And verify that /etc/fstab is consistent. If anything goes wrong, just boot into a rescue CD and copy /etc/fstab.bak back to /etc/fstab.

QEMU monitor

While QEMU is running, a monitor console is provided in order to offer several ways to interact with the running virtual machine. The QEMU monitor offers interesting capabilities such as obtaining information about the current virtual machine, hotplugging devices, creating snapshots of the current state of the virtual machine, etc. To see the list of all commands, run help or ? in the QEMU monitor console or review the relevant section of the official QEMU documentation.

Accessing the monitor console

Graphical view

When using the std default graphics option, one can access the QEMU monitor by pressing Ctrl+Alt+2 or by clicking View > compatmonitor0 in the QEMU window. To return to the virtual machine graphical view either press Ctrl+Alt+1 or click View > VGA.

However, the standard method of accessing the monitor is not always convenient and does not work in all graphic outputs QEMU supports.

Telnet

To enable telnet, run QEMU with the -monitor telnet:127.0.0.1:port,server,nowait parameter. When the virtual machine is started you will be able to access the monitor via telnet:

$ telnet 127.0.0.1 port
Note: If 127.0.0.1 is specified as the IP to listen on, it will only be possible to connect to the monitor from the same host QEMU is running on. If connecting from remote hosts is desired, QEMU must be told to listen on 0.0.0.0 as follows: -monitor telnet:0.0.0.0:port,server,nowait. Keep in mind that it is recommended to have a firewall configured in this case, or to make sure your local network is completely trustworthy, since this connection is completely unauthenticated and unencrypted.

UNIX socket

Run QEMU with the -monitor unix:socketfile,server,nowait parameter. Then you can connect with either socat, nmap or openbsd-netcat.

For example, if QEMU is run via:

$ qemu-system-x86_64 -monitor unix:/tmp/monitor.sock,server,nowait [...]

It is possible to connect to the monitor with:

$ socat - UNIX-CONNECT:/tmp/monitor.sock

Or with:

$ nc -U /tmp/monitor.sock

Alternatively with nmap:

$ ncat -U /tmp/monitor.sock

TCP

You can expose the monitor over TCP with the argument -monitor tcp:127.0.0.1:port,server,nowait. Then connect with netcat, either openbsd-netcat or gnu-netcat by running:

$ nc 127.0.0.1 port
Note: In order to be able to connect to the TCP socket from devices other than the host QEMU is being run on, you need to listen on 0.0.0.0, as explained in the telnet case. The same security warnings apply in this case as well.

Standard I/O

It is possible to access the monitor automatically from the same terminal QEMU is run from by launching it with the argument -monitor stdio.

Sending keyboard presses to the virtual machine using the monitor console

Some combinations of keys may be difficult to perform on virtual machines due to the host intercepting them instead in some configurations (a notable example is the Ctrl+Alt+F* key combinations, which change the active tty). To avoid this problem, the problematic combination of keys may be sent via the monitor console instead. Switch to the monitor and use the sendkey command to forward the necessary keypresses to the virtual machine. For example:

(qemu) sendkey ctrl-alt-f2

Creating and managing snapshots via the monitor console

Note: This feature will only work when the virtual machine disk image is in qcow2 format. It will not work with raw images.

It is sometimes desirable to save the current state of a virtual machine and to be able to revert the state of the virtual machine to that of a previously saved snapshot at any time. The QEMU monitor console provides the user with the necessary utilities to create snapshots, manage them, and revert the machine state to a saved snapshot.

  • Use savevm name in order to create a snapshot with the tag name.
  • Use loadvm name to revert the virtual machine to the state of the snapshot name.
  • Use delvm name to delete the snapshot tagged as name.
  • Use info snapshots to see a list of saved snapshots. Snapshots are identified by both an auto-incremented ID number and a text tag (set by the user on snapshot creation).
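
For example, a short monitor session, assuming a hypothetical snapshot tag named clean_install:

(qemu) savevm clean_install
(qemu) info snapshots
(qemu) loadvm clean_install
(qemu) delvm clean_install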

Running the virtual machine in immutable mode

It is possible to run a virtual machine in a frozen state so that all changes will be discarded when the virtual machine is powered off, just by running QEMU with the -snapshot parameter. When the disk image is written to by the guest, changes will be saved in a temporary file in /tmp and will be discarded when QEMU halts.
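
For example:

$ qemu-system-x86_64 -snapshot -hda disk_image [...]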

However, if a machine is running in frozen mode it is still possible to save the changes to the disk image if it is afterwards desired by using the monitor console and running the following command:

(qemu) commit all

If snapshots are created when running in frozen mode, they will be discarded as soon as QEMU exits, unless changes are explicitly committed to disk as well.

Pause and power options via the monitor console

Some operations of a physical machine can be emulated by QEMU using some monitor commands:

  • system_powerdown will send an ACPI shutdown request to the virtual machine. This effect is similar to the power button in a physical machine.
  • system_reset will reset the virtual machine similarly to a reset button in a physical machine. This operation can cause data loss and file system corruption since the virtual machine is not cleanly restarted.
  • stop will pause the virtual machine.
  • cont will resume a virtual machine previously paused.

Taking screenshots of the virtual machine

Screenshots of the virtual machine graphic display can be obtained in the PPM format by running the following command in the monitor console:

(qemu) screendump file.ppm

QEMU machine protocol

The QEMU machine protocol (QMP) is a JSON-based protocol which allows applications to control a QEMU instance. Similarly to the #QEMU monitor, it offers ways to interact with a running machine, and the JSON protocol makes it possible to do so programmatically. The description of all the QMP commands can be found in qmp-commands.

Start QMP

The usual way to control the guest using the QMP protocol is to open a TCP socket when launching the machine using the -qmp option. Here, for example, it uses TCP port 4444:

$ qemu-system-x86_64 [...] -qmp tcp:localhost:4444,server,nowait

Then one way to communicate with the QMP agent is to use netcat:

$ nc localhost 4444
{"QMP": {"version": {"qemu": {"micro": 0, "minor": 1, "major": 3}, "package": ""}, "capabilities": []} } 

At this stage, the only command that is recognized is qmp_capabilities, which makes QMP enter command mode. Type:

{"execute": "qmp_capabilities"}

Now QMP is ready to receive commands. To retrieve the list of recognized commands, use:

{"execute": "query-commands"}

Live merging of child image into parent image

It is possible to merge a running snapshot into its parent by issuing a block-commit command. In its simplest form the following line will commit the child into its parent:

{"execute": "block-commit", "arguments": {"device": "devicename"}}

Upon receiving this command, the handler looks for the base image, converts it from read-only to read-write mode, and then runs the commit job.

Once the block-commit operation has completed, the event BLOCK_JOB_READY will be emitted, signalling that the synchronization has finished. The job can then be gracefully completed by issuing the command block-job-complete:

{"execute": "block-job-complete", "arguments": {"device": "devicename"}}

Until such a command is issued, the commit operation remains active. After successful completion, the base image remains in read write mode and becomes the new active layer. On the other hand, the child image becomes invalid and it is the responsibility of the user to clean it up.

Tip: The list of devices and their names can be retrieved by executing the command query-block and parsing the results. The device name is in the device field, for example ide0-hd0 for the hard disk in this example:
{"execute": "query-block"}
{"return": [{"io-status": "ok", "device": "ide0-hd0", "locked": false, "removable": false, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"backing-image": {"virtual-size": 27074281472, "filename": "parent.qcow2", ... } 

Live creation of a new snapshot

To create a new snapshot out of a running image, run the command:

{"execute": "blockdev-snapshot-sync", "arguments": {"device": "devicename","snapshot-file": "new_snapshot_name.qcow2"}}

This creates an overlay file named new_snapshot_name.qcow2 which then becomes the new active layer.

Tips and tricks

Improve virtual machine performance

There are a number of techniques that you can use to improve the performance of the virtual machine. For example:

  • Apply #Enabling KVM for full virtualization.
  • Use the -cpu host option to make QEMU emulate the host's exact CPU rather than a more generic CPU.
  • Especially for Windows guests, enable Hyper-V enlightenments: -cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time. See the QEMU documentation for more information and flags.
  • Multiple cores can be assigned to the guest using the -smp cores=x,threads=y,sockets=1,maxcpus=z option. The threads parameter assigns SMT threads. Leaving a physical core for QEMU, the hypervisor and the host system to operate unimpeded is highly beneficial.
  • Make sure you have assigned the virtual machine enough memory. By default, QEMU only assigns 128 MiB of memory to each virtual machine. Use the -m option to assign more memory. For example, -m 1024 runs a virtual machine with 1024 MiB of memory.
  • If supported by drivers in the guest operating system, use virtio for network and/or block devices, see #Installing virtio drivers.
  • Use TAP devices instead of user-mode networking, see #Tap networking with QEMU.
  • If the guest OS is doing heavy writing to its disk, you may benefit from certain mount options on the host's file system. For example, you can mount an ext4 file system with the option barrier=0. You should read the documentation for any options that you change because sometimes performance-enhancing options for file systems come at the cost of data integrity.
  • If you have a raw disk or partition, you may want to disable the cache:
    $ qemu-system-x86_64 -drive file=/dev/disk,if=virtio,cache=none
  • Use the native Linux AIO:
    $ qemu-system-x86_64 -drive file=disk_image,if=virtio,aio=native,cache.direct=on
  • If you are running multiple virtual machines concurrently that all have the same operating system installed, you can save memory by enabling kernel same-page merging. See #Enabling KSM.
  • In some cases, memory can be reclaimed from running virtual machines by running a memory ballooning driver in the guest operating system and launching QEMU using -device virtio-balloon.
  • It is possible to use an emulation layer for an ICH-9 AHCI controller (although it may be unstable). The AHCI emulation supports NCQ, so multiple read or write requests can be outstanding at the same time:
    $ qemu-system-x86_64 -drive id=disk,file=disk_image,if=none -device ich9-ahci,id=ahci -device ide-hd,drive=disk,bus=ahci.0
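
Several of the options above can be combined. For example, a reasonably tuned invocation might look like the following sketch (the core count, memory size and disk path are placeholders to adjust to your hardware):

$ qemu-system-x86_64 -enable-kvm -cpu host \
    -smp cores=4,threads=1,sockets=1 -m 4096 \
    -drive file=disk_image,if=virtio,aio=native,cache.direct=on \
    -nic user,model=virtio-net-pci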

See https://www.linux-kvm.org/page/Tuning_KVM for more information.

Starting QEMU virtual machines on boot

With libvirt

If a virtual machine is set up with libvirt, it can be configured with virsh autostart or through the virt-manager GUI to start at host boot by going to the Boot Options for the virtual machine and selecting "Start virtual machine on host boot up".

With systemd service

To run QEMU virtual machines on boot, you can use the following systemd unit and configuration.

/etc/systemd/system/qemu@.service
[Unit]
Description=QEMU virtual machine

[Service]
Environment="haltcmd=kill -INT $MAINPID"
EnvironmentFile=/etc/conf.d/qemu.d/%i
ExecStart=/usr/bin/qemu-system-x86_64 -name %i -enable-kvm -m 512 -nographic $args
ExecStop=/usr/bin/bash -c ${haltcmd}
ExecStop=/usr/bin/bash -c 'while nc localhost 7100; do sleep 1; done'

[Install]
WantedBy=multi-user.target
Note: In order to end gracefully, this service waits for the monitor port to be released, which means that the virtual machine has shut down.

Then create per-VM configuration files, named /etc/conf.d/qemu.d/vm_name, with the variables args and haltcmd set. Example configs:

/etc/conf.d/qemu.d/one
args="-hda /dev/vg0/vm1 -serial telnet:localhost:7000,server,nowait,nodelay \
 -monitor telnet:localhost:7100,server,nowait,nodelay -vnc :0"

haltcmd="echo 'system_powerdown' | nc localhost 7100" # or netcat/ncat
/etc/conf.d/qemu.d/two
args="-hda /srv/kvm/vm2 -serial telnet:localhost:7001,server,nowait,nodelay -vnc :1"

haltcmd="ssh powermanager@vm2 sudo poweroff"

The description of the variables is the following:

  • args - QEMU command line arguments to be used.
  • haltcmd - Command to shut down a virtual machine safely. In the first example, the QEMU monitor is exposed via telnet using -monitor telnet:.. and the virtual machine is powered off via ACPI by sending system_powerdown to the monitor with the nc command. In the other example, SSH is used.

To set which virtual machines will start on boot-up, enable the qemu@vm_name.service systemd unit.
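
For example, to start the virtual machine configured in /etc/conf.d/qemu.d/one at boot:

# systemctl enable qemu@one.service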

Mouse integration

To prevent the mouse from being grabbed when clicking on the guest operating system's window, add the options -usb -device usb-tablet. This means QEMU is able to report the mouse position without having to grab the mouse. This also overrides PS/2 mouse emulation when activated. For example:

$ qemu-system-x86_64 -hda disk_image -m 512 -usb -device usb-tablet

If that does not work, try using the -vga qxl parameter; also see the instructions in #Mouse cursor is jittery or erratic.

Pass-through host USB device

It is possible to access a physical device connected to a USB port of the host from the guest. The first step is to identify where the device is connected; this can be found by running the lsusb command. For example:

$ lsusb
...
Bus 003 Device 007: ID 0781:5406 SanDisk Corp. Cruzer Micro U3

In the output above, Bus 003 and Device 007 identify the host_bus and host_addr respectively, while ID 0781:5406 gives the vendor_id and product_id.

In QEMU, the idea is to emulate an EHCI (USB 2) or XHCI (USB 1.1, USB 2 and USB 3) controller with the option -device usb-ehci,id=ehci or -device qemu-xhci,id=xhci respectively, and then attach the physical device to it with the option -device usb-host,.... We will assume that controller_id is either ehci or xhci for the rest of this section.

Then, there are two ways to connect to the host's USB device with QEMU:

  1. Identify the device and connect to it on whatever bus and address it is attached to on the host; the generic syntax is:
    -device usb-host,bus=controller_id.0,vendorid=0xvendor_id,productid=0xproduct_id
    Applied to the device used in the example above, it becomes:
    -device usb-ehci,id=ehci -device usb-host,bus=ehci.0,vendorid=0x0781,productid=0x5406
    One can also add the ...,port=port_number setting to the previous option to specify in which physical port of the virtual controller the device should be attached, useful if one wants to add multiple USB devices to the virtual machine. Another option is to use the hostdevice property of usb-host, which is available since QEMU 5.1.0; the syntax is:
    -device qemu-xhci,id=xhci -device usb-host,hostdevice=/dev/bus/usb/003/007
  2. Attach whatever is connected to a given USB bus and address; the syntax is:
    -device usb-host,bus=controller_id.0,hostbus=host_bus,hostaddr=host_addr
    Applied to the bus and the address in the example above, it becomes:
    -device usb-ehci,id=ehci -device usb-host,bus=ehci.0,hostbus=3,hostaddr=7

See QEMU/USB emulation for more information.

Note: If you encounter permission errors when running QEMU, see udev#About udev rules for information on how to set permissions of the device.

USB redirection with SPICE

When using #SPICE it is possible to redirect USB devices from the client to the virtual machine without needing to specify them in the QEMU command. The number of USB slots available for redirected devices can be configured (the number of slots determines the maximum number of devices which can be redirected simultaneously). The main advantage of SPICE redirection over the -device usb-host method described above is the possibility of hot-swapping USB devices after the virtual machine has started, without needing to halt it in order to remove USB devices from the redirection or to add new ones. It also allows redirecting USB devices over the network, from the client to the server. In summary, it is the most flexible method of using USB devices in a QEMU virtual machine.

We need to add one EHCI/UHCI controller per available USB redirection slot desired as well as one SPICE redirection channel per slot. For example, adding the following arguments to the QEMU command you use for starting the virtual machine in SPICE mode will start the virtual machine with three available USB slots for redirection:

-device ich9-usb-ehci1,id=usb \
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,multifunction=on \
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2 \
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4 \
-chardev spicevmc,name=usbredir,id=usbredirchardev1 -device usb-redir,chardev=usbredirchardev1,id=usbredirdev1 \
-chardev spicevmc,name=usbredir,id=usbredirchardev2 -device usb-redir,chardev=usbredirchardev2,id=usbredirdev2 \
-chardev spicevmc,name=usbredir,id=usbredirchardev3 -device usb-redir,chardev=usbredirchardev3,id=usbredirdev3

See SPICE/usbredir for more information.

Both spicy from spice-gtk (Input > Select USB Devices for redirection) and remote-viewer from virt-viewer (File > USB device selection) support this feature. Please make sure that you have installed the necessary SPICE Guest Tools on the virtual machine for this functionality to work as expected (see the #SPICE section for more information).

Warning: Keep in mind that when a USB device is redirected from the client, it will not be usable from the client operating system itself until the redirection is stopped. It is especially important never to redirect input devices (namely mouse and keyboard), since it will then be difficult to access the SPICE client menus to revert the situation, because the client will not respond to the input devices after they are redirected to the virtual machine.

Automatic USB forwarding with udev

Normally, devices must be available when the virtual machine boots in order to be forwarded. If such a device is disconnected, it will no longer be forwarded.

You can use udev rules to automatically attach a device when it comes online. Create a hostdev entry somewhere on disk and chown it to root to prevent other users from modifying it.

/usr/local/hostdev-mydevice.xml
<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x03f0'/>
    <product id='0x4217'/>
  </source>
</hostdev>

Then create a udev rule which will attach/detach the device:

/usr/lib/udev/rules.d/90-libvirt-mydevice
ACTION=="add", \
    SUBSYSTEM=="usb", \
    ENV{ID_VENDOR_ID}=="03f0", \
    ENV{ID_MODEL_ID}=="4217", \
    RUN+="/usr/bin/virsh attach-device GUESTNAME /usr/local/hostdev-mydevice.xml"
ACTION=="remove", \
    SUBSYSTEM=="usb", \
    ENV{ID_VENDOR_ID}=="03f0", \
    ENV{ID_MODEL_ID}=="4217", \
    RUN+="/usr/bin/virsh detach-device GUESTNAME /usr/local/hostdev-mydevice.xml"

Source and further reading.

Enabling KSM

Kernel Samepage Merging (KSM) is a feature of the Linux kernel that allows for an application to register with the kernel to have its pages merged with other processes that also register to have their pages merged. The KSM mechanism allows for guest virtual machines to share pages with each other. In an environment where many of the guest operating systems are similar, this can result in significant memory savings.

Note: Although KSM may reduce memory usage, it may increase CPU usage. Also note some security issues may occur, see Wikipedia:Kernel same-page merging.

To enable KSM:

# echo 1 > /sys/kernel/mm/ksm/run

To make it permanent, use systemd's temporary files:

/etc/tmpfiles.d/ksm.conf
w /sys/kernel/mm/ksm/run - - - - 1

If KSM is running, and there are pages to be merged (i.e. at least two similar virtual machines are running), then /sys/kernel/mm/ksm/pages_shared should be non-zero. See https://docs.kernel.org/admin-guide/mm/ksm.html for more information.

Tip: An easy way to see how well KSM is performing is to simply print the contents of all the files in that directory:
$ grep -r . /sys/kernel/mm/ksm/

Multi-monitor support

The Linux QXL driver supports four heads (virtual screens) by default. This can be changed via the qxl.heads=N kernel parameter.

The default VGA memory size for QXL devices is 16M (VRAM size is 64M). This is not sufficient if you would like to enable two 1920x1200 monitors since that requires 2 × 1920 × 4 (color depth) × 1200 = 17.6 MiB VGA memory. This can be changed by replacing -vga qxl by -vga none -device qxl-vga,vgamem_mb=32. If you ever increase vgamem_mb beyond 64M, then you also have to increase the vram_size_mb option.

Custom display resolution

A custom display resolution can be set with -device VGA,edid=on,xres=1280,yres=720 (see EDID and display resolution).

Copy and paste

SPICE

One way to share the clipboard between the host and the guest is to enable the SPICE remote desktop protocol and access the guest with a SPICE client. One needs to follow the steps described in #SPICE. A guest run this way will support copying and pasting to and from the host.

qemu-vdagent

QEMU provides its own implementation of the spice vdagent chardev, called qemu-vdagent. It interfaces with the spice-vdagent guest service and allows the guest and host to share a clipboard.

To access this shared clipboard with QEMU's GTK display, you will need to compile QEMU from source with the --enable-gtk-clipboard configure parameter. It is sufficient to replace the installed qemu-ui-gtk package.

Note:
  • A feature request FS#79716 has been submitted to enable the functionality in the official package.
  • The shared clipboard in qemu-ui-gtk has been pushed back to experimental as it can freeze guests under certain circumstances. A fix has been proposed to solve the issue upstream.

Add the following QEMU command line arguments:

-device virtio-serial,packed=on,ioeventfd=on
-device virtserialport,name=com.redhat.spice.0,chardev=vdagent0
-chardev qemu-vdagent,id=vdagent0,name=vdagent,clipboard=on,mouse=off

These arguments are also valid if converted to libvirt form.

Note: While the spicevmc chardev will start the spice-vdagent service of the guest automatically, the qemu-vdagent chardev may not.

On Linux guests, you may start the spice-vdagent.service user unit manually. On Windows guests, set the spice-agent startup type to automatic.
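
For example, to start the user unit on a Linux guest:

$ systemctl --user start spice-vdagent.service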

Windows-specific notes

QEMU can run any version of Windows from Windows 95 through Windows 11.

It is possible to run Windows PE in QEMU.

Fast startup

Note: An administrator account is required to change power settings.

For Windows 8 (or later) guests, it is better to disable "Turn on fast startup (recommended)" from the Power Options of the Control Panel, as explained in the following forum page, since fast startup causes the guest to hang during every other boot.

Fast Startup may also need to be disabled for changes to the -smp option to be properly applied.

Remote Desktop Protocol

If you use an MS Windows guest, you might want to use RDP to connect to the guest virtual machine. If you are using a VLAN or are not in the same network as the guest, use:

$ qemu-system-x86_64 -nographic -nic user,hostfwd=tcp::5555-:3389

Then connect with either rdesktop or freerdp to the guest. For example:

$ xfreerdp -g 2048x1152 localhost:5555 -z -x lan

Clone Linux system installed on physical equipment

A Linux system installed on physical hardware can be cloned to run in a QEMU virtual machine. See Clone Linux system from hardware for QEMU virtual machine.

Chrooting into arm/arm64 environment from x86_64

Sometimes it is easier to work directly on a disk image instead of on the real ARM-based device. This can be achieved by mounting an SD card/storage containing the root partition and chrooting into it.

Another use case for an ARM chroot is building ARM packages on an x86_64 machine. Here, the chroot environment can be created from an image tarball from Arch Linux ARM - see [3] for a detailed description of this approach.

Either way, from the chroot it should be possible to run pacman and install more packages, compile large libraries, etc. Since the executables are for the ARM architecture, the translation to x86 needs to be performed by QEMU.

Install qemu-user-static on the x86_64 machine/host, and qemu-user-static-binfmt to register the QEMU binaries with the binfmt service.
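
For example:

# pacman -S qemu-user-static qemu-user-static-binfmt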

qemu-user-static is used to allow the execution of programs compiled for other architectures. This is similar to what is provided by qemu-emulators-full, but the "static" variant is required for chroot. Examples:

$ qemu-arm-static path_to_sdcard/usr/bin/ls
$ qemu-aarch64-static path_to_sdcard/usr/bin/ls

These two lines execute the ls command compiled for 32-bit ARM and 64-bit ARM respectively. Note that this will not work without chrooting, because it will look for libraries not present in the host system.

The binfmt registration allows ARM executables to be automatically prefixed with qemu-arm-static or qemu-aarch64-static.

Make sure that the ARM executable support is active:

$ ls /proc/sys/fs/binfmt_misc
qemu-aarch64  qemu-arm	  qemu-cris  qemu-microblaze  qemu-mipsel  qemu-ppc64	    qemu-riscv64  qemu-sh4    qemu-sparc	qemu-sparc64  status
qemu-alpha    qemu-armeb  qemu-m68k  qemu-mips	      qemu-ppc	   qemu-ppc64abi32  qemu-s390x	  qemu-sh4eb  qemu-sparc32plus	register

Each executable must be listed.

If it is not active, restart systemd-binfmt.service.

Mount the SD card to /mnt/sdcard (the device name may be different).

# mount --mkdir /dev/mmcblk0p2 /mnt/sdcard

Mount boot partition if needed (again, use the suitable device name):

# mount /dev/mmcblk0p1 /mnt/sdcard/boot

Finally chroot into the SD card root as described in Change root#Using chroot:

# chroot /mnt/sdcard /bin/bash

Alternatively, you can use arch-chroot from arch-install-scripts, as it will provide an easier way to get network support:

# arch-chroot /mnt/sdcard /bin/bash

You can also use systemd-nspawn to chroot into the ARM environment:

# systemd-nspawn -D /mnt/sdcard -M myARMMachine --bind-ro=/etc/resolv.conf

The --bind-ro=/etc/resolv.conf option is optional and gives working DNS resolution inside the chroot.

sudo in chroot

If you install sudo in the chroot and receive the following error when trying to use it:

sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?

then you may need to modify the binfmt flags, for example for aarch64:

# cp /usr/lib/binfmt.d/qemu-aarch64-static.conf /etc/binfmt.d/
# vi /etc/binfmt.d/qemu-aarch64-static.conf

and add a C at the end of the registration line (the flags field) in this file:

:qemu-aarch64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-aarch64-static:FPC

Then restart systemd-binfmt.service and check that the changes have taken effect (note the C on the flags line):

# cat /proc/sys/fs/binfmt_misc/qemu-aarch64
enabled
interpreter /usr/bin/qemu-aarch64-static
flags: POCF
offset 0
magic 7f454c460201010000000000000000000200b700
mask ffffffffffffff00fffffffffffffffffeffffff

See the "flags" section of the kernel binfmt documentation for more information.

Not grabbing mouse input

Tablet mode has the side effect of not grabbing mouse input in the QEMU window: since the emulated tablet reports absolute pointer coordinates, QEMU does not need to grab the mouse to track it (see #Mouse integration):

-usb -device usb-tablet

It works with several -vga backends, one of which is virtio.

Troubleshooting

Mouse cursor is jittery or erratic

If the cursor jumps around the screen uncontrollably, entering this on the terminal before starting QEMU might help:

$ export SDL_VIDEO_X11_DGAMOUSE=0

If this helps, you can add this to your ~/.bashrc file.

No visible cursor

Add -display default,show-cursor=on to QEMU's options to see a mouse cursor.

If that still does not work, make sure you have set your display device appropriately, for example: -vga qxl.

Another option to try is -usb -device usb-tablet as mentioned in #Mouse integration. This overrides the default PS/2 mouse emulation and synchronizes pointer location between host and guest as an added bonus.

Two different mouse cursors are visible

Apply the tip #Mouse integration.

Keyboard issues when using VNC

When using VNC, you might experience keyboard problems, described (in gory detail) here. The solution is not to use the -k option with QEMU, and to use gvncviewer from gtk-vnc. See also this message posted on libvirt's mailing list.

Keyboard seems broken or the arrow keys do not work

Should you find that some of your keys do not work or "press" the wrong key (in particular, the arrow keys), you likely need to specify your keyboard layout as an option. The keyboard layouts can be found in /usr/share/qemu/keymaps/.

$ qemu-system-x86_64 -k keymap disk_image

Could not read keymap file

qemu-system-x86_64: -display vnc=0.0.0.0:0: could not read keymap file: 'en'

is caused by an invalid keymap passed to the -k argument. For example, en is invalid, but en-us is valid - see /usr/share/qemu/keymaps/.

Guest display stretches on window resize

To restore default window size, press Ctrl+Alt+u.

ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy

If an error message like this is printed when starting QEMU with -enable-kvm option:

ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy
failed to initialize KVM: Device or resource busy

that means another hypervisor is currently running. It is not recommended or possible to run several hypervisors in parallel.

libgfapi error message

The error message displayed at startup:

Failed to open module: libgfapi.so.0: cannot open shared object file: No such file or directory

Install glusterfs or ignore the error message, as GlusterFS is an optional dependency.

Kernel panic on LIVE-environments

If you boot a live environment (or, more generally, any system) you may encounter the following:

[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown block(0,0)

or some other boot-hindering failure (e.g. cannot unpack initramfs, cannot start service foo). Try starting the virtual machine with the -m VALUE switch and an appropriate amount of RAM; if the RAM is too low, you will probably encounter issues similar to the above.

Windows 7 guest suffers low-quality sound

Using the hda audio driver for a Windows 7 guest may result in low-quality sound. Changing the audio driver to ac97 by passing the -soundhw ac97 argument to QEMU and installing the AC97 driver from Realtek AC'97 Audio Codecs in the guest may solve the problem. See Red Hat Bugzilla – Bug 1176761 for more information.

Could not access KVM kernel module: Permission denied

If you encounter the following error:

libvirtError: internal error: process exited while connecting to monitor: Could not access KVM kernel module: Permission denied failed to initialize KVM: Permission denied

Systemd 234 assigns a dynamic ID to the kvm group (see FS#54943). To avoid this error, you need to edit the file /etc/libvirt/qemu.conf and change the line with group = "78" to group = "kvm".

"System Thread Exception Not Handled" when booting a Windows virtual machine

Windows 8 or Windows 10 guests may raise a generic compatibility exception at boot, namely "System Thread Exception Not Handled", which tends to be caused by legacy drivers acting strangely on real machines. On KVM machines this issue can generally be solved by setting the CPU model to core2duo.

Certain Windows games/applications crashing/causing a bluescreen

Occasionally, applications running in the virtual machine may crash unexpectedly, whereas they would run normally on a physical machine. If, while running dmesg -wH as root, you encounter an error mentioning MSR, the reason for those crashes is that KVM injects a General protection fault (GPF) when the guest tries to access unsupported Model-specific registers (MSRs) - this often results in guest applications/OS crashing. A number of those issues can be solved by passing the ignore_msrs=1 option to the KVM module, which will ignore unimplemented MSRs.

/etc/modprobe.d/kvm.conf
...
options kvm ignore_msrs=1
...
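
The parameter can presumably also be toggled at runtime through the module's sysfs parameters, without reloading the module (an assumption; check that the file exists on your kernel):

# echo 1 > /sys/module/kvm/parameters/ignore_msrs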

Cases where adding this option might help:

  • GeForce Experience complaining about an unsupported CPU being present.
  • StarCraft 2 and L.A. Noire reliably blue-screening Windows 10 with KMODE_EXCEPTION_NOT_HANDLED. The blue screen information does not identify a driver file in these cases.
Warning: While this is normally safe and some applications might not work without it, silently ignoring unknown MSR accesses could potentially break other software within the virtual machine or other virtual machines.

Applications in the virtual machine experience long delays or take a long time to start

Note: On guests running Linux kernel 5.6 or later, this should no longer be necessary.

This may be caused by insufficient available entropy in the virtual machine. Consider allowing the guest to access the host's entropy pool by adding a VirtIO RNG device to the virtual machine, or by installing an entropy-generating daemon such as Haveged.
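
A sketch of adding a VirtIO RNG device fed from the host's /dev/urandom:

$ qemu-system-x86_64 [...] -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng-pci,rng=rng0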

Anecdotally, OpenSSH takes a while to start accepting connections under insufficient entropy, without the logs revealing why.

High interrupt latency and microstuttering

This problem manifests itself as small pauses (stutters) and is particularly noticeable in graphics-intensive applications, such as games.

QXL video causes low resolution

QEMU 4.1.0 introduced a regression where QXL video can fall back to low resolutions when being displayed through SPICE. [4] For example, when KMS starts, text resolution may become as low as 4x10 characters. When trying to increase the GUI resolution, it may go to the lowest supported resolution.

As a workaround, create your device in this form:

-device qxl-vga,max_outputs=1...

Virtual machine not booting when using a Secure Boot enabled OVMF

OVMF_CODE.secboot.4m.fd and OVMF_CODE.secboot.fd files from edk2-ovmf are built with SMM support. If S3 support is not disabled in the virtual machine, then the virtual machine might not boot at all.

Add the -global ICH9-LPC.disable_s3=1 option to the qemu command.

See FS#59465 and https://github.com/tianocore/edk2/blob/master/OvmfPkg/README for more details and the required options to use Secure Boot in QEMU.
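
A sketch of a Secure Boot-capable invocation based on the OvmfPkg README; the firmware paths are assumptions that depend on how your edk2-ovmf version installs them, and OVMF_VARS.4m.fd here stands for a local, writable copy of the variable store:

$ qemu-system-x86_64 -enable-kvm -machine q35,smm=on \
    -global driver=cfi.pflash01,property=secure,value=on \
    -global ICH9-LPC.disable_s3=1 \
    -drive if=pflash,format=raw,unit=0,readonly=on,file=/usr/share/edk2/x64/OVMF_CODE.secboot.4m.fd \
    -drive if=pflash,format=raw,unit=1,file=OVMF_VARS.4m.fd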

Virtual machine not booting into Arch ISO

When trying to boot the virtual machine for the first time from an Arch ISO image, the boot process hangs. By adding console=ttyS0 to the kernel boot options (press e in the boot menu), you will get more boot messages and the following error:

:: Mounting '/dev/disk/by-label/ARCH_202204' to '/run/archiso/bootmnt'
Waiting 30 seconds for device /dev/disk/by-label/ARCH_202204 ...
ERROR: '/dev/disk/by-label/ARCH_202204' device did not show up after 30 seconds...
   Falling back to interactive prompt
   You can try to fix the problem manually, log out when you are finished
sh: can't access tty; job control turned off

The error message does not give a good clue as to what the real issue is. The problem is the default 128 MiB of RAM that QEMU allocates to the virtual machine. Increasing the limit to 1024 MiB with -m 1024 solves the issue and lets the system boot. You can continue installing Arch Linux as usual after that. Once the installation is complete, the memory allocation for the virtual machine can be decreased. The need for 1024 MiB is due to RAM disk requirements and the size of the installation media. See this message on the arch-releng mailing list and this forum thread.

Guest CPU interrupts are not firing

If you are writing your own operating system by following the OSDev wiki, or are simply stepping through the guest architecture assembly code using QEMU's gdb interface (the -s flag), it is useful to know that many emulators, QEMU included, usually implement some CPU interrupts while leaving many hardware interrupts unimplemented. One way to know whether your code is firing an interrupt is by using:

-d int

to enable showing interrupts/exceptions on stdout.

To see what other guest debugging features QEMU has to offer, see:

$ qemu-system-x86_64 -d help

replacing x86_64 with your chosen guest architecture as appropriate.

KDE with sddm does not start spice-vdagent at login automatically

Remove or comment out X-GNOME-Autostart-Phase=WindowManager from /etc/xdg/autostart/spice-vdagent.desktop. [5]

Error starting domain: Requested operation is not valid: network 'default' is not active

If for any reason the default network is deactivated, you will not be able to start any guest virtual machines which are configured to use it. The first thing to try is simply starting the network with virsh:

# virsh net-start default
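
To have the default network started automatically on host boot:

# virsh net-autostart default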

For additional troubleshooting steps, see [6].

See also