ArchWiki: Sysctl, revision of 2019-12-30T17:12:15Z by Gima: /* Virtual memory */ Rewrite "Writeback" and combine it with "Small periodic system freezes" from "Troubleshooting".
<hr />
<div>{{Lowercase title}}<br />
[[Category:Kernel]]<br />
[[Category:Commands]]<br />
[[ja:Sysctl]]<br />
[[Wikipedia:sysctl|sysctl]] is a tool for examining and changing [[kernel parameters]] at runtime (package {{Pkg|procps-ng}} in [[official repositories]]). sysctl is implemented in [[Wikipedia:procfs|procfs]], the virtual process file system at {{ic|/proc/}}.<br />
<br />
== Configuration ==<br />
<br />
{{Note|Since version 207, [[systemd]] only applies settings from {{ic|/etc/sysctl.d/*.conf}} and {{ic|/usr/lib/sysctl.d/*.conf}}. If you had customized {{ic|/etc/sysctl.conf}}, you need to rename it to {{ic|/etc/sysctl.d/99-sysctl.conf}}. If you had e.g. {{ic|/etc/sysctl.d/foo}}, you need to rename it to {{ic|/etc/sysctl.d/foo.conf}}.}}<br />
<br />
The '''sysctl''' preload/configuration file can be created at {{ic|/etc/sysctl.d/99-sysctl.conf}}. For [[systemd]], {{ic|/etc/sysctl.d/}} and {{ic|/usr/lib/sysctl.d/}} are drop-in directories for kernel sysctl parameters. The file name and source directory decide the order of processing, which is important since the last parameter processed may override earlier ones. For example, parameters in {{ic|/usr/lib/sysctl.d/50-default.conf}} will be overridden by equal parameters in {{ic|/etc/sysctl.d/50-default.conf}} and in any configuration file processed later from either directory.<br />
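This ordering behaviour can be sketched as follows. This is a simplified model for illustration only (it ignores {{ic|/run/sysctl.d/}} and glob handling, and the file names are hypothetical):<br />

```python
from pathlib import PurePath

def effective(settings_by_file):
    """Simplified model of how sysctl --system merges drop-in files:
    for identical file names, /etc/ masks /usr/lib/; the remaining files
    are applied in lexical order, later ones overriding earlier ones."""
    by_name = {}
    # Sort so that for a given basename the /etc/ entry is seen last.
    for path in sorted(settings_by_file,
                       key=lambda p: (PurePath(p).name, p.startswith("/etc/"))):
        by_name[PurePath(path).name] = settings_by_file[path]
    merged = {}
    for name in sorted(by_name):
        merged.update(by_name[name])
    return merged

files = {
    "/usr/lib/sysctl.d/50-default.conf": {"kernel.sysrq": "16"},
    "/etc/sysctl.d/50-default.conf":     {"kernel.sysrq": "1"},
    "/etc/sysctl.d/99-sysctl.conf":      {"kernel.sysrq": "0"},
}
print(effective(files))  # {'kernel.sysrq': '0'}
```

Here the {{ic|/etc/}} copy of {{ic|50-default.conf}} masks the {{ic|/usr/lib/}} one, and {{ic|99-sysctl.conf}}, processed last, wins overall.<br />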
<br />
To load all configuration files manually, execute:<br />
<br />
# sysctl --system <br />
<br />
which will also output the applied hierarchy. A single parameter file can also be loaded explicitly with:<br />
<br />
# sysctl --load=''filename.conf''<br />
<br />
See [http://0pointer.de/blog/projects/the-new-configuration-files the new configuration files] and more specifically {{man|5|sysctl.d}} for more information.<br />
<br />
The parameters available are those listed under {{ic|/proc/sys/}}. For example, the {{ic|kernel.sysrq}} parameter refers to the file {{ic|/proc/sys/kernel/sysrq}} on the file system. The {{ic|sysctl --all}} command can be used to display all currently available values.<br />
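The correspondence between a dotted parameter name and its {{ic|/proc/sys/}} file is mechanical, as this small Python sketch shows (illustrative only; note that interface names containing dots are a known ambiguity of the dotted notation):<br />

```python
def sysctl_path(name: str) -> str:
    """Translate a dotted sysctl parameter name into its /proc/sys file path."""
    return "/proc/sys/" + name.replace(".", "/")

print(sysctl_path("kernel.sysrq"))  # /proc/sys/kernel/sysrq
```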
<br />
{{Note|If you have the kernel documentation installed ({{Pkg|linux-docs}}), you can find detailed information about sysctl settings in {{ic|/usr/lib/modules/$(uname -r)/build/Documentation/sysctl/}}. It is highly recommended to read these before changing sysctl settings.}}<br />
<br />
Settings can be changed through file manipulation or using the {{ic|sysctl}} utility. For example, to temporarily enable the [[Wikipedia:Magic SysRq key|magic SysRq key]]:<br />
<br />
# sysctl kernel.sysrq=1<br />
<br />
or:<br />
<br />
# echo "1" > /proc/sys/kernel/sysrq<br />
<br />
See [https://www.kernel.org/doc/html/latest/admin-guide/sysrq.html Linux kernel documentation] for details about {{ic|kernel.sysrq}}.<br />
<br />
To preserve changes between reboots, add or modify the appropriate lines in {{ic|/etc/sysctl.d/99-sysctl.conf}} or another applicable parameter file in {{ic|/etc/sysctl.d/}}.<br />
<br />
{{Tip|Some parameters that can be applied may depend on kernel modules which in turn might not be loaded. For example parameters in {{ic|/proc/sys/net/bridge/*}} depend on the {{ic|br_netfilter}} module. If it is not loaded at runtime (or after a reboot), those will ''silently'' not be applied. See [[Kernel modules]].}}<br />
<br />
== Security ==<br />
<br />
See [[Security#Kernel hardening]].<br />
<br />
== Networking ==<br />
<br />
=== Improving performance ===<br />
<br />
==== Increasing the size of the receive queue ====<br />
<br />
Received frames are stored in this queue after being taken from the ring buffer on the network card.<br />
<br />
Increasing this value for high speed cards may help prevent losing packets:<br />
<br />
net.core.netdev_max_backlog = 100000<br />
net.core.netdev_budget = 50000<br />
net.core.netdev_budget_usecs = 5000<br />
<br />
{{Note|In real-time applications such as SIP routers, this option requires a fast CPU, otherwise the data in the queue will become stale.}}<br />
<br />
==== Increase the maximum connections ====<br />
<br />
The upper limit on how many connections the kernel will accept (default 128):<br />
<br />
net.core.somaxconn = 1024<br />
<br />
{{Warning|Increasing this value may only increase performance on highly loaded servers, and it may instead cause a slow processing rate (e.g. on a single-threaded blocking server) or an insufficient number of worker threads/processes [https://serverfault.com/questions/518862/will-increasing-net-core-somaxconn-make-a-difference/519152].}}<br />
<br />
==== Increase the memory dedicated to the network interfaces ====<br />
<br />
By default, the Linux network stack is not configured for high speed large file transfer across WAN links (i.e. to handle more network packets), and setting the correct values may save memory resources:<br />
<br />
net.core.rmem_default = 1048576<br />
net.core.rmem_max = 16777216<br />
net.core.wmem_default = 1048576<br />
net.core.wmem_max = 16777216<br />
net.core.optmem_max = 65536<br />
net.ipv4.tcp_rmem = 4096 1048576 2097152<br />
net.ipv4.tcp_wmem = 4096 65536 16777216<br />
<br />
It is also possible to increase the default {{ic|4096}} UDP limits:<br />
<br />
net.ipv4.udp_rmem_min = 8192<br />
net.ipv4.udp_wmem_min = 8192<br />
<br />
See the following sources for more information and recommended values:<br />
<br />
* http://www.nateware.com/linux-network-tuning-for-2013.html<br />
* https://blog.cloudflare.com/the-story-of-one-latency-spike/<br />
<br />
==== Enable TCP Fast Open ====<br />
<br />
{{Expansion|Mention the option to "enable all listeners to support Fast Open by default without explicit TCP_FASTOPEN socket option", i.e. value {{ic|1027}} (0x1+0x2+0x400).}}<br />
<br />
TCP Fast Open is an extension to the transmission control protocol (TCP) that helps reduce network latency by enabling data to be exchanged during the sender’s initial TCP SYN [https://www.keycdn.com/support/tcp-fast-open/]. Using the value {{ic|3}} instead of the default {{ic|1}} allows TCP Fast Open for both incoming and outgoing connections:<br />
<br />
net.ipv4.tcp_fastopen = 3<br />
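The value is a bit mask, which is why {{ic|3}} enables both directions. The flag names below are illustrative (the kernel uses different internal names), but the arithmetic matches the values discussed here, including the {{ic|1027}} variant from the expansion note above:<br />

```python
TFO_CLIENT        = 0x1    # enable Fast Open for outgoing connections
TFO_SERVER        = 0x2    # enable Fast Open for listening sockets
TFO_SERVER_NO_OPT = 0x400  # all listeners, without the TCP_FASTOPEN sockopt

print(TFO_CLIENT | TFO_SERVER)                      # 3
print(TFO_CLIENT | TFO_SERVER | TFO_SERVER_NO_OPT)  # 1027
```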
<br />
==== Tweak the pending connection handling ====<br />
<br />
{{ic|tcp_max_syn_backlog}} is the maximum queue length of pending connections waiting for acknowledgement.<br />
<br />
In the event of a SYN flood DOS attack, this queue can fill up quickly, at which point [[Wikipedia:SYN cookies|TCP SYN cookies]] will kick in, allowing your system to continue responding to legitimate traffic and giving you a chance to identify and block malicious IPs.<br />
<br />
If the server suffers from overloads at peak times, you may want to increase this value a little bit:<br />
<br />
net.ipv4.tcp_max_syn_backlog = 30000<br />
<br />
{{ic|tcp_max_tw_buckets}} is the maximum number of sockets in TIME_WAIT state.<br />
<br />
After reaching this number the system will start destroying sockets that are in this state.<br />
<br />
Increase this to prevent simple DOS attacks:<br />
<br />
net.ipv4.tcp_max_tw_buckets = 2000000<br />
<br />
{{ic|tcp_tw_reuse}} sets whether TCP should reuse an existing connection in the TIME-WAIT state for a new outgoing connection if the new timestamp is strictly bigger than the most recent timestamp recorded for the previous connection.<br />
<br />
This helps avoid running out of available network sockets:<br />
<br />
net.ipv4.tcp_tw_reuse = 1<br />
<br />
Specify how many seconds to wait for a final FIN packet before the socket is forcibly closed. This is strictly a violation of the TCP specification, but required to prevent denial-of-service attacks. In Linux 2.2, the default value was 180 [https://access.redhat.com/solutions/41776]:<br />
<br />
net.ipv4.tcp_fin_timeout = 10<br />
<br />
{{ic|tcp_slow_start_after_idle}} sets whether TCP should start at the default window size only for new connections or also for existing connections that have been idle for too long.<br />
<br />
Enabled by default, this setting hurts persistent single-connection performance and can be turned off:<br />
<br />
net.ipv4.tcp_slow_start_after_idle = 0<br />
<br />
==== Change TCP keepalive parameters ====<br />
<br />
[[Wikipedia:Keepalive#TCP keepalive|TCP keepalive]] is a mechanism for TCP connections that helps determine whether the other end has stopped responding. TCP sends keepalive probes containing null data to the network peer several times after a period of idle time. If the peer does not respond, the socket is closed automatically. By default, the TCP keepalive process waits two hours (7200 seconds) of socket inactivity before sending the first keepalive probe, and then resends it every 75 seconds. As long as TCP/IP socket communication is ongoing and active, no keepalive packets are needed.<br />
<br />
{{Note|With the following settings, your application will detect dead TCP connections after 120 seconds (60s + 10s + 10s + 10s + 10s + 10s + 10s).}}<br />
<br />
net.ipv4.tcp_keepalive_time = 60<br />
net.ipv4.tcp_keepalive_intvl = 10<br />
net.ipv4.tcp_keepalive_probes = 6<br />
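The detection time in the note above follows directly from the three parameters:<br />

```python
tcp_keepalive_time = 60    # seconds of idle time before the first probe
tcp_keepalive_intvl = 10   # seconds between unanswered probes
tcp_keepalive_probes = 6   # unanswered probes before the connection is dropped

detect_after = tcp_keepalive_time + tcp_keepalive_intvl * tcp_keepalive_probes
print(detect_after)  # 120 seconds
```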
<br />
==== Enable MTU probing ====<br />
<br />
The larger the [[Wikipedia:Maximum transmission unit|maximum transmission unit (MTU)]], the better for performance, but the worse for reliability.<br />
<br />
This is because a lost packet means more data to retransmit, and because many routers on the Internet cannot deliver very large packets:<br />
<br />
net.ipv4.tcp_mtu_probing = 1<br />
<br />
See https://blog.cloudflare.com/path-mtu-discovery-in-practice/ for more information.<br />
<br />
==== TCP timestamps ====<br />
<br />
{{Warning|TCP timestamps protect against wrapping sequence numbers (at gigabit speeds) and are used for the round-trip time calculation implemented in TCP. Turning off TCP timestamps is not recommended, as it may pose a security risk [https://access.redhat.com/sites/default/files/attachments/20150325_network_performance_tuning.pdf].}}<br />
<br />
Disabling timestamp generation will reduce spikes and may give a performance boost on gigabit networks:<br />
<br />
net.ipv4.tcp_timestamps = 0<br />
<br />
==== Enable BBR ====<br />
<br />
The [[Wikipedia:TCP congestion control#TCP BBR|BBR congestion control algorithm]] can help achieve higher bandwidths and lower latencies for internet traffic.<br />
First, load the {{ic|tcp_bbr}} module.<br />
<br />
net.core.default_qdisc = fq<br />
net.ipv4.tcp_congestion_control = bbr<br />
<br />
=== TCP/IP stack hardening ===<br />
<br />
The following parameters tighten the kernel's network security options for the IPv4 protocol, along with related IPv6 parameters where an equivalent exists.<br />
<br />
For some use-cases, for example using the system as a [[router]], other parameters may be useful or required as well. <br />
<br />
==== TCP SYN cookie protection ====<br />
<br />
Helps protect against SYN flood attacks. Only kicks in when {{ic|net.ipv4.tcp_max_syn_backlog}} is reached:<br />
<br />
net.ipv4.tcp_syncookies = 1<br />
<br />
==== TCP rfc1337 ====<br />
<br />
{{Accuracy|This does not seem to be part of the TCP standard? The description may not be accurate. [https://serverfault.com/questions/787624/why-isnt-net-ipv4-tcp-rfc1337-enabled-by-default]|section=net.ipv4.tcp_rfc1337}}<br />
<br />
Protect against TCP time-wait assassination hazards by dropping RST packets for sockets in the time-wait state. Not widely supported outside of Linux, but conforms to the RFC:<br />
<br />
net.ipv4.tcp_rfc1337 = 1<br />
<br />
==== Reverse path filtering ====<br />
<br />
By enabling reverse path filtering, the kernel will do source validation of the packets received from all the interfaces on the machine. This can protect from attackers that are using IP spoofing methods to do harm.<br />
<br />
The kernel's default value is {{ic|0}} (no source validation), but systemd ships {{ic|/usr/lib/sysctl.d/50-default.conf}} that sets {{ic|net.ipv4.conf.all.rp_filter}} to {{ic|2}} (loose mode)[https://github.com/systemd/systemd/pull/10971].<br />
<br />
The following will set the reverse path filtering mechanism to value {{ic|1}} (strict mode):<br />
<br />
net.ipv4.conf.default.rp_filter = 1<br />
net.ipv4.conf.all.rp_filter = 1<br />
<br />
The relationship and behavior of {{ic|net.ipv4.conf.default.*}}, {{ic|net.ipv4.conf.''interface''.*}} and {{ic|net.ipv4.conf.all.*}} is explained in [https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt ip-sysctl.txt].<br />
<br />
==== Log martian packets ====<br />
<br />
A [[Wikipedia:Martian packet|martian packet]] is an IP packet which specifies a source or destination address that is reserved for special-use by Internet Assigned Numbers Authority (IANA). See [[wikipedia:Reserved_IP_addresses|Reserved IP addresses]] for more details.<br />
<br />
Martian and unroutable packets are often used for malicious purposes. Logging these packets for further inspection may be useful [https://www.cyberciti.biz/faq/linux-log-suspicious-martian-packets-un-routable-source-addresses/]:<br />
<br />
net.ipv4.conf.default.log_martians = 1<br />
net.ipv4.conf.all.log_martians = 1<br />
<br />
{{Note|This can fill up your logs with a lot of information; it is advisable to only enable this for testing.}}<br />
<br />
==== Disable ICMP redirects ====<br />
<br />
To disable ICMP redirect acceptance:<br />
<br />
net.ipv4.conf.all.accept_redirects = 0<br />
net.ipv4.conf.default.accept_redirects = 0<br />
net.ipv4.conf.all.secure_redirects = 0<br />
net.ipv4.conf.default.secure_redirects = 0<br />
net.ipv6.conf.all.accept_redirects = 0<br />
net.ipv6.conf.default.accept_redirects = 0<br />
<br />
To disable ICMP redirect sending when the system is not acting as a router:<br />
<br />
net.ipv4.conf.all.send_redirects = 0<br />
net.ipv4.conf.default.send_redirects = 0<br />
<br />
==== Ignore ICMP echo requests ====<br />
<br />
To disable ICMP echo (aka ping) requests:<br />
<br />
net.ipv4.icmp_echo_ignore_all = 1<br />
net.ipv6.icmp.echo_ignore_all = 1<br />
<br />
{{Note|Beware this may cause issues with monitoring tools and/or applications relying on ICMP echo responses.}}<br />
<br />
=== Other ===<br />
<br />
==== Allow unprivileged users to create IPPROTO_ICMP sockets ====<br />
<br />
The IPPROTO_ICMP socket type makes it possible to send ICMP_ECHO messages and receive the corresponding ICMP_ECHOREPLY messages without opening a raw socket, an operation which requires the CAP_NET_RAW capability or the SUID bit with a privileged owner. Since ICMP_ECHO messages are sent by the ping application, IPPROTO_ICMP sockets are also known as ping sockets or ICMP Echo sockets.<br />
<br />
{{ic|ping_group_range}} determines the GID range of groups whose users are allowed to create IPPROTO_ICMP sockets. Additionally, the owner of the CAP_NET_RAW capability is also allowed to create IPPROTO_ICMP sockets.<br/>By default this range is {{ic|1 0}}, which means no one except root may create IPPROTO_ICMP sockets.<br/><br />
To take advantage of this setting, programs which currently use raw sockets need to be ported to IPPROTO_ICMP sockets instead.<br/>For example, QEMU uses IPPROTO_ICMP for SLIRP, aka User-mode networking, so allowing the user running QEMU to create IPPROTO_ICMP sockets makes it possible to ping from the guest.<br />
<br />
To allow only users which are members of the group with GID 100 to create IPPROTO_ICMP sockets:<br />
<br />
net.ipv4.ping_group_range = 100 100<br />
<br />
To allow all the users in the system to create IPPROTO_ICMP sockets:<br />
<br />
net.ipv4.ping_group_range = 0 65535<br />
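A quick way to check whether the current user may create IPPROTO_ICMP sockets is the following minimal Python sketch; it only attempts to open a ping socket and does not send anything:<br />

```python
import socket

try:
    # SOCK_DGRAM with IPPROTO_ICMP is the unprivileged "ping socket".
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_ICMP)
    s.close()
    print("allowed: this user's GID is within net.ipv4.ping_group_range")
except PermissionError:
    print("denied: adjust net.ipv4.ping_group_range to include this user's GID")
```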
<br />
== Virtual memory ==<br />
For more information regarding the tunable parameters of Linux kernel's virtual memory subsystem, please see [https://www.kernel.org/doc/Documentation/sysctl/vm.txt Documentation for /proc/sys/vm/*].<br />
<br />
=== Writeback ===<br />
The kernel buffers block device writes in memory until a certain threshold is reached. This triggers a "writeback" where the data is actually written. Heavy I/O during a writeback manifests itself as applications freezing and the system becoming unpleasant to use for a while.<br />
<br />
'''Quick solution'''<br />
{{ic|/etc/sysctl.d/60-my-virtmem-dirty-tuning.conf}}:<br />
<br />
# after 16MiB of dirty bytes, the kernel will start background writeback:<br />
vm.dirty_background_bytes=16777216<br />
<br />
# after 32MiB of dirty bytes, the dirtying process itself will start writeback:<br />
vm.dirty_bytes=33554432<br />
<br />
'''Explanation'''<br><br />
Two tunables ("dirty" and "dirty_background") specify how much data will be buffered in memory before a writeback takes place.<br />
<br />
The "dirty" variant is controlled by {{ic|vm.dirty_bytes}} and {{ic|vm.dirty_ratio}}. This specifies when the program issuing the I/O will start doing writeback (it will freeze).<br><br />
The "dirty_background" variant is controlled by {{ic|vm.dirty_background_bytes}} and {{ic|vm.dirty_background_ratio}}. This specifies when the kernel itself will start doing writeback (causes system-wide freezing).<br />
<br />
Note: the "_bytes" and "_ratio" settings are counterparts: one specifies the amount in bytes, the other as a percentage of memory, buffered before a writeback starts. Only one of each pair can be set (the one written last takes effect).<br />
<br />
The defaults are (as of [https://elixir.bootlin.com/linux/latest/source/mm/page-writeback.c Linux source code: mm/page-writeback.c (v5.4.6)]): "dirty_ratio = 20%" and "dirty_background_ratio = 10%".<br><br />
Which on modern systems (8GiB RAM) means:<br />
* '''dirty_bytes''' = 8 GiB * 20% = 1717986918 bytes = '''1.6 GiB'''<br />
* '''dirty_background_bytes''' = 8 GiB * 10% = 858993459 bytes = '''0.8 GiB'''<br />
Imagine how long your disk will spend writing that much data when a writeback is triggered: it will be choked, and you will experience small periodic system freezes while doing I/O.<br />
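The figures above can be verified with integer arithmetic (assuming 8 GiB of RAM, as in the example):<br />

```python
GiB = 1024 ** 3
ram = 8 * GiB

dirty_bytes            = ram * 20 // 100  # default vm.dirty_ratio = 20%
dirty_background_bytes = ram * 10 // 100  # default vm.dirty_background_ratio = 10%

print(dirty_bytes)             # 1717986918 (~1.6 GiB)
print(dirty_background_bytes)  # 858993459 (~0.8 GiB)
```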
<br />
'''Notes'''<br><br />
* Higher values may increase performance (more writable data is buffered in memory), but it also increases the risk of data loss (the data in memory will disappear in case of a crash).<br />
* Setting zero as the value may cause high and/or frequent latency spikes (because all data is written to disk immediately, during which other I/O is disturbed).<br />
* Using a value such as 10% of RAM might be sane if there's around 1GiB of RAM (10% of 1GiB is ~100MiB). On machines with much more RAM (around 16GiB, where 10% of 16GiB is ~1.7GiB), the amount becomes out of proportion, as the writeback takes several seconds to complete. Consider adjusting according to the amount of RAM on a particular system.<br />
<br />
'''More reading'''<br><br />
* [https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/ Better Linux Disk Caching & Performance with vm.dirty_ratio & vm.dirty_background_ratio]<br />
<br />
=== VFS cache ===<br />
<br />
Decreasing the VFS (Virtual File System) cache parameter value may improve system responsiveness:<br />
<br />
* {{ic|1=vm.vfs_cache_pressure = 50}}<br />
: The value controls the tendency of the kernel to reclaim the memory which is used for caching of directory and inode objects (VFS cache). Lowering it from the default value of 100 makes the kernel less inclined to reclaim VFS cache (do not set it to 0, this may produce out-of-memory conditions).<br />
<br />
== MDADM ==<br />
<br />
When the kernel performs a resync operation on a software RAID device, it tries not to create a high system load by restricting the speed of the operation. Using sysctl it is possible to change the lower and upper speed limits.<br />
<br />
Set maximum and minimum speed of raid resyncing operations:<br />
<br />
dev.raid.speed_limit_max = 10000<br />
dev.raid.speed_limit_min = 1000<br />
<br />
If mdadm is compiled as a module ({{ic|md_mod}}), the above settings are available only after the module has been loaded. If the settings are to be applied at boot via {{ic|/etc/sysctl.d/}}, the {{ic|md_mod}} module can be loaded beforehand through {{ic|/etc/modules-load.d/}}.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Long system freezes while swapping to disk ===<br />
<br />
Increase {{ic|vm.min_free_kbytes}} to improve desktop responsiveness and reduce long pauses due to swapping to disk. One should increase this to {{ic|installed_mem / num_of_cores * 0.05}}. See [https://askubuntu.com/a/45009] and [https://www.linbit.com/en/kernel-min_free_kbytes/] for more details.<br />
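Applying the suggested formula with integer arithmetic (assuming {{ic|installed_mem}} is expressed in KiB; the 16 GiB / 8-core machine below is just an example):<br />

```python
installed_mem = 16 * 1024 * 1024  # 16 GiB of RAM, expressed in KiB
num_of_cores = 8

# installed_mem / num_of_cores * 0.05, kept in integers
min_free_kbytes = installed_mem * 5 // (num_of_cores * 100)
print(min_free_kbytes)  # 104857
```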
<br />
== See also ==<br />
<br />
* {{man|8|sysctl}} and {{man|5|sysctl.conf}}<br />
* [https://www.kernel.org/doc/Documentation/sysctl/ Linux kernel documentation for /proc/sys/]<br />
* Kernel Documentation: [https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt IP Sysctl]<br />
* [http://tldp.org/HOWTO/Adv-Routing-HOWTO/lartc.kernel.html Kernel network parameters for sysctl]<br />
* [https://sysctl-explorer.net sysctl-explorer.net – an initiative to facilitate the access of Linux' sysctl reference documentation]<br />
* [https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-server_security-disable-source-routing Disable Source Routing - Red Hat Customer Portal]<br />
* [https://www.suse.com/documentation/sles11/book_hardening/data/sec_sec_prot_general_kernel.html SUSE handbook about Security Features in the Kernel]</div>
<hr />
<div>{{Lowercase title}}<br />
[[Category:Kernel]]<br />
[[Category:Commands]]<br />
[[ja:Sysctl]]<br />
[[Wikipedia:sysctl|sysctl]] is a tool for examining and changing [[kernel parameters]] at runtime (package {{Pkg|procps-ng}} in [[official repositories]]). sysctl is implemented in [[Wikipedia:procfs|procfs]], the virtual process file system at {{ic|/proc/}}.<br />
<br />
== Configuration ==<br />
<br />
{{Note|From version 207 and 21x, [[systemd]] only applies settings from {{ic|/etc/sysctl.d/*.conf}} and {{ic|/usr/lib/sysctl.d/*.conf}}. If you had customized {{ic|/etc/sysctl.conf}}, you need to rename it as {{ic|/etc/sysctl.d/99-sysctl.conf}}. If you had e.g. {{ic|/etc/sysctl.d/foo}}, you need to rename it to {{ic|/etc/sysctl.d/foo.conf}}.}}<br />
<br />
The '''sysctl''' preload/configuration file can be created at {{ic|/etc/sysctl.d/99-sysctl.conf}}. For [[systemd]], {{ic|/etc/sysctl.d/}} and {{ic|/usr/lib/sysctl.d/}} are drop-in directories for kernel sysctl parameters. The naming and source directory decide the order of processing, which is important since the last parameter processed may override earlier ones. For example, parameters in a {{ic|/usr/lib/sysctl.d/50-default.conf}} will be overriden by equal parameters in {{ic|/etc/sysctl.d/50-default.conf}} and any configuration file processed later from both directories. <br />
<br />
To load all configuration files manually, execute:<br />
<br />
# sysctl --system <br />
<br />
which will also output the applied hierarchy. A single parameter file can also be loaded explicitly with:<br />
<br />
# sysctl --load=''filename.conf''<br />
<br />
See [http://0pointer.de/blog/projects/the-new-configuration-files the new configuration files] and more specifically {{man|5|sysctl.d}} for more information.<br />
<br />
The parameters available are those listed under {{ic|/proc/sys/}}. For example, the {{ic|kernel.sysrq}} parameter refers to the file {{ic|/proc/sys/kernel/sysrq}} on the file system. The {{ic|sysctl --all}} command can be used to display all currently available values.<br />
<br />
{{Note|If you have the kernel documentation installed ({{Pkg|linux-docs}}), you can find detailed information about sysctl settings in {{ic|/usr/lib/modules/$(uname -r)/build/Documentation/sysctl/}}. It is highly recommended reading these before changing sysctl settings.}}<br />
<br />
Settings can be changed through file manipulation or using the {{ic|sysctl}} utility. For example, to temporarily enable the [[Wikipedia:Magic SysRq key|magic SysRq key]]:<br />
<br />
# sysctl kernel.sysrq=1<br />
<br />
or:<br />
<br />
# echo "1" > /proc/sys/kernel/sysrq<br />
<br />
See [https://www.kernel.org/doc/html/latest/admin-guide/sysrq.html Linux kernel documentation] for details about {{ic|kernel.sysrq}}.<br />
<br />
To preserve changes between reboots, add or modify the appropriate lines in {{ic|/etc/sysctl.d/99-sysctl.conf}} or another applicable parameter file in {{ic|/etc/sysctl.d/}}.<br />
<br />
{{Tip|Some parameters that can be applied may depend on kernel modules which in turn might not be loaded. For example parameters in {{ic|/proc/sys/net/bridge/*}} depend on the {{ic|br_netfilter}} module. If it is not loaded at runtime (or after a reboot), those will ''silently'' not be applied. See [[Kernel modules]].}}<br />
<br />
== Security ==<br />
<br />
See [[Security#Kernel hardening]].<br />
<br />
== Networking ==<br />
<br />
=== Improving performance ===<br />
<br />
==== Increasing the size of the receive queue. ====<br />
<br />
The received frames will be stored in this queue after taking them from the ring buffer on the network card.<br />
<br />
Increasing this value for high speed cards may help prevent losing packets:<br />
<br />
net.core.netdev_max_backlog = 100000<br />
net.core.netdev_budget = 50000<br />
net.core.netdev_budget_usecs = 5000<br />
<br />
{{Note|In real time application like SIP routers, this option requires a high speed CPU otherwise the data in the queue will be out of date.}}<br />
<br />
==== Increase the maximum connections ====<br />
<br />
The upper limit on how many connections the kernel will accept (default 128):<br />
<br />
net.core.somaxconn = 1024<br />
<br />
{{Warning|Increasing this value may only increase performance on high-loaded servers and may cause as slow processing rate (e.g. a single threaded blocking server) or insufficient number of worker threads/processes [https://serverfault.com/questions/518862/will-increasing-net-core-somaxconn-make-a-difference/519152].}}<br />
<br />
==== Increase the memory dedicated to the network interfaces ====<br />
<br />
The default the Linux network stack is not configured for high speed large file transfer across WAN links (i.e. handle more network packets) and setting the correct values may save memory resources:<br />
<br />
net.core.rmem_default = 1048576<br />
net.core.rmem_max = 16777216<br />
net.core.wmem_default = 1048576<br />
net.core.wmem_max = 16777216<br />
net.core.optmem_max = 65536<br />
net.ipv4.tcp_rmem = 4096 1048576 2097152<br />
net.ipv4.tcp_wmem = 4096 65536 16777216<br />
<br />
It is also possible increase the default {{ic|4096}} UDP limits:<br />
<br />
net.ipv4.udp_rmem_min = 8192<br />
net.ipv4.udp_wmem_min = 8192<br />
<br />
See the following sources for more information and recommend values:<br />
<br />
* http://www.nateware.com/linux-network-tuning-for-2013.html<br />
* https://blog.cloudflare.com/the-story-of-one-latency-spike/<br />
<br />
==== Enable TCP Fast Open ====<br />
<br />
{{Expansion|Mention the option to "enable all listeners to support Fast Open by default without explicit TCP_FASTOPEN socket option", i.e. value {{ic|1027}} (0x1+0x2+0x400).}}<br />
<br />
TCP Fast Open is an extension to the transmission control protocol (TCP) that helps reduce network latency by enabling data to be exchanged during the sender’s initial TCP SYN [https://www.keycdn.com/support/tcp-fast-open/]. Using the value {{ic|3}} instead of the default {{ic|1}} allows TCP Fast Open for both incoming and outgoing connections:<br />
<br />
net.ipv4.tcp_fastopen = 3<br />
<br />
==== Tweak the pending connection handling ====<br />
<br />
{{ic|tcp_max_syn_backlog}} is the maximum queue length of pending connections 'Waiting Acknowledgment'.<br />
<br />
In the event of a synflood DOS attack, this queue can fill up pretty quickly, at which point [[Wikipedia:SYN cookies|TCP SYN cookies]] will kick in allowing your system to continue to respond to legitimate traffic, and allowing you to gain access to block malicious IPs.<br />
<br />
If the server suffers from overloads at peak times, you may want to increase this value a little bit:<br />
<br />
net.ipv4.tcp_max_syn_backlog = 30000<br />
<br />
{{ic|tcp_max_tw_buckets}} is the maximum number of sockets in TIME_WAIT state.<br />
<br />
After reaching this number the system will start destroying the socket that are in this state.<br />
<br />
Increase this to prevent simple DOS attacks:<br />
<br />
net.ipv4.tcp_max_tw_buckets = 2000000<br />
<br />
{{ic|tcp_tw_reuse}} sets whether TCP should reuse an existing connection in the TIME-WAIT state for a new outgoing connection if the new timestamp is strictly bigger than the most recent timestamp recorded for the previous connection.<br />
<br />
This helps avoid from running out of available network sockets:<br />
<br />
net.ipv4.tcp_tw_reuse = 1<br />
<br />
Specify how many seconds to wait for a final FIN packet before the socket is forcibly closed. This is strictly a violation of the TCP specification, but required to prevent denial-of-service attacks. In Linux 2.2, the default value was 180 [https://access.redhat.com/solutions/41776]:<br />
<br />
net.ipv4.tcp_fin_timeout = 10<br />
<br />
{{ic|tcp_slow_start_after_idle}} sets whether TCP should start at the default window size only for new connections or also for existing connections that have been idle for too long.<br />
<br />
This setting kills persistent single connection performance and could be turned off:<br />
<br />
net.ipv4.tcp_slow_start_after_idle = 0<br />
<br />
==== Change TCP keepalive parameters ====<br />
<br />
[[Wikipedia:Keepalive#TCP keepalive|TCP keepalive]] is a mechanism for TCP connections that help to determine whether the other end has stopped responding or not. TCP will send the keepalive probe that contains null data to the network peer several times after a period of idle time. If the peer does not respond, the socket will be closed automatically. By default, TCP keepalive process waits for two hours (7200 secs) for socket activity before sending the first keepalive probe, and then resend it every 75 seconds. As long as there is TCP/IP socket communications going on and active, no keepalive packets are needed.<br />
<br />
{{Note|With the following settings, your application will detect dead TCP connections after 120 seconds (60s + 10s + 10s + 10s + 10s + 10s + 10s).}}<br />
<br />
net.ipv4.tcp_keepalive_time = 60<br />
net.ipv4.tcp_keepalive_intvl = 10<br />
net.ipv4.tcp_keepalive_probes = 6<br />
<br />
==== Enable MTU probing ====<br />
<br />
The longer the [[Wikipedia:Maximum transmission unit|maximum transmission unit (MTU)]] the better for performance, but the worse for reliability.<br />
<br />
This is because a lost packet means more data to be retransmitted and because many routers on the Internet cannot deliver very long packets:<br />
<br />
net.ipv4.tcp_mtu_probing = 1<br />
<br />
See https://blog.cloudflare.com/path-mtu-discovery-in-practice/ for more information.<br />
<br />
==== TCP timestamps ====<br />
<br />
{{Warning|TCP timestamps protect against wrapping sequence numbers (at gigabit speeds) and round trip time calculation implemented in TCP. It is not recommended to turn off TCP timestamps as it may cause a security risk [https://access.redhat.com/sites/default/files/attachments/20150325_network_performance_tuning.pdf].}}<br />
<br />
Disabling timestamp generation will reduce spikes and may give a performance boost on gigabit networks:<br />
<br />
net.ipv4.tcp_timestamps = 0<br />
<br />
==== Enable BBR ====<br />
<br />
The [[Wikipedia:TCP congestion control#TCP BBR|BBR congestion control algorithm]] can help achieve higher bandwidths and lower latencies for internet traffic.<br />
First, load the {{ic|tcp_bbr}} module.<br />
<br />
net.core.default_qdisc = fq<br />
net.ipv4.tcp_congestion_control = bbr<br />
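To make this persist across reboots, the module load and the parameters can be split across the drop-in directories described in [[#Configuration]]. A sketch, with hypothetical file names:<br />

```
# /etc/modules-load.d/bbr.conf (illustrative name) -- loads tcp_bbr at boot,
# before sysctl.d settings are applied
tcp_bbr

# /etc/sysctl.d/98-bbr.conf (illustrative name)
net.core.default_qdisc = fq
net.ipv4.tcp_congestion_control = bbr
```

After a reboot, the active algorithm can be verified with {{ic|sysctl net.ipv4.tcp_congestion_control}}.<br />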
<br />
=== TCP/IP stack hardening ===<br />
<br />
The following parameters tighten the network security options of the kernel for the IPv4 protocol, along with related IPv6 parameters where an equivalent exists.<br />
<br />
For some use-cases, for example using the system as a [[router]], other parameters may be useful or required as well. <br />
<br />
==== TCP SYN cookie protection ====<br />
<br />
Helps protect against SYN flood attacks. Only kicks in when {{ic|net.ipv4.tcp_max_syn_backlog}} is reached:<br />
<br />
net.ipv4.tcp_syncookies = 1<br />
<br />
==== TCP rfc1337 ====<br />
<br />
{{Accuracy|This does not seem to be part of the TCP standard? The description may not be accurate. [https://serverfault.com/questions/787624/why-isnt-net-ipv4-tcp-rfc1337-enabled-by-default]|section=net.ipv4.tcp_rfc1337}}<br />
<br />
Protects against TCP time-wait assassination hazards by dropping RST packets for sockets in the TIME_WAIT state. Not widely supported outside of Linux, but conforms to the RFC:<br />
<br />
net.ipv4.tcp_rfc1337 = 1<br />
<br />
==== Reverse path filtering ====<br />
<br />
By enabling reverse path filtering, the kernel will do source validation of packets received from all interfaces on the machine. This can protect against attackers using IP spoofing.<br />
<br />
The kernel's default value is {{ic|0}} (no source validation), but systemd ships {{ic|/usr/lib/sysctl.d/50-default.conf}} that sets {{ic|net.ipv4.conf.all.rp_filter}} to {{ic|2}} (loose mode)[https://github.com/systemd/systemd/pull/10971].<br />
<br />
The following will set the reverse path filtering mechanism to value {{ic|1}} (strict mode):<br />
<br />
net.ipv4.conf.default.rp_filter = 1<br />
net.ipv4.conf.all.rp_filter = 1<br />
<br />
The relationship and behavior of {{ic|net.ipv4.conf.default.*}}, {{ic|net.ipv4.conf.''interface''.*}} and {{ic|net.ipv4.conf.all.*}} is explained in [https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt ip-sysctl.txt].<br />
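For {{ic|rp_filter}} specifically, ip-sysctl.txt states that the maximum of {{ic|conf/all}} and the per-interface value is used when validating packets on an interface ({{ic|conf/default}} only seeds newly created interfaces). A small sketch of that rule:<br />

```python
# Effective rp_filter on an interface is the max of the "all" value and
# the per-interface value (per ip-sysctl.txt); 0 = off, 1 = strict, 2 = loose.
def effective_rp_filter(all_value, interface_value):
    return max(all_value, interface_value)

# systemd's shipped loose mode (2) combined with a strict per-interface
# setting (1) still yields loose mode, since 2 > 1:
print(effective_rp_filter(2, 1))  # 2
# With both set to 1, as in the snippet above, strict mode applies:
print(effective_rp_filter(1, 1))  # 1
```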
<br />
==== Log martian packets ====<br />
<br />
A [[Wikipedia:Martian packet|martian packet]] is an IP packet which specifies a source or destination address that is reserved for special-use by Internet Assigned Numbers Authority (IANA). See [[wikipedia:Reserved_IP_addresses|Reserved IP addresses]] for more details.<br />
<br />
Martian and unroutable packets are often used for malicious purposes. Logging these packets for further inspection may be useful [https://www.cyberciti.biz/faq/linux-log-suspicious-martian-packets-un-routable-source-addresses/]:<br />
<br />
net.ipv4.conf.default.log_martians = 1<br />
net.ipv4.conf.all.log_martians = 1<br />
<br />
{{Note|This can fill up your logs with a lot of information; it is advisable to enable it only for testing.}}<br />
<br />
==== Disable ICMP redirects ====<br />
<br />
To disable ICMP redirect acceptance:<br />
<br />
net.ipv4.conf.all.accept_redirects = 0<br />
net.ipv4.conf.default.accept_redirects = 0<br />
net.ipv4.conf.all.secure_redirects = 0<br />
net.ipv4.conf.default.secure_redirects = 0<br />
net.ipv6.conf.all.accept_redirects = 0<br />
net.ipv6.conf.default.accept_redirects = 0<br />
<br />
To disable ICMP redirect sending on a non-router:<br />
<br />
net.ipv4.conf.all.send_redirects = 0<br />
net.ipv4.conf.default.send_redirects = 0<br />
<br />
==== Ignore ICMP echo requests ====<br />
<br />
To ignore ICMP echo (i.e. ping) requests:<br />
<br />
net.ipv4.icmp_echo_ignore_all = 1<br />
net.ipv6.icmp.echo_ignore_all = 1<br />
<br />
{{Note|Beware this may cause issues with monitoring tools and/or applications relying on ICMP echo responses.}}<br />
<br />
=== Other ===<br />
<br />
==== Allow unprivileged users to create IPPROTO_ICMP sockets ====<br />
<br />
The IPPROTO_ICMP socket type makes it possible to send ICMP_ECHO messages and receive the corresponding ICMP_ECHOREPLY messages without opening a raw socket, an operation that requires the CAP_NET_RAW capability or the SUID bit with a properly privileged owner. These ICMP_ECHO messages are sent by the ping application, which is why an IPPROTO_ICMP socket is also known as a ping socket, in addition to ICMP Echo socket.<br />
<br />
{{ic|ping_group_range}} determines the range of group IDs whose members are allowed to create IPPROTO_ICMP sockets. Additionally, a process holding the CAP_NET_RAW capability is also allowed to create IPPROTO_ICMP sockets.<br/>By default this range is {{ic|1 0}}, which means no one except root is allowed to create IPPROTO_ICMP sockets.<br/><br />
To take advantage of this setting, programs that currently use raw sockets need to be ported to IPPROTO_ICMP sockets instead.<br/>For example, QEMU uses IPPROTO_ICMP for SLIRP, aka User-mode networking, so allowing the user running QEMU to create IPPROTO_ICMP sockets makes it possible to ping from the guest.<br />
<br />
To allow only users which are members of the group with GID 100 to create IPPROTO_ICMP sockets:<br />
<br />
net.ipv4.ping_group_range = 100 100<br />
<br />
To allow all the users in the system to create IPPROTO_ICMP sockets:<br />
<br />
net.ipv4.ping_group_range = 0 65535<br />
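Whether the current user may create such a socket can be probed from user space. A minimal sketch (the function name is illustrative; the result depends on the running system's {{ic|ping_group_range}} and capabilities):<br />

```python
import socket

def can_create_ping_socket():
    """Try to create an unprivileged IPPROTO_ICMP (ping) socket.

    This succeeds only if one of the current user's groups falls inside
    net.ipv4.ping_group_range, or the process holds CAP_NET_RAW."""
    try:
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_ICMP)
        s.close()
        return True
    except OSError:
        return False

print(can_create_ping_socket())
```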
<br />
== Virtual memory ==<br />
<br />
There are several key parameters to tune the operation of the virtual memory (VM) subsystem of the Linux kernel and the write out of dirty data to disk. See the official [https://www.kernel.org/doc/Documentation/sysctl/vm.txt Linux kernel documentation] for more information. For example:<br />
<br />
* {{ic|1=vm.dirty_ratio = 10}}<br />
: Contains, as a percentage of total available memory that contains free pages and reclaimable pages, the number of pages at which a process which is generating disk writes will itself start writing out dirty data.<br />
<br />
* {{ic|1=vm.dirty_background_ratio = 5}}<br />
: Contains, as a percentage of total available memory that contains free pages and reclaimable pages, the number of pages at which the background kernel flusher threads will start writing out dirty data.<br />
<br />
As noted in the comments for the parameters, one needs to consider the total amount of RAM when setting these values. For example, simplifying by taking the installed system RAM instead of available memory:<br />
<br />
{{Warning|<br />
* Higher ratio values may increase performance, but they also increase the risk of data loss.<br />
* Setting these values to {{ic|0}} may cause higher disk latency and latency spikes.<br />
See https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/ for more information.<br />
}}<br />
<br />
* Consensus is that setting {{ic|vm.dirty_ratio}} to 10% of RAM is a sane value if RAM is say 1 GB (so 10% is {{#expr: 1000/10 round 0}} MB). But if the machine has much more RAM, say 16 GB (10% is {{#expr: 16/10 round 1 }} GB), the percentage may be out of proportion as it becomes several seconds of writeback on spinning disks. A more sane value in this case may be {{ic|3}} (3% of 16 GB is approximately 491 MB).<br />
* Similarly, setting {{ic|vm.dirty_background_ratio}} to {{ic|5}} may be just fine for small memory values, but again, consider and adjust accordingly for the amount of RAM on a particular system.<br />
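The sizes quoted in these bullets follow from simple percentage arithmetic, treating the ratio as a share of installed RAM (a simplification, as noted above):<br />

```python
# Rough writeback buffer size implied by a dirty ratio, in MiB,
# treating the percentage as a share of installed RAM.
def dirty_limit_mib(ram_gib, ratio_percent):
    return ram_gib * 1024 * ratio_percent / 100

print(dirty_limit_mib(1, 10))   # 102.4 MiB on a 1 GiB machine
print(dirty_limit_mib(16, 10))  # 1638.4 MiB (~1.6 GiB) on a 16 GiB machine
print(dirty_limit_mib(16, 3))   # 491.52 MiB, the "approximately 491 MB" above
```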
<br />
=== VFS cache ===<br />
<br />
Decreasing the VFS (Virtual File System) cache parameter value may improve system responsiveness:<br />
<br />
* {{ic|1=vm.vfs_cache_pressure = 50}}<br />
: The value controls the tendency of the kernel to reclaim the memory which is used for caching of directory and inode objects (VFS cache). Lowering it from the default value of 100 makes the kernel less inclined to reclaim VFS cache (do not set it to 0, this may produce out-of-memory conditions).<br />
<br />
== MDADM ==<br />
<br />
When the kernel performs a resync operation on a software RAID device, it tries not to create a high system load by restricting the speed of the operation. Using sysctl, it is possible to change the lower and upper speed limits.<br />
<br />
Set the maximum and minimum speed of RAID resyncing operations:<br />
<br />
dev.raid.speed_limit_max = 10000<br />
dev.raid.speed_limit_min = 1000<br />
<br />
If mdadm is compiled as the module {{ic|md_mod}}, the above settings are available only after the module has been loaded. If the settings are to be applied on boot via {{ic|/etc/sysctl.d/}}, the {{ic|md_mod}} module can be loaded beforehand through {{ic|/etc/modules-load.d/}}.<br />
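A sketch of that arrangement (the file names are illustrative, not fixed conventions):<br />

```
# /etc/modules-load.d/md_mod.conf -- ensures md_mod is loaded
# before sysctl.d settings are applied at boot
md_mod

# /etc/sysctl.d/90-raid.conf
dev.raid.speed_limit_max = 10000
dev.raid.speed_limit_min = 1000
```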
<br />
== Troubleshooting ==<br />
<br />
=== Small periodic system freezes ===<br />
<br />
Set the dirty bytes parameters to a small enough value (for example 4 MiB):<br />
<br />
vm.dirty_background_bytes = 4194304<br />
vm.dirty_bytes = 4194304<br />
<br />
{{Note|The {{ic|dirty_background_bytes}} and {{ic|dirty_bytes}} parameters are counterparts of {{ic|dirty_background_ratio}} and {{ic|dirty_ratio}} (as seen in [[#Virtual memory]]). Only one parameter of each pair may be specified at a time; writing one zeroes its counterpart.}}<br />
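The example value is simply 4 MiB expressed in bytes; a quick check:<br />

```python
# dirty_bytes is given in bytes; the example value is exactly 4 MiB.
dirty_bytes = 4 * 1024 * 1024
print(dirty_bytes)  # 4194304
```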
<br />
=== Long system freezes while swapping to disk ===<br />
<br />
Increase {{ic|vm.min_free_kbytes}} to improve desktop responsiveness and reduce long pauses due to swapping to disk. One should increase this to {{ic|installed_mem / num_of_cores * 0.05}}. See [https://askubuntu.com/a/45009] and [https://www.linbit.com/en/kernel-min_free_kbytes/] for more details.<br />
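The suggested formula can be evaluated directly. A sketch for a hypothetical machine (the function name and example figures are illustrative):<br />

```python
# Suggested value from the formula above: installed_mem / num_of_cores * 0.05,
# with installed memory given in KiB (the unit vm.min_free_kbytes uses).
def suggested_min_free_kbytes(installed_mem_kib, num_cores):
    return int(installed_mem_kib / num_cores * 0.05)

# e.g. 8 GiB of RAM and 4 cores:
print(suggested_min_free_kbytes(8 * 1024 * 1024, 4))  # 104857
```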
<br />
== See also ==<br />
<br />
* {{man|8|sysctl}} and {{man|5|sysctl.conf}}<br />
* [https://www.kernel.org/doc/Documentation/sysctl/ Linux kernel documentation for /proc/sys/]<br />
* Kernel Documentation: [https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt IP Sysctl]<br />
* [http://tldp.org/HOWTO/Adv-Routing-HOWTO/lartc.kernel.html Kernel network parameters for sysctl]<br />
* [https://sysctl-explorer.net sysctl-explorer.net – an initiative to facilitate the access of Linux' sysctl reference documentation]<br />
* [https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/security_guide/sect-security_guide-server_security-disable-source-routing Disable Source Routing - Red Hat Customer Portal]<br />
* [https://www.suse.com/documentation/sles11/book_hardening/data/sec_sec_prot_general_kernel.html SUSE handbook about Security Features in the Kernel]</div>Gimahttps://wiki.archlinux.org/index.php?title=Talk:Sysctl&diff=593588Talk:Sysctl2019-12-30T15:00:49Z<p>Gima: /* Troubleshooting: Small periodic system freezes */ add note about "io_delay_type"</p><br />
<hr />
<div>== net.ipv4.tcp_rfc1337 ==<br />
<br />
From [https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt kernel doc]:<br />
<br />
{{bc|<br />
tcp_rfc1337 - BOOLEAN<br />
If set, the TCP stack behaves conforming to RFC1337. If unset,<br />
we are not conforming to RFC, but prevent TCP TIME_WAIT<br />
assassination.<br />
Default: 0<br />
}}<br />
<br />
So, isn't {{ic|0}} the safe value? Our wiki says otherwise. -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 08:56, 17 September 2013 (UTC)<br />
:With setting {{ic|0}} the system would 'assassinate' a socket in time_wait prematurely upon receiving a RST. While this might sound like a good idea (it frees up a socket quicker), it opens the door for tcp sequence problems/syn replay. Those problems were described in RFC1337 and enabling the setting {{ic|1}} is one way to deal with them (letting TIME_WAIT packets idle out even if a reset is received, so that the sequence number cannot be reused meanwhile). The wiki is correct in my view. <s>Kernel doc is wrong here - "prevent" should read "enable".</s> --[[User:Indigo|Indigo]] ([[User talk:Indigo|talk]]) 21:12, 17 September 2013 (UTC)<br />
<br />
:Since this discussion is still open: An interesting attack to the kernels implementation of the related RFC5961 was published yesterday under [https://www.usenix.org/conference/usenixsecurity16/technical-sessions/presentation/cao cve2016-5696]. I have not looked into it enough to form an opinion whether leaving default {{ic|0}} or {{ic|1}} for this setting makes any difference to that, but it is exactly the kind of sequencing attack I was referring to three years back. --[[User:Indigo|Indigo]] ([[User talk:Indigo|talk]]) 08:38, 11 August 2016 (UTC)<br />
<br />
::Any news about this? Has anyone performed more research and analysis on this? [[User:Timofonic|Timofonic]] ([[User talk:Timofonic|talk]]) 17:51, 31 July 2017 (UTC)<br />
<br />
== Virtual memory ==<br />
<br />
The official documentation states that these two variables "Contain[s], as a percentage of total available memory that contains free pages<br />
and reclaimable pages,..." and that "The total available memory is not equal to total system memory.". However, the comment underneath talks about them as if they were a percentage of ''system'' memory, making it quite confusing; e.g. I have 6 GiB of system memory but only 1-2 GiB available.<br />
<br />
Also the defaults seem to have changed, I have {{ic|1=dirty_ratio=50}} and {{ic|1=dirty_background_ratio=20}}.<br />
<br />
-- [[User:DoctorJellyface|DoctorJellyface]] ([[User talk:DoctorJellyface|talk]]) 08:27, 8 August 2016 (UTC)<br />
<br />
:Yes, I agree. When I changed the section a little with [https://wiki.archlinux.org/index.php?title=Sysctl&type=revision&diff=435336&oldid=422169], I left the comment. The reason was that while it simplifies in current form, expanding it to show the difference between system memory and available memory first and only then calculate the percentages makes it cumbersome/complicated to follow. If you have an idea how to do it, please go ahead. --[[User:Indigo|Indigo]] ([[User talk:Indigo|talk]]) 09:07, 8 August 2016 (UTC)<br />
:: I would like this to be explained as an "introduction" to both concepts to avoid misconfiguration. I think I somewhat understand it, but I have some caveats about it (available memory after booting, pre- or post-systemd? available memory while using the system? etc.). Although there may be documents explaining it, this would make the article friendlier to read. Of course, there can be links to more specific documents to learn more about it. [[User:Timofonic|Timofonic]] ([[User talk:Timofonic|talk]]) 18:14, 31 July 2017 (UTC)<br />
<br />
::The problem is that the kernel docs don't explain what "available memory" really means. Assuming that it changes similarly to what {{ic|free}} shows, taking the system memory instead is still useful to prepare for the worst case. -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 09:11, 8 August 2016 (UTC)<br />
<br />
:::Yes, worst case also because "available" should grow disproportionately: some slices, like system memory reserved for the BIOS or GPU, will not change regardless of total installed RAM. I've had my go with [https://wiki.archlinux.org/index.php?title=Sysctl&type=revision&diff=445976&oldid=445672]. --[[User:Indigo|Indigo]] ([[User talk:Indigo|talk]]) 07:54, 9 August 2016 (UTC)<br />
:::: I'm still not sure about certain parameters: Default examples are provided, but not all of them are explained in terms of why these numbers are used and how they can be calculated for different use cases. I may be wrong, but this article should provide a more comprehensive and pedagogical explanation of each concept compared to the Linux kernel documentation (which I assume is more focused on developers), explaining each of the best "default" values and how to tune them depending on system usage. From my limited perspective, I would like each parameter to take into account different types of systems: desktop (average; low latency at interactive operations; low latency at interactive operations while also running intensive software (compiling with GCC/LLVM/whatever, tons of windows/tabs in a web browser, big apps running on an interpreter/bytecode runtime such as Python/Java/Mono, etc.)), server (server with some interactivity: providing HTPC features). I have no more ideas but just lots of questions, sorry. I hope someone with more knowledge is able to discuss this and provide some more detailed information at least. Thanks for your efforts, the Arch community is a great place to be [[User:Timofonic|Timofonic]] ([[User talk:Timofonic|talk]]) 18:14, 31 July 2017 (UTC)<br />
<br />
== added vfs_cache_pressure parameter ==<br />
<br />
let me know if it's OK<br />
--[[User:NTia89|nTia89]] ([[User talk:NTia89|talk]]) 18:15, 26 December 2016 (UTC)<br />
<br />
:Fine with me. Cheers. --[[User:Indigo|Indigo]] ([[User talk:Indigo|talk]]) 15:05, 27 December 2016 (UTC)<br />
: That's okay, thanks a lot for adding it. But why did you choose 60 as the value? What's the logic behind this? Might it need to be changed depending on how the system is used? And if so, how can one be sure to change it correctly? [[User:Timofonic|Timofonic]] ([[User talk:Timofonic|talk]]) 18:18, 31 July 2017 (UTC)<br />
<br />
<br />
== Troubleshooting: Small periodic system freezes ==<br />
<br />
This is something that happens occasionally on my system, especially when having a considerable amount of tabs open in different windows (50-70 tabs across 4-5 windows, for example).<br />
<br />
- Dirty bytes: Why use the 4M value? Is there an explanation about this? Can it be fine-tuned? What does it mean?<br />
- Change kernel.io_delay_type: There's a list of different ones, but zero explanation about them. What does each one mean? How can it change the behaviour of the system? How can one find the best one for the system?<br />
<br />
Sorry for asking too much; I'm trying to understand certain concepts that are still difficult for me. I'm sorry if there are already good sources about them, I was unable to locate them. Thanks for your patience.<br />
<br />
[[User:Timofonic|Timofonic]] ([[User talk:Timofonic|talk]]) 18:27, 31 July 2017 (UTC)<br />
<br />
: About the "io_delay_type". It apparently has something to do with hardware accesses and nothing to do with kernel stuff.<br />
: * [https://lwn.net/Articles/263418/ LWN: x86: provide a DMI based port 0x80 I/O delay override]<br />
: * https://elixir.bootlin.com/linux/v5.5-rc4/source/arch/x86/kernel/io_delay.c<br />
:<br />
: [[User:Gima|Gima]] ([[User talk:Gima|talk]]) 15:00, 30 December 2019 (UTC)<br />
<br />
== <s>/etc/sysctl.conf</s> ==<br />
<br />
The Wiki says:<br />
Note: From version 207 and 21x, systemd only applies settings from /etc/sysctl.d/*.conf and /usr/lib/sysctl.d/*.conf. If you had customized /etc/sysctl.conf, you need to rename it as /etc/sysctl.d/99-sysctl.conf. If you had e.g. /etc/sysctl.d/foo, you need to rename it to /etc/sysctl.d/foo.conf.<br />
<br />
But:<br />
$ sudo sysctl --system <br />
* Applying /usr/lib/sysctl.d/10-arch.conf ...<br />
fs.inotify.max_user_instances = 1024<br />
fs.inotify.max_user_watches = 524288<br />
* Applying /usr/lib/sysctl.d/50-coredump.conf ...<br />
kernel.core_pattern = |/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h<br />
* Applying /usr/lib/sysctl.d/50-default.conf ...<br />
kernel.sysrq = 16<br />
kernel.core_uses_pid = 1<br />
net.ipv4.conf.all.rp_filter = 2<br />
net.ipv4.conf.all.accept_source_route = 0<br />
net.ipv4.conf.all.promote_secondaries = 1<br />
net.core.default_qdisc = fq_codel<br />
fs.protected_hardlinks = 1<br />
fs.protected_symlinks = 1<br />
fs.protected_regular = 1<br />
fs.protected_fifos = 1<br />
* Applying /usr/lib/sysctl.d/50-pid-max.conf ...<br />
kernel.pid_max = 4194304<br />
* Applying /etc/sysctl.conf ...<br />
net.ipv4.conf.default.rp_filter = 1<br />
net.ipv4.conf.all.rp_filter = 1<br />
<br />
$ sysctl net.ipv4.conf.all.rp_filter<br />
net.ipv4.conf.all.rp_filter = 1<br />
<br />
So, /etc/sysctl.conf is applied correctly<br />
<br />
[[User:Bird-or-cage|Bird-or-cage]] ([[User talk:Bird-or-cage|talk]]) 13:29, 10 December 2019 (UTC)<br />
<br />
:But systemd's {{ic|systemd-sysctl.service}} does not invoke {{ic|sysctl --system}}, it runs {{ic|/usr/lib/systemd/systemd-sysctl}}... -- [[User:Lahwaacz|Lahwaacz]] ([[User talk:Lahwaacz|talk]]) 14:19, 10 December 2019 (UTC)</div>Gimahttps://wiki.archlinux.org/index.php?title=QEMU&diff=573462QEMU2019-05-18T11:46:51Z<p>Gima: /* Troubleshooting */ Add solution to high interrupt latency and microstuttering</p>
<hr />
<div>[[Category:Emulation]]<br />
[[Category:Hypervisors]]<br />
[[de:Qemu]]<br />
[[es:QEMU]]<br />
[[fr:Qemu]]<br />
[[ja:QEMU]]<br />
[[ru:QEMU]]<br />
[[zh-hans:QEMU]]<br />
[[zh-hant:QEMU]]<br />
{{Related articles start}}<br />
{{Related|:Category:Hypervisors}}<br />
{{Related|Libvirt}}<br />
{{Related|QEMU/Guest graphics acceleration}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related articles end}}<br />
<br />
According to the [http://wiki.qemu.org/Main_Page QEMU about page], "QEMU is a generic and open source machine emulator and virtualizer."<br />
<br />
When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your x86 PC). By using dynamic translation, it achieves very good performance.<br />
<br />
QEMU can use other hypervisors like [[Xen]] or [[KVM]] to use CPU extensions ([[Wikipedia:Hardware-assisted virtualization|HVM]]) for virtualization. When used as a virtualizer, QEMU achieves near-native performance by executing the guest code directly on the host CPU.<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|qemu}} package (or {{Pkg|qemu-headless}} for the version without GUI) and any of the optional packages below, as needed:<br />
<br />
* {{Pkg|qemu-arch-extra}} - extra architectures support<br />
* {{Pkg|qemu-block-gluster}} - [[Glusterfs]] block support<br />
* {{Pkg|qemu-block-iscsi}} - [[iSCSI]] block support<br />
* {{Pkg|qemu-block-rbd}} - RBD block support <br />
* {{Pkg|samba}} - [[Samba|SMB/CIFS]] server support<br />
<br />
== Graphical front-ends for QEMU ==<br />
<br />
Unlike other virtualization programs such as [[VirtualBox]] and [[VMware]], QEMU does not provide a GUI to manage virtual machines (other than the window that appears when running a virtual machine), nor does it provide a way to create persistent virtual machines with saved settings. All parameters to run a virtual machine must be specified on the command line at every launch, unless you have created a custom script to start your virtual machine(s).<br />
<br />
[[Libvirt]] provides a convenient way to manage QEMU virtual machines. See [[Libvirt#Client|list of libvirt clients]] for available front-ends.<br />
<br />
Other GUI front-ends for QEMU:<br />
<br />
* {{App|AQEMU|QEMU GUI written in Qt5.|https://github.com/tobimensch/aqemu|{{AUR|aqemu}}}}<br />
* {{App|QtEmu|Graphical user interface for QEMU written in Qt4.|https://qtemu.org/|{{AUR|qtemu}}}}<br />
<br />
== Creating new virtualized system ==<br />
<br />
=== Creating a hard disk image ===<br />
{{Accuracy|If I get the man page right the raw format only allocates the full size if the filesystem does not support "holes" or it is <br />
explicitly told to preallocate. See man qemu-img in section Notes.}} <br />
{{Tip|See the [https://en.wikibooks.org/wiki/QEMU/Images QEMU Wikibook] for more information on QEMU images.}}<br />
<br />
To run QEMU you will need a hard disk image, unless you are booting a live system from CD-ROM or the network (and not doing so to install an operating system to a hard disk image). A hard disk image is a file which stores the contents of the emulated hard disk.<br />
<br />
A hard disk image can be ''raw'', so that it is literally byte-by-byte the same as what the guest sees, and will always use the full capacity of the guest hard drive on the host. This method provides the least I/O overhead, but can waste a lot of space, as not-used space on the guest cannot be used on the host.<br />
<br />
Alternatively, the hard disk image can be in a format such as ''qcow2'' which only allocates space to the image file when the guest operating system actually writes to those sectors on its virtual hard disk. The image appears as the full size to the guest operating system, even though it may take up only a very small amount of space on the host system. This image format also supports QEMU snapshotting functionality (see [[#Creating and managing snapshots via the monitor console]] for details). However, using this format instead of ''raw'' will likely affect performance.<br />
<br />
QEMU provides the {{ic|qemu-img}} command to create hard disk images. For example to create a 4 GB image in the ''raw'' format:<br />
<br />
$ qemu-img create -f raw ''image_file'' 4G<br />
<br />
You may use {{ic|-f qcow2}} to create a ''qcow2'' disk instead.<br />
<br />
{{Note|You can also simply create a ''raw'' image by creating a file of the needed size using {{ic|dd}} or {{ic|fallocate}}.}}<br />
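<br />
For instance, a sparse raw image matching the 4 GB example above can be created without ''qemu-img'' at all (the filename is an example):<br />
<br />
```shell
# Create a sparse 4 GiB raw image: no data is written, only the
# file size is set, so it occupies almost no space on the host.
dd if=/dev/zero of=disk.img bs=1 count=0 seek=4G
# Alternatively, fallocate -l 4G disk.img preallocates the extents.
stat -c %s disk.img   # apparent size: 4294967296 bytes
```
<br />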
<br />
{{Warning|If you store the hard disk images on a [[Btrfs]] file system, you should consider disabling [[Btrfs#Copy-on-Write (CoW)|Copy-on-Write]] for the directory before creating any images.}}<br />
<br />
==== Overlay storage images ====<br />
<br />
You can create a storage image once (the 'backing' image) and have QEMU keep mutations to this image in an overlay image. This allows you to revert to a previous state of this storage image. You could revert by creating a new overlay image at the time you wish to revert, based on the original backing image.<br />
<br />
To create an overlay image, issue a command like:<br />
<br />
$ qemu-img create -o backing_file=''img1.raw'',backing_fmt=''raw'' -f ''qcow2'' ''img1.cow''<br />
<br />
After that you can run your QEMU VM as usual (see [[#Running virtualized system]]):<br />
<br />
$ qemu-system-x86_64 ''img1.cow''<br />
<br />
The backing image will then be left intact and mutations to this storage will be recorded in the overlay image file.<br />
<br />
When the path to the backing image changes, repair is required.<br />
<br />
{{Warning|The backing image's absolute filesystem path is stored in the (binary) overlay image file. Changing the backing image's path requires some effort.}}<br />
<br />
Make sure that the original backing image's path still leads to this image. If necessary, make a symbolic link at the original path to the new path. Then issue a command like:<br />
<br />
$ qemu-img rebase -b ''/new/img1.raw'' ''/new/img1.cow''<br />
<br />
At your discretion, you may alternatively perform an 'unsafe' rebase where the old path to the backing image is not checked:<br />
<br />
$ qemu-img rebase -u -b ''/new/img1.raw'' ''/new/img1.cow''<br />
<br />
==== Resizing an image ====<br />
<br />
{{Warning|Resizing an image containing an NTFS boot file system could make the operating system installed on it unbootable. It is recommended to create a backup first.}}<br />
<br />
The {{ic|qemu-img}} executable has the {{ic|resize}} option, which enables easy resizing of a hard drive image. It works for both ''raw'' and ''qcow2''. For example, to increase image space by 10 GB, run:<br />
<br />
$ qemu-img resize ''disk_image'' +10G<br />
<br />
After enlarging the disk image, you must use file system and partitioning tools inside the virtual machine to actually begin using the new space. When shrinking a disk image, you must '''first reduce the allocated file systems and partition sizes''' using the file system and partitioning tools inside the virtual machine and then shrink the disk image accordingly, otherwise shrinking the disk image will result in data loss! For a Windows guest, open the "create and format hard disk partitions" control panel.<br />
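<br />
For a ''raw'' image, growing simply extends the file; the effect of {{ic|qemu-img resize}} can be illustrated with plain file tools (sizes and filename are examples):<br />
<br />
```shell
# Stand-in for an existing 1 GiB raw image.
truncate -s 1G disk.img
# Same effect on a raw file as: qemu-img resize disk.img +10G
truncate -s +10G disk.img
stat -c %s disk.img   # 11811160064 bytes = 11 GiB
```
<br />
Remember that, as described above, the file system still has to be grown from inside the virtual machine afterwards.<br />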
<br />
==== Converting an image ====<br />
<br />
You can convert an image to other formats using {{ic|qemu-img convert}}. This example shows how to convert a ''raw'' image to ''qcow2'':<br />
<br />
$ qemu-img convert -f raw -O qcow2 ''input''.img ''output''.qcow2<br />
<br />
This will not remove the original input file.<br />
<br />
=== Preparing the installation media ===<br />
<br />
To install an operating system into your disk image, you need the installation medium (e.g. optical disk, USB-drive, or ISO image) for the operating system. The installation medium should not be mounted because QEMU accesses the media directly.<br />
<br />
{{Tip|If using an optical disk, it is a good idea to first dump the media to a file because this both improves performance and does not require you to have direct access to the devices (that is, you can run QEMU as a regular user without having to change access permissions on the media's device file). For example, if the CD-ROM device node is named {{ic|/dev/cdrom}}, you can dump it to a file with the command: {{bc|1=$ dd if=/dev/cdrom of=''cd_image.iso'' bs=4k}}}}<br />
<br />
=== Installing the operating system===<br />
<br />
This is the first time you will need to start the emulator. To install the operating system on the disk image, you must attach both the disk image and the installation media to the virtual machine, and have it boot from the installation media.<br />
<br />
For example on x86_64 guests, to install from a bootable ISO file as CD-ROM and a raw disk image:<br />
<br />
$ qemu-system-x86_64 -cdrom ''iso_image'' -boot order=d -drive file=''disk_image'',format=raw<br />
<br />
See {{man|1|qemu}} for more information about loading other media types (such as floppy, disk images or physical drives) and [[#Running virtualized system]] for other useful options.<br />
<br />
After the operating system has finished installing, the QEMU image can be booted directly (see [[#Running virtualized system]]).<br />
<br />
{{Note|By default only 128 MB of memory is assigned to the machine. The amount of memory can be adjusted with the {{ic|-m}} switch, for example {{ic|-m 512M}} or {{ic|-m 2G}}.}}<br />
<br />
{{Tip|<br />
* Instead of specifying {{ic|1=-boot order=x}}, some users may feel more comfortable using a boot menu: {{ic|1=-boot menu=on}}, at least during configuration and experimentation.<br />
* If you need to replace floppies or CDs as part of the installation process, you can use the QEMU machine monitor (press {{ic|Ctrl+Alt+2}} in the virtual machine's window) to remove and attach storage devices to a virtual machine. Type {{ic|info block}} to see the block devices, and use the {{ic|change}} command to swap out a device. Press {{ic|Ctrl+Alt+1}} to go back to the virtual machine.}}<br />
<br />
== Running virtualized system ==<br />
<br />
{{ic|qemu-system-*}} binaries (for example {{ic|qemu-system-i386}} or {{ic|qemu-system-x86_64}}, depending on the guest's architecture) are used to run the virtualized guest. The usage is:<br />
<br />
$ qemu-system-x86_64 ''options'' ''disk_image''<br />
<br />
Options are the same for all {{ic|qemu-system-*}} binaries, see {{man|1|qemu}} for documentation of all options.<br />
<br />
By default, QEMU will show the virtual machine's video output in a window. One thing to keep in mind: when you click inside the QEMU window, the mouse pointer is grabbed. To release it, press {{ic|Ctrl+Alt+g}}.<br />
<br />
{{Warning|QEMU should never be run as root. If you must launch it in a script as root, you should use the {{ic|-runas}} option to make QEMU drop root privileges.}}<br />
<br />
=== Enabling KVM ===<br />
<br />
KVM must be supported by your processor and kernel, and necessary [[kernel modules]] must be loaded. See [[KVM]] for more information.<br />
<br />
To start QEMU in KVM mode, append {{ic|-enable-kvm}} to the additional start options. To check if KVM is enabled for a running VM, enter the [[#QEMU monitor]] using {{ic|Ctrl+Alt+Shift+2}}, and type {{ic|info kvm}}.<br />
<br />
{{Note|<br />
* The argument {{ic|1=accel=kvm}} of the {{ic|-machine}} option is equivalent to the {{ic|-enable-kvm}} option.<br />
* If you start your VM with a GUI tool and experience very bad performance, you should check for proper KVM support, as QEMU may be falling back to software emulation.<br />
* KVM needs to be enabled in order to start Windows 7 and Windows 8 properly without a ''blue screen''.<br />
}}<br />
<br />
=== Enabling IOMMU (Intel VT-d/AMD-Vi) support ===<br />
<br />
First enable IOMMU, see [[PCI passthrough via OVMF#Setting up IOMMU]].<br />
<br />
Add {{ic|-device intel-iommu}} to create the IOMMU device:<br />
<br />
$ qemu-system-x86_64 '''-enable-kvm -machine q35,accel=kvm -device intel-iommu''' -cpu host ..<br />
<br />
{{Note|<br />
On Intel CPU-based systems, creating an IOMMU device in a QEMU guest with {{ic|-device intel-iommu}} will disable PCI passthrough with an error like: {{bc|Device at bus pcie.0 addr 09.0 requires iommu notifier which is currently not supported by intel-iommu emulation}} While adding the kernel parameter {{ic|1=intel_iommu=on}} is still needed for remapping IO (e.g. [[PCI passthrough via OVMF#Isolating the GPU|PCI passthrough with vfio-pci]]), {{ic|-device intel-iommu}} should not be set if PCI passthrough is required.<br />
}}<br />
<br />
== Moving data between host and guest OS ==<br />
<br />
=== Network ===<br />
<br />
Data can be shared between the host and guest OS using any network protocol that can transfer files, such as [[NFS]], [[SMB]], [[Wikipedia:Network Block Device|NBD]], HTTP, [[Very Secure FTP Daemon|FTP]], or [[SSH]], provided that you have set up the network appropriately and enabled the appropriate services.<br />
<br />
The default user-mode networking allows the guest to access the host OS at the IP address 10.0.2.2. Any servers that you are running on your host OS, such as a SSH server or SMB server, will be accessible at this IP address. So on the guests, you can mount directories exported on the host via [[SMB]] or [[NFS]], or you can access the host's HTTP server, etc.<br />
It will not be possible for the host OS to access servers running on the guest OS, but this can be done with other network configurations (see [[#Tap networking with QEMU]]).<br />
<br />
=== QEMU's port forwarding ===<br />
<br />
QEMU can forward ports from the host to the guest to enable e.g. connecting from the host to a SSH-server running on the guest.<br />
<br />
For example, to bind port 10022 on the host with port 22 (SSH) on the guest, start QEMU with a command like:<br />
<br />
$ qemu-system-x86_64 ''disk_image'' -net nic -net user,hostfwd=''tcp::10022-:22''<br />
<br />
Make sure sshd is running on the guest and connect with:<br />
<br />
$ ssh ''guest-user''@localhost -p10022<br />
<br />
=== QEMU's built-in SMB server ===<br />
<br />
QEMU's documentation says it has a "built-in" SMB server, but actually it just starts up [[Samba]] with an automatically generated {{ic|smb.conf}} file located at {{ic|/tmp/qemu-smb.''pid''-0/smb.conf}} and makes it accessible to the guest at a different IP address (10.0.2.4 by default). This only works for user networking, and this is not necessarily very useful since the guest can also access the normal [[Samba]] service on the host if you have set up shares on it.<br />
<br />
To enable this feature, start QEMU with a command like:<br />
<br />
$ qemu-system-x86_64 ''disk_image'' -net nic -net user,smb=''shared_dir_path''<br />
<br />
where {{ic|''shared_dir_path''}} is a directory that you want to share between the guest and host.<br />
<br />
Then, in the guest, you will be able to access the shared directory on the host 10.0.2.4 with the share name "qemu". For example, in Windows Explorer you would go to {{ic|\\10.0.2.4\qemu}}.<br />
<br />
{{Note|<br />
* If you specify the sharing option multiple times, like {{ic|1=-net user,smb=''shared_dir_path1'' -net user,smb=''shared_dir_path2''}} or {{ic|1=-net user,smb=''shared_dir_path1'',smb=''shared_dir_path2''}}, only the last one defined will be shared.<br />
* If you cannot access the shared folder and the guest system is Windows, check that the [http://ecross.mvps.org/howto/enable-netbios-over-tcp-ip-with-windows.htm NetBIOS protocol is enabled] and that a firewall does not block [http://technet.microsoft.com/en-us/library/cc940063.aspx ports] used by the NetBIOS protocol.<br />
* If you cannot access the shared folder and the guest system is Windows 10 Enterprise or Education or Windows Server 2016, [https://support.microsoft.com/en-us/help/4046019 enable guest access].<br />
}}<br />
<br />
=== Mounting a partition inside a raw disk image ===<br />
<br />
When the virtual machine is not running, it is possible to mount partitions that are inside a raw disk image file by setting them up as loopback devices. This does not work with disk images in special formats, such as qcow2, although those can be mounted using {{ic|qemu-nbd}}.<br />
<br />
{{Warning|You must make sure to unmount the partitions before running the virtual machine again. Otherwise, data corruption is very likely to occur.}}<br />
<br />
==== With manually specifying byte offset ====<br />
<br />
One way to mount a disk image partition is to mount the disk image at a certain offset using a command like the following:<br />
<br />
# mount -o loop,offset=32256 ''disk_image'' ''mountpoint''<br />
<br />
The {{ic|1=offset=32256}} option is actually passed to the {{ic|losetup}} program to set up a loopback device that starts at byte offset 32256 of the file and continues to the end. This loopback device is then mounted. You may also use the {{ic|sizelimit}} option to specify the exact size of the partition, but this is usually unnecessary.<br />
<br />
Depending on your disk image, the needed partition may not start at offset 32256. Run {{ic|fdisk -l ''disk_image''}} to see the partitions in the image. fdisk gives the start and end offsets in 512-byte sectors, so multiply by 512 to get the correct offset to pass to {{ic|mount}}.<br />
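<br />
To make the arithmetic concrete (the start sector below is a common example value, not taken from any particular image):<br />
<br />
```shell
# Suppose fdisk -l reported the partition starting at sector 2048;
# with 512-byte sectors the byte offset to pass to mount is:
start_sector=2048
offset=$((start_sector * 512))
echo "$offset"   # 1048576
# i.e. mount -o loop,offset=$offset disk_image mountpoint
```
<br />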
<br />
==== With loop module autodetecting partitions ====<br />
<br />
The Linux loop driver actually supports partitions in loopback devices, but it is disabled by default. To enable it, do the following:<br />
<br />
* Get rid of all your loopback devices (unmount all mounted images, etc.).<br />
* [[Kernel_modules#Manual_module_handling|Unload]] the {{ic|loop}} kernel module, and load it with the {{ic|1=max_part=15}} parameter set. Additionally, the maximum number of loop devices can be controlled with the {{ic|max_loop}} parameter.<br />
<br />
{{Tip|You can put an entry in {{ic|/etc/modprobe.d}} to load the loop module with {{ic|1=max_part=15}} every time, or you can put {{ic|1=loop.max_part=15}} on the kernel command-line, depending on whether you have the {{ic|loop.ko}} module built into your kernel or not.}}<br />
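<br />
A persistent configuration could look like this (the file name {{ic|loop.conf}} is an arbitrary choice; any {{ic|*.conf}} name under {{ic|/etc/modprobe.d}} works):<br />
<br />
```
# /etc/modprobe.d/loop.conf
# Enable partition scanning for loop devices; max_loop could
# additionally cap the number of loop devices, if desired.
options loop max_part=15
```
<br />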
<br />
Set up your image as a loopback device:<br />
<br />
# losetup -f -P ''disk_image''<br />
<br />
Then, if the device created was {{ic|/dev/loop0}}, additional devices {{ic|/dev/loop0pX}} will have been automatically created, where X is the number of the partition. These partition loopback devices can be mounted directly. For example:<br />
<br />
# mount /dev/loop0p1 ''mountpoint''<br />
<br />
To mount the disk image with ''udisksctl'', see [[Udisks#Mount loop devices]].<br />
<br />
==== With kpartx ====<br />
<br />
'''kpartx''' from the {{Pkg|multipath-tools}} package can read a partition table on a device and create a new device for each partition. For example:<br />
# kpartx -a ''disk_image''<br />
<br />
This will set up the loopback device and create the necessary partition devices in {{ic|/dev/mapper/}}.<br />
<br />
=== Mounting a partition inside a qcow2 image ===<br />
<br />
You may mount a partition inside a qcow2 image using {{ic|qemu-nbd}}. See [http://en.wikibooks.org/wiki/QEMU/Images#Mounting_an_image_on_the_host Wikibooks].<br />
<br />
=== Using any real partition as the single primary partition of a hard disk image ===<br />
<br />
Sometimes, you may wish to use one of your system partitions from within QEMU. Using a raw partition for a virtual machine will improve performance, as the read and write operations do not go through the file system layer on the physical host. Such a partition also provides a way to share data between the host and guest.<br />
<br />
In Arch Linux, device files for raw partitions are, by default, owned by ''root'' and the ''disk'' group. If you would like to have a non-root user be able to read and write to a raw partition, you need to change the owner of the partition's device file to that user.<br />
<br />
{{Warning|<br />
* Although it is possible, it is not recommended to allow virtual machines to alter critical data on the host system, such as the root partition.<br />
* You must not mount a file system on a partition read-write on both the host and the guest at the same time. Otherwise, data corruption will result.<br />
}}<br />
<br />
After doing so, you can attach the partition to a QEMU virtual machine as a virtual disk.<br />
<br />
However, things are a little more complicated if you want to have the ''entire'' virtual machine contained in a partition. In that case, there would be no disk image file to actually boot the virtual machine, since you cannot install a bootloader to a partition that is itself formatted as a file system and not as a partitioned device with an MBR. Such a virtual machine can be booted either by specifying the [[kernel]] and [[initrd]] manually, or by simulating a disk with an MBR by using linear [[RAID]].<br />
<br />
==== By specifying kernel and initrd manually ====<br />
<br />
QEMU supports loading [[Kernels|Linux kernels]] and [[initramfs|init ramdisks]] directly, thereby circumventing bootloaders such as [[GRUB]]. It then can be launched with the physical partition containing the root file system as the virtual disk, which will not appear to be partitioned. This is done by issuing a command similar to the following:<br />
<br />
{{Note|In this example, it is the '''host's''' images that are being used, not the guest's. If you wish to use the guest's images, either mount {{ic|/dev/sda3}} read-only (to protect the file system from the host) and specify the {{ic|/full/path/to/images}} or use some kexec hackery in the guest to reload the guest's kernel (extends boot time). }}<br />
<br />
$ qemu-system-x86_64 -kernel /boot/vmlinuz-linux -initrd /boot/initramfs-linux.img -append root=/dev/sda /dev/sda3<br />
<br />
In the above example, the physical partition being used for the guest's root file system is {{ic|/dev/sda3}} on the host, but it shows up as {{ic|/dev/sda}} on the guest.<br />
<br />
You may, of course, specify any kernel and initrd that you want, and not just the ones that come with Arch Linux.<br />
<br />
When there are multiple [[kernel parameters]] to be passed to the {{ic|-append}} option, they need to be quoted using single or double quotes. For example:<br />
<br />
... -append 'root=/dev/sda1 console=ttyS0'<br />
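<br />
A quick shell check (not QEMU-specific) shows why the quoting matters: the whole value must reach QEMU as a single argument:<br />
<br />
```shell
# With quotes, -append receives one value holding both parameters.
set -- -append 'root=/dev/sda1 console=ttyS0'
argc=$#
value=$2
echo "$argc"    # 2: the option plus its single combined value
echo "$value"   # root=/dev/sda1 console=ttyS0
# Unquoted, console=ttyS0 would become a separate argument and
# never reach the guest's kernel command line.
```
<br />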
<br />
==== Simulate virtual disk with MBR using linear RAID ====<br />
<br />
A more complicated way to have a virtual machine use a physical partition, while keeping that partition formatted as a file system rather than letting the guest repartition it as if it were a whole disk, is to simulate an MBR for it so that it can boot using a bootloader such as GRUB.<br />
<br />
You can do this using software [[RAID]] in linear mode (you need the {{ic|linear.ko}} kernel driver) and a loopback device: the trick is to dynamically prepend a master boot record (MBR) to the real partition you wish to embed in a QEMU raw disk image.<br />
<br />
Suppose you have a plain, unmounted {{ic|/dev/hdaN}} partition with some file system on it you wish to make part of a QEMU disk image. First, you create some small file to hold the MBR:<br />
<br />
$ dd if=/dev/zero of=''/path/to/mbr'' count=32<br />
<br />
Here, a 16 KB (32 * 512 bytes) file is created. It is important not to make it too small (even though the MBR itself only needs a single 512-byte block), since the smaller it is, the smaller the chunk size of the software RAID device will have to be, which could have an impact on performance. Then, you set up a loopback device to the MBR file:<br />
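As a quick sanity check of the arithmetic above ({{ic|dd}} copies ''count'' blocks of 512 bytes each by default):

```shell
# dd writes "count" blocks of 512 bytes each by default,
# so count=32 produces a 16 KB file.
mbr_blocks=32
mbr_bytes=$(( mbr_blocks * 512 ))
echo "$mbr_bytes"
```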
<br />
# losetup -f ''/path/to/mbr''<br />
<br />
Suppose the resulting device is {{ic|/dev/loop0}} (assuming no other loopback devices are already in use). The next step is to create the "merged" MBR + {{ic|/dev/hdaN}} disk image using software RAID:<br />
<br />
# modprobe linear<br />
# mdadm --build --verbose /dev/md0 --chunk=16 --level=linear --raid-devices=2 /dev/loop0 /dev/hda''N''<br />
<br />
The resulting {{ic|/dev/md0}} is what you will use as a QEMU raw disk image (do not forget to set the permissions so that the emulator can access it). The last (and somewhat tricky) step is to set the disk configuration (disk geometry and partition table) so that the primary partition start point in the MBR matches the one of {{ic|/dev/hda''N''}} inside {{ic|/dev/md0}} (an offset of exactly 16 * 512 = 16384 bytes in this example). Do this using {{ic|fdisk}} on the host machine, not in the emulator: the default raw disk detection routine from QEMU often results in non-kilobyte-roundable offsets (such as 31.5 KB, as in the previous section) that cannot be managed by the software RAID code. Hence, from the host:<br />
<br />
# fdisk /dev/md0<br />
<br />
Press {{ic|X}} to enter the expert menu. Set the number of 's'ectors per track so that the size of one cylinder matches the size of your MBR file. For two heads and a sector size of 512, the number of sectors per track should be 16, so we get cylinders of size 2x16x512=16k.<br />
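The geometry can be verified with a bit of shell arithmetic: one cylinder is heads times sectors per track times bytes per sector, and it should equal the size of the MBR file:

```shell
heads=2
sectors_per_track=16
bytes_per_sector=512
# One cylinder must match the 16384-byte MBR file created earlier.
cylinder_bytes=$(( heads * sectors_per_track * bytes_per_sector ))
echo "$cylinder_bytes"
```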
<br />
Now, press {{ic|R}} to return to the main menu.<br />
<br />
Press {{ic|P}} and check that the cylinder size is now 16k.<br />
<br />
Now, create a single primary partition corresponding to {{ic|/dev/hda''N''}}. It should start at cylinder 2 and end at the end of the disk (note that the number of cylinders now differs from what it was when you entered fdisk).<br />
<br />
Finally, 'w'rite the result to the file: you are done. You now have a partition you can mount directly from your host, as well as part of a QEMU disk image:<br />
<br />
$ qemu-system-x86_64 -hdc /dev/md0 ''[...]''<br />
<br />
You can, of course, safely set any bootloader on this disk image using QEMU, provided the original {{ic|/dev/hda''N''}} partition contains the necessary tools.<br />
<br />
===== Alternative: use nbd-server =====<br />
Instead of linear RAID, you may use {{ic|nbd-server}} (from the {{pkg|nbd}} package) to create an MBR wrapper for QEMU.<br />
<br />
Assuming you have already set up your MBR wrapper file like above, rename it to {{ic|wrapper.img.0}}. Then create a symbolic link named {{ic|wrapper.img.1}} in the same directory, pointing to your partition. Then put the following script in the same directory:<br />
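The layout steps above can be sketched as follows ({{ic|/dev/sdaN}} is a placeholder for your actual partition, and the MBR file here is created from scratch for illustration):

```shell
# Lay out the two chunks that nbd-server's "multifile" mode expects:
# chunk .0 is the MBR file, chunk .1 is (a symlink to) the partition.
workdir=$(mktemp -d)
cd "$workdir"
dd if=/dev/zero of=wrapper.img.0 count=32 status=none  # stand-in MBR file
ln -s /dev/sdaN wrapper.img.1                          # /dev/sdaN is hypothetical
ls wrapper.img.*
```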
<br />
#!/bin/sh<br />
dir="$(realpath "$(dirname "$0")")"<br />
cat >wrapper.conf <<EOF<br />
[generic]<br />
allowlist = true<br />
listenaddr = 127.713705<br />
port = 10809<br />
<br />
[wrap]<br />
exportname = $dir/wrapper.img<br />
multifile = true<br />
EOF<br />
<br />
nbd-server \<br />
-C wrapper.conf \<br />
-p wrapper.pid \<br />
"$@"<br />
<br />
The {{ic|.0}} and {{ic|.1}} suffixes are essential; the rest can be changed. After running the above script (which you may need to do as root to make sure nbd-server is able to access the partition), you can launch QEMU with:<br />
<br />
qemu-system-x86_64 -drive file=nbd:127.713705:10809:exportname=wrap ''[...]''<br />
<br />
== Networking ==<br />
<br />
{{Style|Network topologies (sections [[#Host-only networking]], [[#Internal networking]] and info spread out across other sections) should not be described alongside the various virtual interfaces implementations, such as [[#User-mode networking]], [[#Tap networking with QEMU]], [[#Networking with VDE2]].}}<br />
<br />
The performance of virtual networking should be better with tap devices and bridges than with user-mode networking or VDE, because tap devices and bridges are implemented in-kernel.<br />
<br />
In addition, networking performance can be improved by assigning virtual machines a [http://wiki.libvirt.org/page/Virtio virtio] network device rather than the default emulation of an e1000 NIC. See [[#Installing virtio drivers]] for more information.<br />
<br />
=== Link-level address caveat ===<br />
<br />
If you give QEMU the {{ic|-net nic}} argument, it will, by default, assign the virtual machine a network interface with the link-level address {{ic|52:54:00:12:34:56}}. However, when using bridged networking with multiple virtual machines, it is essential that each virtual machine has a unique link-level (MAC) address on the virtual machine side of the tap device. Otherwise, the bridge will not work correctly, because it will receive packets from multiple sources that have the same link-level address. This problem occurs even if the tap devices themselves have unique link-level addresses, because the source link-level address is not rewritten as packets pass through the tap device.<br />
<br />
Make sure that each virtual machine has a unique link-level address, but it should always start with {{ic|52:54:}}. Use the following option, replacing each ''X'' with an arbitrary hexadecimal digit:<br />
<br />
$ qemu-system-x86_64 -net nic,macaddr=52:54:''XX:XX:XX:XX'' -net vde ''disk_image''<br />
<br />
Generating unique link-level addresses can be done in several ways:<br />
<br />
<ol><br />
<li>Manually specify a unique link-level address for each NIC. The benefit is that the DHCP server will assign the same IP address each time the virtual machine is run, but this is impractical for a large number of virtual machines.<br />
</li><br />
<li>Generate a random link-level address each time the virtual machine is run. This gives a practically zero probability of collisions, but the downside is that the DHCP server will assign a different IP address each time. You can use the following command in a script to generate a random link-level address in a {{ic|macaddr}} variable:<br />
<br />
{{bc|1=<br />
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))<br />
qemu-system-x86_64 -net nic,macaddr="$macaddr" -net vde ''disk_image''<br />
}}<br />
<br />
</li><br />
<li>Use the following script {{ic|qemu-mac-hasher.py}} to generate the link-level address from the virtual machine name using a hashing function. Given that the names of virtual machines are unique, this method combines the benefits of the aforementioned methods: it generates the same link-level address each time the script is run, yet it preserves the practically zero probability of collisions.<br />
<br />
{{hc|qemu-mac-hasher.py|<nowiki><br />
#!/usr/bin/env python<br />
<br />
import sys<br />
import zlib<br />
<br />
if len(sys.argv) != 2:<br />
print("usage: %s <VM Name>" % sys.argv[0])<br />
sys.exit(1)<br />
<br />
crc = zlib.crc32(sys.argv[1].encode("utf-8")) & 0xffffffff<br />
crc = "%08x" % crc  # zero-pad to eight hex digits so the MAC always has four octets<br />
print("52:54:%s%s:%s%s:%s%s:%s%s" % tuple(crc))<br />
</nowiki>}}<br />
<br />
In a script, you can use for example:<br />
<br />
vm_name="''VM Name''"<br />
qemu-system-x86_64 -name "$vm_name" -net nic,macaddr=$(qemu-mac-hasher.py "$vm_name") -net vde ''disk_image''<br />
</li><br />
</ol><br />
<br />
=== User-mode networking ===<br />
<br />
By default, without any {{ic|-netdev}} arguments, QEMU will use user-mode networking with a built-in DHCP server. Your virtual machines will be assigned an IP address when they run their DHCP client, and they will be able to access the physical host's network through IP masquerading done by QEMU.<br />
<br />
{{warning|This only works with the TCP and UDP protocols, so ICMP, including {{ic|ping}}, will not work. Do not use {{ic|ping}} to test network connectivity.}}<br />
<br />
This default configuration allows your virtual machines to easily access the Internet, provided that the host is connected to it, but the virtual machines will not be directly visible on the external network, nor will virtual machines be able to talk to each other if you start up more than one concurrently.<br />
<br />
QEMU's user-mode networking can offer more capabilities such as built-in TFTP or SMB servers, redirecting host ports to the guest (for example to allow SSH connections to the guest) or attaching guests to VLANs so that they can talk to each other. See the QEMU documentation on the {{ic|-net user}} flag for more details.<br />
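For example, to make the guest's SSH server reachable from the host, you can redirect a host port to the guest's port 22 with the {{ic|hostfwd}} option (the host port number here is arbitrary):

```
$ qemu-system-x86_64 -net user,hostfwd=tcp::60022-:22 -net nic disk_image
```

You can then connect to the guest with {{ic|ssh -p 60022 localhost}} once its SSH daemon is running.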
<br />
However, user-mode networking has limitations in both utility and performance. More advanced network configurations require the use of tap devices or other methods.<br />
<br />
{{Note|If the host system uses [[systemd-networkd]], make sure to symlink the {{ic|/etc/resolv.conf}} file as described in [[systemd-networkd#Required services and setup]], otherwise the DNS lookup in the guest system will not work.}}<br />
<br />
=== Tap networking with QEMU ===<br />
<br />
[[wikipedia:TUN/TAP|Tap devices]] are a Linux kernel feature that allows you to create virtual network interfaces that appear as real network interfaces. Packets sent to a tap interface are delivered to a userspace program, such as QEMU, that has bound itself to the interface.<br />
<br />
QEMU can use tap networking for a virtual machine so that packets sent to the tap interface will be sent to the virtual machine and appear as coming from a network interface (usually an Ethernet interface) in the virtual machine. Conversely, everything that the virtual machine sends through its network interface will appear on the tap interface.<br />
<br />
Tap devices are supported by the Linux bridge drivers, so it is possible to bridge together tap devices with each other and possibly with other host interfaces such as {{ic|eth0}}. This is desirable if you want your virtual machines to be able to talk to each other, or if you want other machines on your LAN to be able to talk to the virtual machines.<br />
<br />
{{Warning|If you bridge together a tap device and some host interface, such as {{ic|eth0}}, your virtual machines will appear directly on the external network, which will expose them to possible attack. Depending on what resources your virtual machines have access to, you may need to take all the [[Firewalls|precautions]] you normally would take in securing a computer to secure your virtual machines. If the risk is too great, if the virtual machines have few resources, or if you set up multiple virtual machines, a better solution might be to use [[#Host-only networking|host-only networking]] and set up NAT. In this case you only need one firewall on the host instead of multiple firewalls for each guest.}}<br />
<br />
As indicated in the user-mode networking section, tap devices offer higher networking performance than user-mode. In addition, if the guest OS supports the virtio network driver, networking performance will increase considerably. Supposing the use of the tap0 device, that the virtio driver is used on the guest, and that no scripts are used to help start/stop networking, part of the qemu command should look like:<br />
<br />
-device virtio-net,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no<br />
<br />
If you are already using a tap device with the virtio networking driver, you can boost networking performance further by enabling vhost:<br />
<br />
-device virtio-net,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no,vhost=on<br />
<br />
See http://www.linux-kvm.com/content/how-maximize-virtio-net-performance-vhost-net for more information.<br />
<br />
==== Host-only networking ====<br />
<br />
If the bridge is given an IP address and traffic destined for it is allowed, but no real interface (e.g. {{ic|eth0}}) is connected to the bridge, then the virtual machines will be able to talk to each other and the host system. However, they will not be able to talk to anything on the external network, unless you set up IP masquerading on the physical host. This configuration is called ''host-only networking'' by other virtualization software such as [[VirtualBox]].<br />
<br />
{{Tip|<br />
* If you want to set up IP masquerading, e.g. NAT for virtual machines, see the [[Internet sharing#Enable NAT]] page.<br />
* See [[Network bridge]] for information on creating a bridge.<br />
* You may want to have a DHCP server running on the bridge interface to service the virtual network. For example, to use the {{ic|172.20.0.1/16}} subnet with [[dnsmasq]] as the DHCP server:<br />
# ip addr add 172.20.0.1/16 dev br0<br />
# ip link set br0 up<br />
# dnsmasq --interface&#61;br0 --bind-interfaces --dhcp-range&#61;172.20.0.2,172.20.255.254<br />
}}<br />
<br />
==== Internal networking ====<br />
<br />
If you do not give the bridge an IP address and add an [[iptables]] rule to drop all traffic to the bridge in the INPUT chain, then the virtual machines will be able to talk to each other, but not to the physical host or to the outside network. This configuration is called ''internal networking'' by other virtualization software such as [[VirtualBox]]. You will need to either assign static IP addresses to the virtual machines or run a DHCP server on one of them.<br />
<br />
By default, iptables drops packets forwarded across a bridge. You may need to add an iptables rule like the following to allow packets in a bridged network:<br />
<br />
# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT<br />
<br />
==== Bridged networking using qemu-bridge-helper ====<br />
<br />
{{Note|This method is available since QEMU 1.1, see http://wiki.qemu.org/Features/HelperNetworking.}}<br />
<br />
This method does not require a start-up script and readily accommodates multiple taps and multiple bridges. It uses the {{ic|/usr/lib/qemu/qemu-bridge-helper}} binary, which allows creating tap devices on an existing bridge.<br />
<br />
{{Tip|See [[Network bridge]] for information on creating a bridge.}}<br />
<br />
First, create a configuration file containing the names of all bridges to be used by QEMU:<br />
<br />
{{hc|/etc/qemu/bridge.conf|<br />
allow ''bridge0''<br />
allow ''bridge1''<br />
...}}<br />
<br />
Now start the VM. The most basic usage would be:<br />
<br />
$ qemu-system-x86_64 -net nic -net bridge,br=''bridge0'' ''[...]''<br />
<br />
With multiple taps, the most basic usage requires specifying the VLAN for all additional NICs:<br />
<br />
$ qemu-system-x86_64 -net nic -net bridge,br=''bridge0'' -net nic,vlan=1 -net bridge,vlan=1,br=''bridge1'' ''[...]''<br />
<br />
==== Creating bridge manually ====<br />
<br />
{{Style|This section needs serious cleanup and may contain out-of-date information.}}<br />
<br />
{{Tip|Since QEMU 1.1, the [http://wiki.qemu.org/Features/HelperNetworking network bridge helper] can set tun/tap up for you without the need for additional scripting. See [[#Bridged networking using qemu-bridge-helper]].}}<br />
<br />
The following describes how to bridge a virtual machine to a host interface such as {{ic|eth0}}, which is probably the most common configuration. This configuration makes it appear that the virtual machine is located directly on the external network, on the same Ethernet segment as the physical host machine.<br />
<br />
We will replace the normal Ethernet adapter with a bridge adapter and bind the normal Ethernet adapter to it.<br />
<br />
* Install {{Pkg|bridge-utils}}, which provides {{ic|brctl}} to manipulate bridges.<br />
<br />
* Enable IPv4 forwarding:<br />
# sysctl net.ipv4.ip_forward=1<br />
<br />
To make the change permanent, change {{ic|1=net.ipv4.ip_forward = 0}} to {{ic|1=net.ipv4.ip_forward = 1}} in {{ic|/etc/sysctl.d/99-sysctl.conf}}.<br />
<br />
* Load the {{ic|tun}} module and configure it to be loaded on boot. See [[Kernel modules]] for details.<br />
<br />
* Now create the bridge. See [[Bridge with netctl]] for details. Remember to name your bridge {{ic|br0}}, or change the scripts below to use your bridge's name.<br />
<br />
* Create the script that QEMU uses to bring up the tap adapter with {{ic|root:kvm}} 750 permissions:<br />
{{hc|/etc/qemu-ifup|<nowiki><br />
#!/bin/sh<br />
<br />
echo "Executing /etc/qemu-ifup"<br />
echo "Bringing up $1 for bridged mode..."<br />
sudo /usr/bin/ip link set "$1" up promisc on<br />
echo "Adding $1 to br0..."<br />
sudo /usr/bin/brctl addif br0 "$1"<br />
sleep 2<br />
</nowiki>}}<br />
<br />
* Create the script that QEMU uses to bring down the tap adapter in {{ic|/etc/qemu-ifdown}} with {{ic|root:kvm}} 750 permissions:<br />
{{hc|/etc/qemu-ifdown|<nowiki><br />
#!/bin/sh<br />
<br />
echo "Executing /etc/qemu-ifdown"<br />
sudo /usr/bin/ip link set "$1" down<br />
sudo /usr/bin/brctl delif br0 "$1"<br />
sudo /usr/bin/ip link delete dev "$1"<br />
</nowiki>}}<br />
<br />
* Use {{ic|visudo}} to add the following to your {{ic|sudoers}} file:<br />
{{bc|<nowiki><br />
Cmnd_Alias QEMU=/usr/bin/ip,/usr/bin/modprobe,/usr/bin/brctl<br />
%kvm ALL=NOPASSWD: QEMU<br />
</nowiki>}}<br />
<br />
* You launch QEMU using the following {{ic|run-qemu}} script:<br />
{{hc|run-qemu|<nowiki><br />
#!/bin/bash<br />
USERID=$(whoami)<br />
<br />
# Get name of newly created TAP device; see https://bbs.archlinux.org/viewtopic.php?pid=1285079#p1285079<br />
precreation=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)<br />
sudo /usr/bin/ip tuntap add user "$USERID" mode tap<br />
postcreation=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)<br />
IFACE=$(comm -13 <(echo "$precreation") <(echo "$postcreation"))<br />
<br />
# This line creates a random MAC address. The downside is the DHCP server will assign a different IP address each time<br />
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))<br />
# Instead, uncomment and edit this line to set a static MAC address. The benefit is that the DHCP server will assign the same IP address.<br />
# macaddr='52:54:be:36:42:a9'<br />
<br />
qemu-system-x86_64 -net nic,macaddr="$macaddr" -net tap,ifname="$IFACE" "$@"<br />
<br />
sudo ip link set dev "$IFACE" down &> /dev/null<br />
sudo ip tuntap del "$IFACE" mode tap &> /dev/null<br />
</nowiki>}}<br />
<br />
Then to launch a VM, do something like this:<br />
$ run-qemu -hda ''myvm.img'' -m 512<br />
<br />
* It is recommended for performance and security reasons to disable the [http://ebtables.netfilter.org/documentation/bridge-nf.html firewall on the bridge]:<br />
{{hc|/etc/sysctl.d/10-disable-firewall-on-bridge.conf|<nowiki><br />
net.bridge.bridge-nf-call-ip6tables = 0<br />
net.bridge.bridge-nf-call-iptables = 0<br />
net.bridge.bridge-nf-call-arptables = 0<br />
</nowiki>}}<br />
Run {{ic|sysctl -p /etc/sysctl.d/10-disable-firewall-on-bridge.conf}} to apply the changes immediately.<br />
<br />
See the [http://wiki.libvirt.org/page/Networking#Creating_network_initscripts libvirt wiki] and [https://bugzilla.redhat.com/show_bug.cgi?id=512206 Fedora bug 512206]. If sysctl reports errors about non-existent files during boot, make the {{ic|bridge}} module load at boot. See [[Kernel modules#Automatic module loading with systemd]].<br />
<br />
Alternatively, you can configure [[iptables]] to allow all traffic to be forwarded across the bridge by adding a rule like this:<br />
-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT<br />
<br />
==== Network sharing between physical device and a Tap device through iptables ====<br />
<br />
{{Merge|Internet_sharing|Duplication, not specific to QEMU.}}<br />
<br />
Bridged networking works fine with a wired interface (e.g. {{ic|eth0}}) and is easy to set up. However, if the host is connected to the network through a wireless device, bridging is not possible.<br />
<br />
See [[Network bridge#Wireless interface on a bridge]] as a reference.<br />
<br />
One way to overcome that is to set up a tap device with a static IP, making Linux automatically handle the routing for it, and then forward traffic between the tap interface and the device connected to the network using iptables rules.<br />
<br />
See [[Internet sharing]] as a reference.<br />
<br />
There you can find what is needed to share the network between devices, including tap and tun ones. The following just hints further at some of the required host configuration. As indicated in the reference above, the client needs to be configured with a static IP, using the IP assigned to the tap interface as the gateway. The caveat is that the DNS servers on the client might need to be manually edited if they change when switching from one host device connected to the network to another.<br />
<br />
To allow IP forwarding on every boot, add the following lines to a sysctl configuration file inside {{ic|/etc/sysctl.d}}:<br />
<br />
net.ipv4.ip_forward = 1<br />
net.ipv6.conf.default.forwarding = 1<br />
net.ipv6.conf.all.forwarding = 1<br />
<br />
The iptables rules can look like:<br />
<br />
# Forwarding from/to outside<br />
iptables -A FORWARD -i ${INT} -o ${EXT_0} -j ACCEPT<br />
iptables -A FORWARD -i ${INT} -o ${EXT_1} -j ACCEPT<br />
iptables -A FORWARD -i ${INT} -o ${EXT_2} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_0} -o ${INT} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_1} -o ${INT} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_2} -o ${INT} -j ACCEPT<br />
# NAT/Masquerade (network address translation)<br />
iptables -t nat -A POSTROUTING -o ${EXT_0} -j MASQUERADE<br />
iptables -t nat -A POSTROUTING -o ${EXT_1} -j MASQUERADE<br />
iptables -t nat -A POSTROUTING -o ${EXT_2} -j MASQUERADE<br />
<br />
The above supposes there are 3 devices connected to the network sharing traffic with one internal device, where for example:<br />
<br />
INT=tap0<br />
EXT_0=eth0<br />
EXT_1=wlan0<br />
EXT_2=tun0<br />
<br />
The forwarding rules above allow sharing both wired and wireless connections with the tap device.<br />
<br />
The forwarding rules shown are stateless and intended for pure forwarding. One could restrict specific traffic and put a firewall in place to protect the guest and others; however, that would decrease networking performance, whereas a simple bridge does not include any of it.<br />
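For instance, a stateful variant of the rules above for a single external device (interface names as in the example) could look like this, at some cost in performance:

```
# Let the guest initiate connections, but only allow reply
# traffic back in from the external interface.
iptables -A FORWARD -i tap0 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o tap0 -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
```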
<br />
Bonus: whether the connection is wired or wireless, if the host connects through a VPN to a remote site with a tun device (say, {{ic|tun0}}, as in the example above), then with the prior iptables rules applied, the remote connection is also shared with the guest. This avoids the need for the guest to open its own VPN connection. Again, since the guest's networking needs to be static, if connecting the host remotely this way, you will most likely need to edit the DNS servers on the guest.<br />
<br />
=== Networking with VDE2 ===<br />
<br />
{{Style|This section needs serious cleanup and may contain out-of-date information.}}<br />
<br />
==== What is VDE? ====<br />
<br />
VDE stands for Virtual Distributed Ethernet. It started as an enhancement of [[User-mode Linux|uml]]_switch. It is a toolbox to manage virtual networks.<br />
<br />
The idea is to create virtual switches, which are basically sockets, and to "plug" both physical and virtual machines into them. The configuration we show here is quite simple; however, VDE is much more powerful than this: it can plug virtual switches together, run them on different hosts, and monitor the traffic in the switches. You are invited to read [http://wiki.virtualsquare.org/wiki/index.php/Main_Page the documentation of the project].<br />
<br />
The advantage of this method is that you do not have to add sudo privileges to your users. Regular users should not be allowed to run modprobe.<br />
<br />
==== Basics ====<br />
<br />
VDE support can be [[pacman|installed]] via the {{Pkg|vde2}} package.<br />
<br />
In this configuration, we use tun/tap to create a virtual interface on the host. Load the {{ic|tun}} module (see [[Kernel modules]] for details):<br />
<br />
# modprobe tun<br />
<br />
Now create the virtual switch:<br />
<br />
# vde_switch -tap tap0 -daemon -mod 660 -group users<br />
<br />
This line creates the switch, creates {{ic|tap0}}, "plugs" it, and allows the users of the group {{ic|users}} to use it.<br />
<br />
The interface is plugged in but not configured yet. To configure it, run this command:<br />
<br />
# ip addr add 192.168.100.254/24 dev tap0<br />
<br />
Now, you just have to run KVM with these {{ic|-net}} options as a normal user:<br />
<br />
$ qemu-system-x86_64 -net nic -net vde -hda ''[...]''<br />
<br />
Configure networking for your guest as you would do in a physical network.<br />
<br />
{{Tip|You might want to set up NAT on the tap device to access the internet from the virtual machine. See [[Internet sharing#Enable NAT]] for more information.}}<br />
<br />
==== Startup scripts ====<br />
<br />
Example of main script starting VDE:<br />
<br />
{{hc|/etc/systemd/scripts/qemu-network-env|<nowiki><br />
#!/bin/sh<br />
# QEMU/VDE network environment preparation script<br />
<br />
# The IP configuration for the tap device that will be used for<br />
# the virtual machine network:<br />
<br />
TAP_DEV=tap0<br />
TAP_IP=192.168.100.254<br />
TAP_MASK=24<br />
TAP_NETWORK=192.168.100.0<br />
<br />
# Host interface<br />
NIC=eth0<br />
<br />
case "$1" in<br />
start)<br />
echo -n "Starting VDE network for QEMU: "<br />
<br />
# If you want tun kernel module to be loaded by script uncomment here<br />
#modprobe tun 2>/dev/null<br />
## Wait for the module to be loaded<br />
#while ! lsmod | grep -q "^tun"; do echo "Waiting for tun device"; sleep 1; done<br />
<br />
# Start tap switch<br />
vde_switch -tap "$TAP_DEV" -daemon -mod 660 -group users<br />
<br />
# Bring tap interface up<br />
ip address add "$TAP_IP"/"$TAP_MASK" dev "$TAP_DEV"<br />
ip link set "$TAP_DEV" up<br />
<br />
# Start IP Forwarding<br />
echo "1" > /proc/sys/net/ipv4/ip_forward<br />
iptables -t nat -A POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE<br />
;;<br />
stop)<br />
echo -n "Stopping VDE network for QEMU: "<br />
# Delete the NAT rules<br />
iptables -t nat -D POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE<br />
<br />
# Bring tap interface down<br />
ip link set "$TAP_DEV" down<br />
<br />
# Kill VDE switch<br />
pgrep vde_switch | xargs kill -TERM<br />
;;<br />
restart|reload)<br />
$0 stop<br />
sleep 1<br />
$0 start<br />
;;<br />
*)<br />
echo "Usage: $0 {start|stop|restart|reload}"<br />
exit 1<br />
esac<br />
exit 0<br />
</nowiki>}}<br />
<br />
Example of systemd service using the above script:<br />
<br />
{{hc|/etc/systemd/system/qemu-network-env.service|<nowiki><br />
[Unit]<br />
Description=Manage VDE Switch<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/etc/systemd/scripts/qemu-network-env start<br />
ExecStop=/etc/systemd/scripts/qemu-network-env stop<br />
RemainAfterExit=yes<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
</nowiki>}}<br />
<br />
Make {{ic|qemu-network-env}} executable:<br />
<br />
# chmod u+x /etc/systemd/scripts/qemu-network-env<br />
<br />
You can [[start]] {{ic|qemu-network-env.service}} as usual.<br />
<br />
==== Alternative method ====<br />
<br />
If the above method does not work, or you do not want to mess with kernel configs, TUN, dnsmasq, and iptables, you can do the following for the same result.<br />
<br />
# vde_switch -daemon -mod 660 -group users<br />
# slirpvde --dhcp --daemon<br />
<br />
Then, to start the VM with a connection to the network of the host:<br />
<br />
$ qemu-system-x86_64 -net nic,macaddr=52:54:00:00:EE:03 -net vde ''disk_image''<br />
<br />
=== VDE2 Bridge ===<br />
<br />
Based on the [http://selamatpagicikgu.wordpress.com/2011/06/08/quickhowto-qemu-networking-using-vde-tuntap-and-bridge/ quickhowto: qemu networking using vde, tun/tap, and bridge] graphic. Any virtual machine connected to vde is externally exposed. For example, each virtual machine can receive its DHCP configuration directly from your ADSL router.<br />
<br />
==== Basics ====<br />
<br />
Remember that you need the {{ic|tun}} module and the {{Pkg|bridge-utils}} package.<br />
<br />
Create the vde2/tap device:<br />
<br />
# vde_switch -tap tap0 -daemon -mod 660 -group users<br />
# ip link set tap0 up<br />
<br />
Create the bridge:<br />
<br />
# brctl addbr br0<br />
<br />
Add devices:<br />
<br />
# brctl addif br0 eth0<br />
# brctl addif br0 tap0<br />
<br />
And configure the bridge interface:<br />
<br />
# dhcpcd br0<br />
<br />
==== Startup scripts ====<br />
<br />
All devices must be set up, and only the bridge needs an IP address. For physical devices on the bridge (e.g. {{ic|eth0}}), this can be done with [[netctl]] using a custom Ethernet profile with:<br />
<br />
{{hc|/etc/netctl/ethernet-noip|<nowiki><br />
Description='A more versatile static Ethernet connection'<br />
Interface=eth0<br />
Connection=ethernet<br />
IP=no<br />
</nowiki>}}<br />
<br />
The following custom systemd service can be used to create and activate a VDE2 tap interface for use by the {{ic|users}} user group.<br />
<br />
{{hc|/etc/systemd/system/vde2@.service|<nowiki><br />
[Unit]<br />
Description=Network Connectivity for %i<br />
Wants=network.target<br />
Before=network.target<br />
<br />
[Service]<br />
Type=oneshot<br />
RemainAfterExit=yes<br />
ExecStart=/usr/bin/vde_switch -tap %i -daemon -mod 660 -group users<br />
ExecStart=/usr/bin/ip link set dev %i up<br />
ExecStop=/usr/bin/ip addr flush dev %i<br />
ExecStop=/usr/bin/ip link set dev %i down<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
</nowiki>}}<br />
<br />
And finally, you can create the [[Bridge with netctl|bridge interface with netctl]].<br />
<br />
=== Shorthand configuration ===<br />
<br />
If you use QEMU with various networking options a lot, you have probably created a lot of {{ic|-netdev}} and {{ic|-device}} argument pairs, which gets quite repetitive. You can instead use the {{ic|-nic}} argument to combine {{ic|-netdev}} and {{ic|-device}}, so that, for example, these arguments:<br />
<br />
-netdev tap,id=network0,ifname=tap0,script=no,downscript=no,vhost=on -device virtio-net,netdev=network0<br />
<br />
...become:<br />
<br />
-nic tap,ifname=tap0,script=no,downscript=no,vhost=on,model=virtio-net<br />
<br />
Notice the lack of network IDs, and that the device was created with {{ic|<nowiki>model=...</nowiki>}}. The first half of the {{ic|-nic}} parameters are {{ic|-netdev}} parameters, whereas the second half (after {{ic|<nowiki>model=...</nowiki>}}) are related to the device. The same parameters (for example, {{ic|<nowiki>smb=...</nowiki>}}) are used. There is also a special parameter for {{ic|-nic}} which completely disables the default (user-mode) networking:<br />
<br />
-nic none<br />
<br />
See [https://qemu.weilnetz.de/doc/qemu-doc.html#Network-options QEMU networking documentation] for more information on parameters you can use.<br />
<br />
== Graphic card ==<br />
<br />
QEMU can emulate several types of VGA card. The card type is passed in the {{ic|-vga ''type''}} command line option and can be {{ic|std}}, {{ic|qxl}}, {{ic|vmware}}, {{ic|virtio}}, {{ic|cirrus}} or {{ic|none}}.<br />
<br />
=== std ===<br />
<br />
With {{ic|-vga std}} you can get a resolution of up to 2560 x 1600 pixels without requiring guest drivers. This is the default since QEMU 2.2.<br />
<br />
=== qxl ===<br />
<br />
QXL is a paravirtual graphics driver with 2D support. To use it, pass the {{ic|-vga qxl}} option and install drivers in the guest. You may want to use [[#SPICE]] for improved graphical performance when using QXL.<br />
<br />
On Linux guests, the {{ic|qxl}} and {{ic|bochs_drm}} kernel modules must be loaded in order to gain decent performance.<br />
<br />
The default VGA memory size for QXL devices is 16M, which is sufficient to drive resolutions up to approximately QHD (2560x1440). To enable higher resolutions, [[#Multi-monitor_support|increase vga_memmb]].<br />
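As an illustrative sketch only: the VGA memory could be raised with a {{ic|-global}} option such as the one below. The property name {{ic|qxl-vga.vgamem_mb}} is an assumption here; verify it against your QEMU version before relying on it.<br />

```shell
# Illustrative only: build QEMU options that raise QXL VGA memory to 32M.
# The property name (qxl-vga.vgamem_mb) is an assumption; check it with
# your QEMU version's device help output before use.
QXL_OPTS="-vga qxl -global qxl-vga.vgamem_mb=32"
# qemu-system-x86_64 $QXL_OPTS -hda disk_image   # uncomment to launch
echo "$QXL_OPTS"
```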
<br />
=== vmware ===<br />
<br />
Although it is a bit buggy, it performs better than std and cirrus. Install the VMware drivers {{Pkg|xf86-video-vmware}} and {{Pkg|xf86-input-vmmouse}} for Arch Linux guests.<br />
<br />
=== virtio ===<br />
<br />
{{ic|virtio-vga}} / {{ic|virtio-gpu}} is a paravirtual 3D graphics driver based on [https://virgil3d.github.io/ virgl]. Currently a work in progress, supporting only very recent (>= 4.4) Linux guests with {{Pkg|mesa}} (>=11.2) compiled with the option {{ic|1=gallium-drivers=virgl}}.<br />
<br />
To enable 3D acceleration on the guest system, select this vga with {{ic|-vga virtio}} and enable the OpenGL context in the display device with {{ic|1=-display sdl,gl=on}} or {{ic|1=-display gtk,gl=on}} for the SDL and GTK display output respectively. Successful configuration can be confirmed by looking at the kernel log in the guest:<br />
<br />
{{hc|$ dmesg {{!}} grep drm |<br />
[drm] pci: virtio-vga detected<br />
[drm] virgl 3d acceleration enabled<br />
}}<br />
<br />
=== cirrus ===<br />
<br />
The cirrus graphical adapter was the default [http://wiki.qemu.org/ChangeLog/2.2#VGA before 2.2]. It [https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/ should not] be used on modern systems.<br />
<br />
=== none ===<br />
<br />
This is like a PC that has no VGA card at all. You would not even be able to access it with the {{ic|-vnc}} option. Also, this is different from the {{ic|-nographic}} option which lets QEMU emulate a VGA card, but disables the SDL display.<br />
<br />
== SPICE ==<br />
The [http://spice-space.org/ SPICE project] aims to provide a complete open source solution for remote access to virtual machines in a seamless way. SPICE can only be used when using [[#qxl]] as the graphical output.<br />
=== Enabling SPICE via the command line ===<br />
The following is an example of booting with SPICE as the remote desktop protocol, including support for copy and paste from the host:<br />
<br />
$ qemu-system-x86_64 -vga qxl -device virtio-serial-pci -spice port=5930,disable-ticketing -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent<br />
<br />
The parameters have the following meaning:<br />
# {{ic|-device virtio-serial-pci}} adds a virtio-serial device.<br />
# {{ic|1=-device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0}} opens a port for the spice vdagent in the virtio-serial device.<br />
# {{ic|1=-chardev spicevmc,id=spicechannel0,name=vdagent}} adds a spicevmc chardev for that port. It is important that the {{ic|1=chardev=}} option of the {{ic|virtserialport}} device matches the {{ic|1=id=}} option given to the {{ic|chardev}} option ({{ic|spicechannel0}} in this example). It is also important that the port name is {{ic|com.redhat.spice.0}}, because that is the namespace where vdagent looks in the guest. Finally, specify {{ic|1=name=vdagent}} so that spice knows what this channel is for.<br />
{{Tip|<br />
* Since QEMU in SPICE mode acts similarly to a remote desktop server, it may be more convenient to run QEMU in daemon mode with the {{ic|-daemonize}} parameter.<br />
* Using [[wikipedia:Unix_socket|Unix sockets]] instead of TCP ports does not involve using network stack on the host system, so it is [https://unix.stackexchange.com/questions/91774/performance-of-unix-sockets-vs-tcp-ports reportedly] better for performance. Example:<br />
{{bc|1=$ qemu-system-x86_64 -vga qxl -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent -spice unix,addr=/tmp/vm_spice.socket,disable-ticketing}}<br />
Then connect with {{ic|$ remote-viewer spice+unix:///tmp/vm_spice.socket}} or with {{ic|1=$ spicy --uri="spice+unix:///tmp/vm_spice.socket"}}.<br />
}}<br />
<br />
=== Connect to the guest with a SPICE client ===<br />
A SPICE client is necessary to connect to the guest. In Arch, the following clients are available:<br />
<br />
{{App|virt-viewer|is the recommended SPICE client by the protocol developers|run it with {{ic|$ remote-viewer spice://127.0.0.1:5930}}|{{Pkg|virt-viewer}}}}<br />
<br />
{{App|spice-gtk|is a GTK+ client which can also be used|run it with {{ic|$ spicy -h 127.0.0.1 -p 5930}}|{{Pkg|spice-gtk}}}}<br />
<br />
{{Tip|To connect to the guest through SSH tunnelling, the following type of command can be used: {{bc|$ ssh -fL 5999:localhost:5930 ''my.domain.org'' sleep 10; spicy -h 127.0.0.1 -p 5999}}<br />
This example connects ''spicy'' to the local port {{ic|5999}} which is forwarded through SSH to the guest's SPICE server located at the address ''my.domain.org'', port {{ic|5930}}.<br />
Note the {{ic|-f}} option that requests ssh to execute the command {{ic|sleep 10}} in the background. This way, the ssh session runs while the client is active and auto-closes once the client ends.<br />
}}<br />
<br />
For clients that run on smartphones or other platforms, refer to the ''Other clients'' section in [http://www.spice-space.org/download.html spice-space download].<br />
<br />
=== SPICE support on the guest ===<br />
For '''Arch Linux guests''', the following packages should be installed for improved support for multiple monitors and clipboard sharing:<br />
* {{Pkg|spice-vdagent}}: Spice agent xorg client that enables copy and paste between client and X-session and more. [[Enable]] {{ic|spice-vdagentd.service}} after installation.<br />
* {{Pkg|xf86-video-qxl}}: Xorg X11 qxl video driver<br />
For guests under '''other operating systems''', refer to the ''Guest'' section in [http://www.spice-space.org/download.html spice-space download].<br />
<br />
=== Password authentication with SPICE ===<br />
If you want to enable password authentication with SPICE you need to remove {{ic|disable-ticketing}} from the {{ic|-spice}} argument and instead add {{ic|1=password=''yourpassword''}}. For example:<br />
<br />
$ qemu-system-x86_64 -vga qxl -spice port=5900,password=''yourpassword'' -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent<br />
<br />
Your SPICE client should now ask for the password to be able to connect to the SPICE server.<br />
<br />
=== TLS encrypted communication with SPICE ===<br />
<br />
You can also configure TLS encryption for communicating with the SPICE server. First, you need to have a directory which contains the following files (the names must be exactly as indicated):<br />
* {{ic|ca-cert.pem}}: the CA master certificate.<br />
* {{ic|server-cert.pem}}: the server certificate signed with {{ic|ca-cert.pem}}.<br />
* {{ic|server-key.pem}}: the server private key.<br />
<br />
An example of generation of self-signed certificates with your own generated CA for your server is shown in the [https://www.spice-space.org/spice-user-manual.html#_generating_self_signed_certificates Spice User Manual].<br />
<br />
Afterwards, you can run QEMU with SPICE as explained above but using the following {{ic|-spice}} argument: {{ic|1=-spice tls-port=5901,password=''yourpassword'',x509-dir=''/path/to/pki_certs''}}, where {{ic|''/path/to/pki_certs''}} is the directory path that contains the three needed files shown earlier.<br />
<br />
It is now possible to connect to the server using {{pkg|virt-viewer}}:<br />
<br />
$ remote-viewer spice://''hostname''?tls-port=5901 --spice-ca-file=''/path/to/ca-cert.pem'' --spice-host-subject="C=''XX'',L=''city'',O=''organization'',CN=''hostname''" --spice-secure-channels=all<br />
<br />
Keep in mind that the {{ic|--spice-host-subject}} parameter needs to be set according to your {{ic|server-cert.pem}} subject. You also need to copy {{ic|ca-cert.pem}} to every client to verify the server certificate.<br />
<br />
{{Tip|You can get the subject line of the server certificate in the correct format for {{ic|--spice-host-subject}} (with entries separated by commas) using the following command: {{bc|<nowiki>$ openssl x509 -noout -subject -in server-cert.pem | cut -d' ' -f2- | sed 's/\///' | sed 's/\//,/g'</nowiki>}}<br />
}}<br />
<br />
The equivalent {{Pkg|spice-gtk}} command is:<br />
<br />
$ spicy -h ''hostname'' -s 5901 --spice-ca-file=ca-cert.pem --spice-host-subject="C=''XX'',L=''city'',O=''organization'',CN=''hostname''" --spice-secure-channels=all<br />
<br />
== VNC ==<br />
<br />
One can add the {{ic|-vnc :''X''}} option to have QEMU redirect the VGA display to the VNC session. Substitute {{ic|''X''}} for the number of the display (0 will then listen on 5900, 1 on 5901...).<br />
<br />
$ qemu-system-x86_64 -vnc :0<br />
<br />
An example is also provided in the [[#Starting QEMU virtual machines on boot]] section.<br />
{{Warning|The default VNC server setup does not use any form of authentication. Any user can connect from any host.}}<br />
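The display-number-to-TCP-port mapping described above can be sketched as a small shell helper (the function name {{ic|vnc_port}} is made up for illustration):<br />

```shell
# Hypothetical helper: compute the TCP port for a given VNC display number.
# VNC display :X listens on TCP port 5900 + X.
vnc_port() {
    echo $((5900 + $1))
}

vnc_port 0   # 5900
vnc_port 1   # 5901
```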
<br />
=== Basic password authentication ===<br />
<br />
An access password can be set up using the {{ic|password}} option. The password must be set in the QEMU monitor, and connection is only possible once the password is provided.<br />
<br />
$ qemu-system-x86_64 -vnc :0,password -monitor stdio<br />
<br />
In the QEMU monitor, password is set using the command {{ic|change vnc password}} and then indicating the password.<br />
<br />
Alternatively, one can create the following file:<br />
<br />
{{hc|vncpassword.txt|change vnc password<br />
''mykvmvncpassword''}}<br />
<br />
Then the following command line starts VNC with the password already set:<br />
<br />
$ cat vncpassword.txt | qemu-system-x86_64 -vnc :0,password -monitor stdio<br />
<br />
{{Note|The password is limited to 8 characters and can be guessed through a brute force attack. More elaborate protection is strongly recommended on public networks.}}<br />
<br />
== Audio ==<br />
<br />
=== Host ===<br />
<br />
The audio driver used by QEMU is set with the {{ic|QEMU_AUDIO_DRV}} environment variable:<br />
<br />
$ export QEMU_AUDIO_DRV=pa<br />
<br />
Run the following command to get QEMU's configuration options related to PulseAudio:<br />
<br />
$ qemu-system-x86_64 -audio-help | awk '/Name: pa/' RS=<br />
<br />
The listed options can be exported as environment variables, for example:<br />
<br />
{{bc|1=<br />
$ export QEMU_PA_SINK=alsa_output.pci-0000_04_01.0.analog-stereo.monitor<br />
$ export QEMU_PA_SOURCE=input<br />
}}<br />
<br />
=== Guest ===<br />
To get the list of supported audio emulation drivers:<br />
$ qemu-system-x86_64 -soundhw help<br />
<br />
To use e.g. the {{ic|hda}} driver for the guest, add {{ic|-soundhw hda}} to the QEMU command line.<br />
<br />
{{Note|The emulated video graphics driver for the guest machine may also cause problems with sound quality. Test them one by one to make it work. You can list the possible options with {{ic|<nowiki>qemu-system-x86_64 -h | grep vga</nowiki>}}.}}<br />
<br />
== Installing virtio drivers ==<br />
<br />
QEMU offers guests the ability to use paravirtualized block and network devices using the [http://wiki.libvirt.org/page/Virtio virtio] drivers, which provide better performance and lower overhead.<br />
<br />
* A virtio block device requires the option {{Ic|-drive}} for passing a disk image, with parameter {{Ic|1=if=virtio}}:<br />
$ qemu-system-x86_64 -boot order=c -drive file=''disk_image'',if=virtio<br />
<br />
* Almost the same goes for the network:<br />
$ qemu-system-x86_64 -net nic,model=virtio<br />
<br />
{{Note|This will only work if the guest machine has drivers for virtio devices. Linux does, and the required drivers are included in Arch Linux, but there is no guarantee that virtio devices will work with other operating systems.}}<br />
<br />
=== Preparing an (Arch) Linux guest ===<br />
<br />
To use virtio devices after an Arch Linux guest has been installed, the following modules must be loaded in the guest: {{Ic|virtio}}, {{Ic|virtio_pci}}, {{Ic|virtio_blk}}, {{Ic|virtio_net}}, and {{Ic|virtio_ring}}. For 32-bit guests, the specific "virtio" module is not necessary.<br />
<br />
If you want to boot from a virtio disk, the initial ramdisk must contain the necessary modules. By default, this is handled by [[mkinitcpio]]'s {{ic|autodetect}} hook. Otherwise use the {{ic|MODULES}} array in {{ic|/etc/mkinitcpio.conf}} to include the necessary modules and rebuild the initial ramdisk.<br />
<br />
{{hc|/etc/mkinitcpio.conf|2=<br />
MODULES=(virtio virtio_blk virtio_pci virtio_net)}}<br />
<br />
Virtio disks are recognized with the prefix {{ic|'''v'''}} (e.g. {{ic|'''v'''da}}, {{ic|'''v'''db}}, etc.); therefore, changes must be made in at least {{ic|/etc/fstab}} and {{ic|/boot/grub/grub.cfg}} when booting from a virtio disk.<br />
<br />
{{Tip|When referencing disks by [[UUID]] in both {{ic|/etc/fstab}} and bootloader, nothing has to be done.}}<br />
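If {{ic|/etc/fstab}} does reference devices by name, the rename can be sketched with sed (illustrative only; back up the file first, and double-check the result before rebooting). The example operates on a local copy:<br />

```shell
# Illustrative only: rewrite SATA/IDE device names (/dev/sdXN) to virtio
# names (/dev/vdXN). This works on a sample copy; on a real guest it would
# target /etc/fstab, keeping a .bak backup via sed -i.bak.
cat > fstab.example <<'EOF'
/dev/sda1 / ext4 rw,relatime 0 1
/dev/sda2 none swap defaults 0 0
EOF
sed -i.bak 's|/dev/sd\([a-z][0-9]*\)|/dev/vd\1|g' fstab.example
cat fstab.example
```

The original content is preserved in {{ic|fstab.example.bak}} in case the substitution needs to be reverted.<br />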
<br />
Further information on paravirtualization with KVM can be found [http://www.linux-kvm.org/page/Boot_from_virtio_block_device here].<br />
<br />
You might also want to install {{Pkg|qemu-guest-agent}} to implement support for QMP commands that will enhance the hypervisor management capabilities. After installing the package you can enable and start the {{ic|qemu-ga.service}}.<br />
<br />
=== Preparing a Windows guest ===<br />
<br />
{{Note|1=The only (reliable) way to upgrade a Windows 8.1 guest to Windows 10 seems to be to temporarily choose cpu core2duo,nx for the install [http://ubuntuforums.org/showthread.php?t=2289210]. After the install, you may revert to other cpu settings (8/8/2015).}}<br />
<br />
==== Block device drivers ====<br />
<br />
===== New Install of Windows =====<br />
<br />
Windows does not come with the virtio drivers. Therefore, you will need to load them during installation. There are basically two ways to do this: via Floppy Disk or via ISO files. Both images can be downloaded from the [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso Fedora repository].<br />
<br />
The floppy disk option is difficult because you will need to press F6 (Shift-F6 on newer Windows) at the very beginning of powering on the QEMU. This is difficult since you need time to connect your VNC console window. You can attempt to add a delay to the boot sequence. See {{man|1|qemu}} for more details about applying a delay at boot.<br />
<br />
The ISO option to load drivers is the preferred way, but it is available only on Windows Vista and Windows Server 2008 and later. The procedure is to load the image with virtio drivers in an additional cdrom device along with the primary disk device and Windows installer:<br />
<br />
$ qemu-system-x86_64 ... \<br />
-drive file=''/path/to/primary/disk.img'',index=0,media=disk,if=virtio \<br />
-drive file=''/path/to/installer.iso'',index=2,media=cdrom \<br />
-drive file=''/path/to/virtio.iso'',index=3,media=cdrom \<br />
...<br />
<br />
During the installation, the Windows installer will ask you for your Product key and perform some additional checks. When it gets to the "Where do you want to install Windows?" screen, it will give a warning that no disks are found. Follow the example instructions below (based on Windows Server 2012 R2 with Update).<br />
<br />
* Select the option {{ic|Load Drivers}}.<br />
* Uncheck the box for "Hide drivers that aren't compatible with this computer's hardware".<br />
* Click the Browse button and open the CDROM for the virtio iso, usually named "virtio-win-XX".<br />
* Now browse to {{ic|E:\viostor\[your-os]\amd64}}, select it, and press OK.<br />
* Click Next<br />
<br />
You should now see your virtio disk(s) listed here, ready to be selected, formatted and installed to.<br />
<br />
===== Change Existing Windows VM to use virtio =====<br />
Modifying an existing Windows guest for booting from virtio disk is a bit tricky.<br />
<br />
You can download the virtio disk driver from the [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso Fedora repository].<br />
<br />
Now you need to create a new disk image, which will force Windows to search for the driver. For example:<br />
<br />
$ qemu-img create -f qcow2 ''fake.qcow2'' 1G<br />
<br />
Run the original Windows guest (with the boot disk still in IDE mode) with the fake disk (in virtio mode) and a CD-ROM with the driver.<br />
<br />
$ qemu-system-x86_64 -m 512 -drive file=''windows_disk_image'',if=ide -drive file=''fake.qcow2'',if=virtio -cdrom virtio-win-0.1-81.iso<br />
<br />
Windows will detect the fake disk and try to find a driver for it. If it fails, go to the ''Device Manager'', locate the SCSI drive with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not navigate to the driver folder within the CD-ROM, simply select the CD-ROM drive and Windows will find the appropriate driver automatically (tested for Windows 7 SP1). Do not forget to select the checkbox which says to search for directories recursively.<br />
<br />
When the installation is successful, you can turn off the virtual machine and launch it again, now with the boot disk attached in virtio mode:<br />
<br />
$ qemu-system-x86_64 -m 512 -drive file=''windows_disk_image'',if=virtio<br />
<br />
{{Note|If you encounter the Blue Screen of Death, make sure you did not forget the {{ic|-m}} parameter, and that you do not boot with virtio instead of ide for the system drive before drivers are installed.}}<br />
<br />
==== Network drivers ====<br />
<br />
Installing virtio network drivers is a bit easier, simply add the {{ic|-net}} argument as explained above.<br />
<br />
$ qemu-system-x86_64 -m 512 -drive file=''windows_disk_image'',if=virtio -net nic,model=virtio -cdrom virtio-win-0.1-74.iso<br />
<br />
Windows will detect the network adapter and try to find a driver for it. If it fails, go to the ''Device Manager'', locate the network adapter with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not forget to select the checkbox which says to search for directories recursively.<br />
<br />
==== Balloon driver ====<br />
<br />
If you want to track your guest's memory state (for example via the {{ic|virsh}} command {{ic|dommemstat}}) or change the guest's memory size at runtime (you still will not be able to change the memory size itself, but you can limit memory usage by inflating the balloon), you will need to install the guest balloon driver.<br />
<br />
For this, go to ''Device Manager'', locate ''PCI standard RAM Controller'' in ''System devices'' (or an unrecognized PCI controller in ''Other devices'') and choose ''Update driver''. In the window that opens, choose ''Browse my computer...'' and select the CD-ROM (do not forget the ''Include subdirectories'' checkbox). Reboot after installation. This installs the driver and lets you inflate the balloon (for example via the HMP command {{ic|balloon ''memory_size''}}, which makes the balloon take as much memory as possible in order to shrink the guest's available memory size to ''memory_size''). However, you still will not be able to track the guest memory state. To do this, you need to install the ''Balloon'' service properly. Open a command line as administrator, go to the CD-ROM, enter the ''Balloon'' directory and descend further depending on your system and architecture. Once in the ''amd64'' (''x86'') directory, run {{ic|blnsrv.exe -i}}, which performs the installation. After that, the {{ic|virsh}} command {{ic|dommemstat}} should output all supported values.<br />
<br />
=== Preparing a FreeBSD guest ===<br />
<br />
Install the {{ic|emulators/virtio-kmod}} port if you are using FreeBSD 8.3 or later, up until 10.0-CURRENT where the drivers are included in the kernel. After installation, add the following to your {{ic|/boot/loader.conf}} file:<br />
<br />
{{bc|<nowiki><br />
virtio_load="YES"<br />
virtio_pci_load="YES"<br />
virtio_blk_load="YES"<br />
if_vtnet_load="YES"<br />
virtio_balloon_load="YES"<br />
</nowiki>}}<br />
<br />
Then modify your {{ic|/etc/fstab}} by doing the following:<br />
<br />
{{bc|<nowiki><br />
sed -ibak "s/ada/vtbd/g" /etc/fstab<br />
</nowiki>}}<br />
<br />
And verify that {{ic|/etc/fstab}} is consistent. If anything goes wrong, just boot into a rescue CD and copy {{ic|/etc/fstab.bak}} back to {{ic|/etc/fstab}}.<br />
<br />
== QEMU monitor ==<br />
<br />
While QEMU is running, a monitor console is provided for interacting with the running virtual machine. The QEMU monitor offers interesting capabilities such as obtaining information about the current virtual machine, hotplugging devices, creating snapshots of the current state of the virtual machine, etc. To see the list of all commands, run {{ic|help}} or {{ic|?}} in the QEMU monitor console or review the relevant section of the [https://qemu.weilnetz.de/doc/qemu-doc.html#pcsys_005fmonitor official QEMU documentation].<br />
<br />
=== Accessing the monitor console ===<br />
<br />
When using the {{ic|std}} default graphics option, one can access the QEMU monitor by pressing {{ic|Ctrl+Alt+2}} or by clicking ''View > compatmonitor0'' in the QEMU window. To return to the virtual machine graphical view either press {{ic|Ctrl+Alt+1}} or click ''View > VGA''.<br />
<br />
However, the standard method of accessing the monitor is not always convenient and does not work with all the graphic outputs QEMU supports. Alternative ways of accessing the monitor are described below:<br />
<br />
* [[telnet]]: Run QEMU with the {{ic|-monitor telnet:127.0.0.1:''port'',server,nowait}} parameter. When the virtual machine is started you will be able to access the monitor via telnet:<br />
$ telnet 127.0.0.1 ''port''<br />
{{Note|If {{ic|127.0.0.1}} is specified as the IP to listen on, it will only be possible to connect to the monitor from the host QEMU is running on. If connecting from remote hosts is desired, QEMU must be told to listen on {{ic|0.0.0.0}} as follows: {{ic|-monitor telnet:0.0.0.0:''port'',server,nowait}}. Keep in mind that it is recommended to have a [[firewall]] configured in this case, or to make sure your local network is completely trustworthy, since this connection is completely unauthenticated and unencrypted.}}<br />
<br />
* UNIX socket: Run QEMU with the {{ic|-monitor unix:''socketfile'',server,nowait}} parameter. Then you can connect with either {{pkg|socat}} or {{pkg|openbsd-netcat}}.<br />
<br />
For example, if QEMU is run via:<br />
<br />
$ qemu-system-x86_64 ''[...]'' -monitor unix:/tmp/monitor.sock,server,nowait ''[...]''<br />
<br />
It is possible to connect to the monitor with:<br />
<br />
$ socat - UNIX-CONNECT:/tmp/monitor.sock<br />
<br />
Or with:<br />
<br />
$ nc -U /tmp/monitor.sock<br />
<br />
* TCP: You can expose the monitor over TCP with the argument {{ic|-monitor tcp:127.0.0.1:''port'',server,nowait}}. Then connect with netcat, either {{pkg|openbsd-netcat}} or {{pkg|gnu-netcat}} by running:<br />
<br />
$ nc 127.0.0.1 ''port''<br />
<br />
{{Note|To be able to connect to the TCP socket from devices other than the host QEMU is running on, you need to listen on {{ic|0.0.0.0}} as explained in the telnet case. The same security warnings apply here as well.}}<br />
<br />
* Standard I/O: It is possible to access the monitor from the same terminal QEMU is run in by starting it with the argument {{ic|-monitor stdio}}.<br />
<br />
=== Sending keyboard presses to the virtual machine using the monitor console ===<br />
<br />
Some combinations of keys may be difficult to perform on virtual machines due to the host intercepting them instead in some configurations (a notable example is the {{ic|Ctrl+Alt+F*}} key combinations, which change the active tty). To avoid this problem, the problematic combination of keys may be sent via the monitor console instead. Switch to the monitor and use the {{ic|sendkey}} command to forward the necessary keypresses to the virtual machine. For example:<br />
<br />
(qemu) sendkey ctrl-alt-f2<br />
<br />
=== Creating and managing snapshots via the monitor console ===<br />
<br />
{{Note|This feature will '''only''' work when the virtual machine disk image is in ''qcow2'' format. It will not work with ''raw'' images.}}<br />
<br />
It is sometimes desirable to save the current state of a virtual machine and be able to revert it to a previously saved snapshot at any time. The QEMU monitor console provides the user with the necessary utilities to create snapshots, manage them, and revert the machine state to a saved snapshot.<br />
<br />
* Use {{ic|savevm ''name''}} in order to create a snapshot with the tag ''name''.<br />
* Use {{ic|loadvm ''name''}} to revert the virtual machine to the state of the snapshot ''name''.<br />
* Use {{ic|delvm ''name''}} to delete the snapshot tagged as ''name''.<br />
* Use {{ic|info snapshots}} to see a list of saved snapshots. Snapshots are identified by both an auto-incremented ID number and a text tag (set by the user on snapshot creation).<br />
<br />
=== Running the virtual machine in immutable mode ===<br />
<br />
It is possible to run a virtual machine in a frozen state so that all changes will be discarded when the virtual machine is powered off just by running QEMU with the {{ic|-snapshot}} parameter. When the disk image is written by the guest, changes will be saved in a temporary file in {{ic|/tmp}} and will be discarded when QEMU halts.<br />
<br />
However, if a machine is running in frozen mode, it is still possible to save the changes to the disk image afterwards, if desired, by using the monitor console and running the following command:<br />
<br />
(qemu) commit all<br />
<br />
If snapshots are created when running in frozen mode, they will be discarded as soon as QEMU exits, unless changes are explicitly committed to disk as well.<br />
<br />
=== Pause and power options via the monitor console ===<br />
<br />
Some operations of a physical machine can be emulated by QEMU using some monitor commands:<br />
<br />
* {{ic|system_powerdown}} will send an ACPI shutdown request to the virtual machine. This effect is similar to the power button in a physical machine.<br />
* {{ic|system_reset}} will reset the virtual machine similarly to a reset button in a physical machine. This operation can cause data loss and file system corruption since the virtual machine is not cleanly restarted.<br />
* {{ic|stop}} will pause the virtual machine.<br />
* {{ic|cont}} will resume a virtual machine previously paused.<br />
<br />
=== Taking screenshots of the virtual machine ===<br />
<br />
Screenshots of the virtual machine graphic display can be obtained in the PPM format by running the following command in the monitor console:<br />
<br />
(qemu) screendump ''file.ppm''<br />
<br />
== QEMU machine protocol ==<br />
<br />
The QEMU machine protocol (QMP) is a JSON-based protocol which allows applications to control a QEMU instance. Similarly to the [[#QEMU monitor]] it offers ways to interact with a running machine and the JSON protocol allows to do it programmatically. The description of all the QMP commands can be found in [https://raw.githubusercontent.com/coreos/qemu/master/qmp-commands.hx qmp-commands].<br />
<br />
=== Start QMP ===<br />
<br />
The usual way to control the guest using the QMP protocol is to open a TCP socket when launching the machine with the {{ic|-qmp}} option. The following example uses TCP port 4444:<br />
<br />
$ qemu-system-x86_64 ''[...]'' -qmp tcp:localhost:4444,server,nowait<br />
<br />
Then one way to communicate with the QMP agent is to use [[netcat]]:<br />
<br />
{{hc|nc localhost 4444|{"QMP": {"version": {"qemu": {"micro": 0, "minor": 1, "major": 3}, "package": ""}, "capabilities": []} } }}<br />
<br />
At this stage, the only command that will be recognized is {{ic|qmp_capabilities}}, which makes QMP enter command mode. Type:<br />
<br />
{"execute": "qmp_capabilities"}<br />
<br />
Now QMP is ready to receive commands. To retrieve the list of recognized commands, use:<br />
<br />
{"execute": "query-commands"}<br />
<br />
=== Live merging of child image into parent image ===<br />
<br />
It is possible to merge a running snapshot into its parent by issuing a {{ic|block-commit}} command. In its simplest form the following line will commit the child into its parent:<br />
{"execute": "block-commit", "arguments": {"device": "''devicename''"}}<br />
<br />
Upon reception of this command, the handler looks for the base image, converts it from read-only to read-write mode, and then runs the commit job.<br />
<br />
Once the ''block-commit'' operation has completed, the event {{ic|BLOCK_JOB_READY}} will be emitted, signalling that the synchronization has finished. The job can then be gracefully completed by issuing the command {{ic|block-job-complete}}:<br />
<br />
{"execute": "block-job-complete", "arguments": {"device": "''devicename''"}}<br />
<br />
Until such a command is issued, the ''commit'' operation remains active.<br />
After successful completion, the base image remains in read-write mode and becomes the new active layer. The child image, on the other hand, becomes invalid, and it is the user's responsibility to clean it up.<br />
<br />
{{Tip|The list of devices and their names can be retrieved by executing the command {{ic|query-block}} and parsing the results. The device name is in the {{ic|device}} field, for example {{ic|ide0-hd0}} for the hard disk in this example: {{hc|{"execute": "query-block"}|{"return": [{"io-status": "ok", "device": "'''ide0-hd0'''", "locked": false, "removable": false, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"backing-image": {"virtual-size": 27074281472, "filename": "parent.qcow2", ... } }} }}<br />
<br />
=== Live creation of a new snapshot ===<br />
To create a new snapshot out of a running image, run the command:<br />
{"execute": "blockdev-snapshot-sync", "arguments": {"device": "''devicename''","snapshot-file": "''new_snapshot_name''.qcow2"}}<br />
<br />
This creates an overlay file named {{ic|''new_snapshot_name''.qcow2}} which then becomes the new active layer.<br />
<br />
== Tips and tricks ==<br />
=== Improve virtual machine performance ===<br />
<br />
There are a number of techniques that you can use to improve the performance of the virtual machine. For example:<br />
<br />
* Use the {{ic|-cpu host}} option to make QEMU emulate the host's exact CPU. Otherwise QEMU may emulate a more generic CPU.<br />
* Especially for Windows guests, enable [http://blog.wikichoon.com/2014/07/enabling-hyper-v-enlightenments-with-kvm.html Hyper-V enlightenments]: {{ic|1=-cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time}}.<br />
* If the host machine has multiple cores, assign the guest more cores using the {{ic|-smp}} option.<br />
* Make sure you have assigned the virtual machine enough memory. By default, QEMU only assigns 128 MiB of memory to each virtual machine. Use the {{ic|-m}} option to assign more memory. For example, {{ic|-m 1024}} runs a virtual machine with 1024 MiB of memory.<br />
* Use KVM if possible: add {{ic|1=-machine type=pc,accel=kvm}} to the QEMU start command you use.<br />
* If supported by drivers in the guest operating system, use [http://wiki.libvirt.org/page/Virtio virtio] for network and/or block devices. For example:<br />
$ qemu-system-x86_64 -net nic,model=virtio -net tap,ifname=tap0,script=no -drive file=''disk_image'',media=disk,if=virtio<br />
* Use TAP devices instead of user-mode networking. See [[#Tap networking with QEMU]].<br />
* If the guest OS is doing heavy writing to its disk, you may benefit from certain mount options on the host's file system. For example, you can mount an [[Ext4|ext4 file system]] with the option {{ic|1=barrier=0}}. You should read the documentation for any options that you change because sometimes performance-enhancing options for file systems come at the cost of data integrity.<br />
* If you have a raw disk image, you may want to disable the cache:<br />
$ qemu-system-x86_64 -drive file=''disk_image'',if=virtio,'''cache=none'''<br />
* Use the native Linux AIO:<br />
$ qemu-system-x86_64 -drive file=''disk_image'',if=virtio''',aio=native,cache.direct=on'''<br />
* If you use a qcow2 disk image, I/O performance can be improved considerably by ensuring that the L2 cache is of sufficient size. The [https://blogs.igalia.com/berto/2015/12/17/improving-disk-io-performance-in-qemu-2-5-with-the-qcow2-l2-cache/ formula] to calculate L2 cache is: l2_cache_size = disk_size * 8 / cluster_size. Assuming the qcow2 image was created with the default cluster size of 64K, this means that for every 8 GB in size of the qcow2 image, 1 MB of L2 cache is best for performance. Only 1 MB is used by QEMU by default; specifying a larger cache is done on the QEMU command line. For instance, to specify 4 MB of cache (suitable for a 32 GB disk with a cluster size of 64K):<br />
$ qemu-system-x86_64 -drive file=''disk_image'',format=qcow2,l2-cache-size=4M<br />
* If you are running multiple virtual machines concurrently that all have the same operating system installed, you can save memory by enabling [[wikipedia:Kernel_SamePage_Merging_(KSM)|kernel same-page merging]]. See [[#Enabling KSM]].<br />
* In some cases, memory can be reclaimed from running virtual machines by running a memory ballooning driver in the guest operating system and launching QEMU using {{ic|-device virtio-balloon}}.<br />
* It is possible to use an emulation layer for an ICH-9 AHCI controller (although it may be unstable). The AHCI emulation supports [[Wikipedia:Native_Command_Queuing|NCQ]], so multiple read or write requests can be outstanding at the same time:<br />
$ qemu-system-x86_64 -drive id=disk,file=''disk_image'',if=none -device ich9-ahci,id=ahci -device ide-drive,drive=disk,bus=ahci.0<br />
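The L2 cache formula above can be checked with plain shell arithmetic; the 32 GB disk and 64K cluster size below are the example values from the list, not a recommendation for your image:<br />
<br />
```shell
# Compute the qcow2 L2 cache size needed to cover a whole image, using
# l2_cache_size = disk_size * 8 / cluster_size (sizes in bytes).
disk_size=$((32 * 1024 * 1024 * 1024))   # example: 32 GiB image
cluster_size=$((64 * 1024))              # default 64K clusters
l2_cache_size=$((disk_size * 8 / cluster_size))
echo "$((l2_cache_size / 1024 / 1024)) MiB"   # 4 MiB, matching the example above
```
<br />
The result is what you would pass as {{ic|1=l2-cache-size=4M}} in the {{ic|-drive}} option shown above.<br />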
<br />
See http://www.linux-kvm.org/page/Tuning_KVM for more information.<br />
<br />
=== Starting QEMU virtual machines on boot ===<br />
<br />
==== With libvirt ====<br />
<br />
If a virtual machine is set up with [[libvirt]], it can be configured with {{ic|virsh autostart}} or through the ''virt-manager'' GUI to start at host boot by going to the Boot Options for the virtual machine and selecting "Start virtual machine on host boot up".<br />
<br />
==== With systemd service ====<br />
<br />
To run QEMU VMs on boot, you can use following systemd unit and config.<br />
<br />
{{hc|/etc/systemd/system/qemu@.service|2=<br />
[Unit]<br />
Description=QEMU virtual machine<br />
<br />
[Service]<br />
Environment="type=system-x86_64" "haltcmd=kill -INT $MAINPID"<br />
EnvironmentFile=/etc/conf.d/qemu.d/%i<br />
ExecStart=/usr/bin/qemu-${type} -name %i -nographic $args<br />
ExecStop=/bin/sh -c ${haltcmd}<br />
TimeoutStopSec=30<br />
KillMode=none<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
}}<br />
<br />
{{Note|According to the {{man|5|systemd.service}} and {{man|5|systemd.kill}} man pages it is necessary to use the {{ic|1=KillMode=none}} option. Otherwise the main QEMU process will be killed immediately after the {{ic|ExecStop}} command quits (it simply echoes one string) and your guest system will not be able to shut down correctly.<br />
}}<br />
<br />
Then create per-VM configuration files, named {{ic|/etc/conf.d/qemu.d/''vm_name''}}, with the variables {{ic|type}}, {{ic|args}} and {{ic|haltcmd}} set. Example configs:<br />
<br />
{{hc|/etc/conf.d/qemu.d/one|<nowiki><br />
type="system-x86_64"<br />
<br />
args="-enable-kvm -m 512 -hda /dev/vg0/vm1 -net nic,macaddr=DE:AD:BE:EF:E0:00 \<br />
-net tap,ifname=tap0 -serial telnet:localhost:7000,server,nowait,nodelay \<br />
-monitor telnet:localhost:7100,server,nowait,nodelay -vnc :0"<br />
<br />
haltcmd="echo 'system_powerdown' | nc localhost 7100" # or netcat/ncat<br />
<br />
# You can use other ways to shut down your VM correctly<br />
#haltcmd="ssh powermanager@vm1 sudo poweroff"<br />
</nowiki>}}<br />
<br />
{{hc|/etc/conf.d/qemu.d/two|<nowiki><br />
args="-enable-kvm -m 512 -hda /srv/kvm/vm2.img -net nic,macaddr=DE:AD:BE:EF:E0:01 \<br />
-net tap,ifname=tap1 -serial telnet:localhost:7001,server,nowait,nodelay \<br />
-monitor telnet:localhost:7101,server,nowait,nodelay -vnc :1"<br />
<br />
haltcmd="echo 'system_powerdown' | nc localhost 7101"<br />
</nowiki>}}<br />
<br />
The variables are as follows:<br />
* {{ic|type}} - QEMU binary to call. The value is appended to {{ic|/usr/bin/qemu-}}, and the resulting binary is used to start the VM.<br />
* {{ic|args}} - QEMU command line to start with. It is always prepended with {{ic|-name ${vm} -nographic}}.<br />
* {{ic|haltcmd}} - Command to shut down a VM safely. In this example, the QEMU monitor is exposed via telnet using {{ic|-monitor telnet:..}} and the VMs are powered off via ACPI by sending {{ic|system_powerdown}} to the monitor with the {{ic|nc}} command. You can use SSH or other methods as well.<br />
<br />
To set which virtual machines will start on boot-up, [[enable]] the {{ic|qemu@''vm_name''.service}} systemd unit.<br />
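The following rough sketch shows what the unit effectively executes for a hypothetical VM named {{ic|one}}; the config values are throwaway examples, not your real setup:<br />
<br />
```shell
# Simulate what qemu@one.service does: EnvironmentFile= sources the per-VM
# config, then ExecStart builds the command /usr/bin/qemu-${type} -name %i ...
conf=$(mktemp)
cat > "$conf" <<'EOF'
type="system-x86_64"
args="-enable-kvm -m 512"
haltcmd="echo system_powerdown"
EOF
vm=one                 # this is what %i expands to for qemu@one.service
. "$conf"              # EnvironmentFile= does this for the unit
echo "/usr/bin/qemu-${type} -name ${vm} -nographic ${args}"
rm -f "$conf"
```
<br />
Running it prints the full QEMU command line the unit would start, which is a convenient way to sanity-check a new config file before enabling the service.<br />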
<br />
=== Mouse integration ===<br />
<br />
To prevent the mouse from being grabbed when clicking on the guest operating system's window, add the options {{ic|-usb -device usb-tablet}}. This means QEMU is able to report the mouse position without having to grab the mouse. This also overrides PS/2 mouse emulation when activated. For example:<br />
<br />
$ qemu-system-x86_64 -hda ''disk_image'' -m 512 -usb -device usb-tablet<br />
<br />
If that does not work, try using {{ic|-vga qxl}} parameter, also look at the instructions [[#Mouse cursor is jittery or erratic]].<br />
<br />
=== Pass-through host USB device ===<br />
<br />
To access a physical USB device connected to the host from the VM, you can use the option {{ic|-usbdevice host:''vendor_id'':''product_id''}}.<br />
<br />
You can find the {{ic|vendor_id}} and {{ic|product_id}} of your device with the {{ic|lsusb}} command.<br />
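As a sketch, the two IDs can be extracted from an {{ic|lsusb}} line with {{ic|sed}}; the sample line below is hard-coded so the snippet runs anywhere, and the device shown is purely illustrative — in practice pipe the real {{ic|lsusb}} output through the same expression:<br />
<br />
```shell
# Pull vendor_id and product_id out of an lsusb-formatted line.
line='Bus 003 Device 007: ID 0781:5406 SanDisk Corp. Cruzer Micro U3'
ids=$(echo "$line" | sed -n 's/.* ID \([0-9a-f]*\):\([0-9a-f]*\).*/\1 \2/p')
set -- $ids
vendor_id=$1
product_id=$2
echo "-usbdevice host:${vendor_id}:${product_id}"   # -usbdevice host:0781:5406
```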
<br />
Since the default I440FX chipset emulated by QEMU features a single UHCI controller (USB 1), the {{ic|-usbdevice}} option will try to attach your physical device to it. In some cases this may cause issues with newer devices. A possible solution is to emulate the [http://wiki.qemu.org/Features/Q35 ICH9] chipset, which offers an EHCI controller supporting up to 12 devices, using the option {{ic|1=-machine type=q35}}.<br />
<br />
A less invasive solution is to emulate an EHCI (USB 2) or XHCI (USB 3) controller with the option {{ic|1=-device usb-ehci,id=ehci}} or {{ic|1=-device nec-usb-xhci,id=xhci}} respectively and then attach your physical device to it with the option {{ic|1=-device usb-host,..}} as follows:<br />
<br />
-device usb-host,bus='''controller_id'''.0,vendorid=0x'''vendor_id''',productid=0x'''product_id'''<br />
<br />
You can also add the {{ic|1=...,port=''<n>''}} setting to the previous option to specify to which physical port of the virtual controller you want to attach your device, which is useful when you want to add multiple USB devices to the VM.<br />
<br />
{{Note|If you encounter permission errors when running QEMU, see [[udev#About udev rules]] for information on how to set permissions of the device.}}<br />
<br />
=== USB redirection with SPICE ===<br />
<br />
When using [[#SPICE]] it is possible to redirect USB devices from the client to the virtual machine without needing to specify them in the QEMU command. It is possible to configure the number of USB slots available for redirected devices (the number of slots will determine the maximum number of devices which can be redirected simultaneously). The main advantage of using SPICE for redirection, compared to the previously mentioned {{ic|-usbdevice}} method, is the ability to hot-swap USB devices after the virtual machine has started, without needing to halt it in order to remove USB devices from the redirection or to add new ones. This method of USB redirection also makes it possible to redirect USB devices over the network, from the client to the server. In summary, it is the most flexible method of using USB devices in a QEMU virtual machine.<br />
<br />
One EHCI/UHCI controller must be added per desired USB redirection slot, as well as one SPICE redirection channel per slot. For example, adding the following arguments to the QEMU command you use for starting the virtual machine in SPICE mode will start the virtual machine with three available USB slots for redirection:<br />
<br />
{{bc|<nowiki>-device ich9-usb-ehci1,id=usb \<br />
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,multifunction=on \<br />
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2 \<br />
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev1 \<br />
-device usb-redir,chardev=usbredirchardev1,id=usbredirdev1 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev2 \<br />
-device usb-redir,chardev=usbredirchardev2,id=usbredirdev2 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev3 \<br />
-device usb-redir,chardev=usbredirchardev3,id=usbredirdev3</nowiki>}}<br />
<br />
Both {{ic|spicy}} from {{Pkg|spice-gtk}} (''Input > Select USB Devices for redirection'') and {{ic|remote-viewer}} from {{pkg|virt-viewer}} (''File > USB device selection'') support this feature. Please make sure that you have installed the necessary SPICE Guest Tools on the virtual machine for this functionality to work as expected (see the [[#SPICE]] section for more information).<br />
<br />
{{Warning|Keep in mind that when a USB device is redirected from the client, it will not be usable from the client operating system itself until the redirection is stopped. It is especially important never to redirect the input devices (namely mouse and keyboard), since it will then be difficult to access the SPICE client menus to revert the situation, because the client will not respond to the input devices after they are redirected to the virtual machine.}}<br />
<br />
=== Enabling KSM ===<br />
<br />
Kernel Samepage Merging (KSM) is a feature of the Linux kernel that allows applications to register with the kernel to have their memory pages merged with those of other processes that have also registered. The KSM mechanism allows guest virtual machines to share pages with each other. In an environment where many of the guest operating systems are similar, this can result in significant memory savings.<br />
<br />
{{Note|Although KSM may reduce memory usage, it may increase CPU usage. Also note some security issues may occur, see [[Wikipedia:Kernel same-page merging]].}}<br />
<br />
To enable KSM:<br />
<br />
# echo 1 > /sys/kernel/mm/ksm/run<br />
<br />
To make it permanent, use [[systemd#Temporary files|systemd's temporary files]]:<br />
<br />
{{hc|/etc/tmpfiles.d/ksm.conf|<br />
w /sys/kernel/mm/ksm/run - - - - 1<br />
}}<br />
<br />
If KSM is running, and there are pages to be merged (i.e. at least two similar VMs are running), then {{ic|/sys/kernel/mm/ksm/pages_shared}} should be non-zero. See https://www.kernel.org/doc/Documentation/vm/ksm.txt for more information.<br />
<br />
{{Tip|An easy way to see how well KSM is performing is to simply print the contents of all the files in that directory: {{bc|$ grep . /sys/kernel/mm/ksm/*}}}}<br />
<br />
=== Multi-monitor support ===<br />
The Linux QXL driver supports four heads (virtual screens) by default. This can be changed via the {{ic|1=qxl.heads=N}} kernel parameter.<br />
<br />
The default VGA memory size for QXL devices is 16M (VRAM size is 64M). This is not sufficient if you would like to enable two 1920x1200 monitors since that requires 2 × 1920 × 4 (color depth) × 1200 = 17.6 MiB VGA memory. This can be changed by replacing {{ic|-vga qxl}} by {{ic|<nowiki>-vga none -device qxl-vga,vgamem_mb=32</nowiki>}}. If you ever increase vgamem_mb beyond 64M, then you also have to increase the {{ic|vram_size_mb}} option.<br />
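The 17.6 MiB figure can be reproduced with shell arithmetic (two 1920x1200 heads at 4 bytes per pixel, as in the text above):<br />
<br />
```shell
# VGA memory needed for a given multi-head configuration.
width=1920; height=1200; bytes_per_pixel=4; heads=2
needed=$((heads * width * height * bytes_per_pixel))
echo "$needed bytes"   # 18432000 bytes, i.e. roughly 17.6 MiB
```
<br />
Since 18432000 bytes exceeds the 16M default, {{ic|1=vgamem_mb=32}} is the next reasonable step up.<br />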
<br />
=== Copy and paste ===<br />
<br />
One way to share the clipboard between the host and the guest is to enable the SPICE remote desktop protocol and access the client with a SPICE client.<br />
Follow the steps described in [[#SPICE]]. A guest run this way will support copying and pasting with the host.<br />
<br />
=== Windows-specific notes ===<br />
<br />
QEMU can run any version of Windows from Windows 95 through Windows 10.<br />
<br />
It is possible to run [[Windows PE]] in QEMU.<br />
<br />
==== Fast startup ====<br />
{{Note|An administrator account is required to change power settings.}}<br />
For Windows 8 (or later) guests it is better to disable "Turn on fast startup (recommended)" from the Power Options of the Control Panel as explained in the following [https://www.tenforums.com/tutorials/4189-turn-off-fast-startup-windows-10-a.html forum page], as it causes the guest to hang during every other boot.<br />
<br />
Fast Startup may also need to be disabled for changes to the {{ic|-smp}} option to be properly applied.<br />
<br />
==== Remote Desktop Protocol ====<br />
<br />
If you use a MS Windows guest, you might want to use RDP to connect to your guest VM. If you are using a VLAN or are not in the same network as the guest, use:<br />
<br />
$ qemu-system-x86_64 -nographic -net user,hostfwd=tcp::5555-:3389<br />
<br />
Then connect with either {{Pkg|rdesktop}} or {{Pkg|freerdp}} to the guest. For example:<br />
<br />
$ xfreerdp -g 2048x1152 localhost:5555 -z -x lan<br />
<br />
=== Clone Linux system installed on physical equipment ===<br />
<br />
A Linux system installed on physical hardware can be cloned to run in a QEMU virtual machine. See [https://coffeebirthday.wordpress.com/2018/09/14/clone-linux-system-for-qemu-virtual-machine/ Clone Linux system from hardware for QEMU virtual machine].<br />
<br />
== Troubleshooting ==<br />
<br />
=== Mouse cursor is jittery or erratic ===<br />
<br />
If the cursor jumps around the screen uncontrollably, entering this on the terminal before starting QEMU might help:<br />
<br />
$ export SDL_VIDEO_X11_DGAMOUSE=0<br />
<br />
If this helps, you can add this to your {{ic|~/.bashrc}} file.<br />
<br />
=== No visible cursor ===<br />
<br />
Add {{ic|-show-cursor}} to QEMU's options to see a mouse cursor.<br />
<br />
If that still does not work, make sure you have set your display device appropriately, for example: {{ic|-vga qxl}}.<br />
<br />
=== Two different mouse cursors are visible ===<br />
<br />
Apply the tip [[#Mouse integration]].<br />
<br />
=== Keyboard issues when using VNC ===<br />
When using VNC, you might experience keyboard problems described (in gory details) [https://www.berrange.com/posts/2010/07/04/more-than-you-or-i-ever-wanted-to-know-about-virtual-keyboard-handling/ here]. The solution is ''not'' to use the {{ic|-k}} option on QEMU, and to use {{ic|gvncviewer}} from {{Pkg|gtk-vnc}}. See also [http://www.mail-archive.com/libvir-list@redhat.com/msg13340.html this] message posted on libvirt's mailing list.<br />
<br />
=== Keyboard seems broken or the arrow keys do not work ===<br />
<br />
Should you find that some of your keys do not work or "press" the wrong key (in particular, the arrow keys), you likely need to specify your keyboard layout as an option. The keyboard layouts can be found in {{ic|/usr/share/qemu/keymaps}}.<br />
<br />
$ qemu-system-x86_64 -k ''keymap'' ''disk_image''<br />
<br />
=== Guest display stretches on window resize ===<br />
<br />
To restore default window size, press {{ic|Ctrl+Alt+u}}.<br />
<br />
=== ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy ===<br />
<br />
If an error message like this is printed when starting QEMU with {{ic|-enable-kvm}} option:<br />
<br />
ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy<br />
failed to initialize KVM: Device or resource busy<br />
<br />
that means another [[hypervisor]] is currently running. Running several hypervisors in parallel is not supported.<br />
<br />
=== libgfapi error message ===<br />
<br />
The error message displayed at startup:<br />
<br />
Failed to open module: libgfapi.so.0: cannot open shared object file: No such file or directory<br />
<br />
[[Install]] {{pkg|glusterfs}} or ignore the error message, as GlusterFS is an optional dependency.<br />
<br />
=== Kernel panic on live environments ===<br />
<br />
If you boot a live environment (or, more generally, boot a system), you may encounter this:<br />
<br />
[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown block(0,0)<br />
<br />
or some other boot-hindering failure (e.g. cannot unpack initramfs, cannot start service foo).<br />
Try starting the VM with the {{ic|-m VALUE}} switch and an appropriate amount of RAM; if the RAM is too low, you will probably encounter issues similar to the above.<br />
<br />
=== Windows 7 guest suffers low-quality sound ===<br />
<br />
Using the {{ic|hda}} audio driver for Windows 7 guest may result in low-quality sound. Changing the audio driver to {{ic|ac97}} by passing the {{ic|-soundhw ac97}} arguments to QEMU and installing the AC97 driver from [https://www.realtek.com/en/component/zoo/category/pc-audio-codecs-ac-97-audio-codecs-software Realtek AC'97 Audio Codecs] in the guest may solve the problem. See [https://bugzilla.redhat.com/show_bug.cgi?id=1176761#c16 Red Hat Bugzilla – Bug 1176761] for more information.<br />
<br />
=== Could not access KVM kernel module: Permission denied ===<br />
<br />
If you encounter the following error:<br />
<br />
libvirtError: internal error: process exited while connecting to monitor: Could not access KVM kernel module: Permission denied failed to initialize KVM: Permission denied<br />
<br />
Since systemd 234, the {{ic|kvm}} group is assigned a dynamic ID (see [https://bugs.archlinux.org/task/54943 FS#54943]). To avoid this error, edit the file {{ic|/etc/libvirt/qemu.conf}} and change the line:<br />
<br />
group = "78"<br />
<br />
to<br />
<br />
group = "kvm"<br />
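A one-line {{ic|sed}} can perform the edit. The sketch below runs against a scratch copy so it can be tried safely; apply the same expression to {{ic|/etc/libvirt/qemu.conf}} as root once you have verified the result:<br />
<br />
```shell
# Replace the numeric kvm group with the group name in a qemu.conf-style file.
conf=$(mktemp)
printf '%s\n' '# some comment' 'group = "78"' > "$conf"
sed -i 's/^group = "78"$/group = "kvm"/' "$conf"
result=$(grep '^group' "$conf")
echo "$result"   # group = "kvm"
rm -f "$conf"
```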
<br />
=== "System Thread Exception Not Handled" when booting a Windows VM ===<br />
<br />
Windows 8 or Windows 10 guests may raise a generic compatibility exception at boot, namely "System Thread Exception Not Handled", which tends to be caused by legacy drivers acting strangely on real machines. On KVM machines this issue can generally be solved by setting the CPU model to {{ic|core2duo}}.<br />
<br />
=== Certain Windows games/applications crashing/causing a bluescreen ===<br />
<br />
Occasionally, applications running in the VM may crash unexpectedly, whereas they'd run normally on a physical machine. If, while running {{ic|dmesg -wH}}, you encounter an error mentioning {{ic|MSR}}, the reason for those crashes is that KVM injects a [[wikipedia:General protection fault|General protection fault]] (GPF) when the guest tries to access unsupported [[wikipedia:Model-specific register|Model-specific registers]] (MSRs) - this often results in guest applications/OS crashing. A number of those issues can be solved by passing the {{ic|1=ignore_msrs=1}} option to the KVM module, which will ignore unimplemented MSRs.<br />
<br />
{{hc|/etc/modprobe.d/kvm.conf|2=<br />
...<br />
options kvm ignore_msrs=1<br />
...<br />
}}<br />
<br />
Cases where adding this option might help:<br />
<br />
* GeForce Experience complaining about an unsupported CPU being present.<br />
* StarCraft 2 and L.A. Noire reliably blue-screening Windows 10 with {{ic|KMODE_EXCEPTION_NOT_HANDLED}}. The blue screen information does not identify a driver file in these cases.<br />
<br />
{{Warning|While this is normally safe and some applications might not work without this, silently ignoring unknown MSR accesses could potentially break other software within the VM or other VMs.}}<br />
<br />
=== Applications in the VM experience long delays or take a long time to start ===<br />
<br />
This may be caused by insufficient available entropy in the VM. Consider allowing the guest to access the host's entropy pool by adding a [https://wiki.qemu.org/Features/VirtIORNG VirtIO RNG device] to the VM, or by installing an entropy generating daemon such as [[Haveged]].<br />
<br />
Anecdotally, OpenSSH takes a while to start accepting connections under insufficient entropy, without the logs revealing why.<br />
<br />
=== High interrupt latency and microstuttering ===<br />
<br />
This problem manifests itself as small pauses (stutters) and is particularly noticeable in graphics-intensive applications, such as games. One of the causes is CPU power saving features, which are controlled by [[CPU frequency scaling]]. Change this to {{ic|performance}} for all processor cores.<br />
<br />
== See also ==<br />
<br />
* [http://qemu.org Official QEMU website]<br />
* [http://www.linux-kvm.org Official KVM website]<br />
* [http://qemu.weilnetz.de/qemu-doc.html QEMU Emulator User Documentation]<br />
* [https://en.wikibooks.org/wiki/QEMU QEMU Wikibook]<br />
* [http://alien.slackbook.org/dokuwiki/doku.php?id=slackware:qemu Hardware virtualization with QEMU] by AlienBOB (last updated in 2008)<br />
* [http://blog.falconindy.com/articles/build-a-virtual-army.html Building a Virtual Army] by Falconindy<br />
* [http://git.qemu.org/?p=qemu.git;a=tree;f=docs Latest docs]<br />
* [http://qemu.weilnetz.de/ QEMU on Windows]<br />
* [[wikipedia:Qemu|Wikipedia]]<br />
* [[debian:QEMU|Debian Wiki - QEMU]]<br />
* [https://people.gnome.org/~markmc/qemu-networking.html QEMU Networking on gnome.org]<br />
* [http://bsdwiki.reedmedia.net/wiki/networking_qemu_virtual_bsd_systems.html Networking QEMU Virtual BSD Systems]<br />
* [https://www.gnu.org/software/hurd/hurd/running/qemu.html QEMU on gnu.org]<br />
* [https://wiki.freebsd.org/qemu QEMU on FreeBSD as host]<br />
* [https://wiki.mikejung.biz/KVM_/_Xen KVM/QEMU Virtio Tuning and SSD VM Optimization Guide]<br />
* [https://doc.opensuse.org/documentation/leap/virtualization/html/book.virt/part.virt.qemu.html Managing Virtual Machines with QEMU - OpenSUSE documentation]<br />
* [https://www.ibm.com/support/knowledgecenter/en/linuxonibm/liaat/liaatkvm.htm KVM on IBM Knowledge Center]</div>Gimahttps://wiki.archlinux.org/index.php?title=QEMU&diff=573326QEMU2019-05-16T08:10:48Z<p>Gima: /* Troubleshooting */ Add possible solution to Microstuttering</p>
<hr />
<div>[[Category:Emulation]]<br />
[[Category:Hypervisors]]<br />
[[de:Qemu]]<br />
[[es:QEMU]]<br />
[[fr:Qemu]]<br />
[[ja:QEMU]]<br />
[[ru:QEMU]]<br />
[[zh-hans:QEMU]]<br />
[[zh-hant:QEMU]]<br />
{{Related articles start}}<br />
{{Related|:Category:Hypervisors}}<br />
{{Related|Libvirt}}<br />
{{Related|QEMU/Guest graphics acceleration}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related articles end}}<br />
<br />
According to the [http://wiki.qemu.org/Main_Page QEMU about page], "QEMU is a generic and open source machine emulator and virtualizer."<br />
<br />
When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your x86 PC). By using dynamic translation, it achieves very good performance.<br />
<br />
QEMU can use other hypervisors like [[Xen]] or [[KVM]] to use CPU extensions ([[Wikipedia:Hardware-assisted virtualization|HVM]]) for virtualization. When used as a virtualizer, QEMU achieves near-native performance by executing the guest code directly on the host CPU.<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|qemu}} package (or {{Pkg|qemu-headless}} for the version without GUI) and below optional packages for your needs:<br />
<br />
* {{Pkg|qemu-arch-extra}} - extra architectures support<br />
* {{Pkg|qemu-block-gluster}} - [[Glusterfs]] block support<br />
* {{Pkg|qemu-block-iscsi}} - [[iSCSI]] block support<br />
* {{Pkg|qemu-block-rbd}} - RBD block support <br />
* {{Pkg|samba}} - [[Samba|SMB/CIFS]] server support<br />
<br />
== Graphical front-ends for QEMU ==<br />
<br />
Unlike other virtualization programs such as [[VirtualBox]] and [[VMware]], QEMU does not provide a GUI to manage virtual machines (other than the window that appears when running a virtual machine), nor does it provide a way to create persistent virtual machines with saved settings. All parameters to run a virtual machine must be specified on the command line at every launch, unless you have created a custom script to start your virtual machine(s).<br />
<br />
[[Libvirt]] provides a convenient way to manage QEMU virtual machines. See [[Libvirt#Client|list of libvirt clients]] for available front-ends.<br />
<br />
Other GUI front-ends for QEMU:<br />
<br />
* {{App|AQEMU|QEMU GUI written in Qt5.|https://github.com/tobimensch/aqemu|{{AUR|aqemu}}}}<br />
* {{App|QtEmu|Graphical user interface for QEMU written in Qt4.|https://qtemu.org/|{{AUR|qtemu}}}}<br />
<br />
== Creating new virtualized system ==<br />
<br />
=== Creating a hard disk image ===<br />
{{Accuracy|If I get the man page right the raw format only allocates the full size if the filesystem does not support "holes" or it is <br />
explicitly told to preallocate. See man qemu-img in section Notes.}} <br />
{{Tip|See the [https://en.wikibooks.org/wiki/QEMU/Images QEMU Wikibook] for more information on QEMU images.}}<br />
<br />
To run QEMU you will need a hard disk image, unless you are booting a live system from CD-ROM or the network (and not doing so to install an operating system to a hard disk image). A hard disk image is a file which stores the contents of the emulated hard disk.<br />
<br />
A hard disk image can be ''raw'', so that it is literally byte-by-byte the same as what the guest sees, and will always use the full capacity of the guest hard drive on the host. This method provides the least I/O overhead, but can waste a lot of space, as unused space on the guest cannot be used on the host.<br />
<br />
Alternatively, the hard disk image can be in a format such as ''qcow2'' which only allocates space to the image file when the guest operating system actually writes to those sectors on its virtual hard disk. The image appears as the full size to the guest operating system, even though it may take up only a very small amount of space on the host system. This image format also supports QEMU snapshotting functionality (see [[#Creating and managing snapshots via the monitor console]] for details). However, using this format instead of ''raw'' will likely affect performance.<br />
<br />
QEMU provides the {{ic|qemu-img}} command to create hard disk images. For example to create a 4 GB image in the ''raw'' format:<br />
<br />
$ qemu-img create -f raw ''image_file'' 4G<br />
<br />
You may use {{ic|-f qcow2}} to create a ''qcow2'' disk instead.<br />
<br />
{{Note|You can also simply create a ''raw'' image by creating a file of the needed size using {{ic|dd}} or {{ic|fallocate}}.}}<br />
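As a sketch of that note, a sparse ''raw'' image made with plain coreutils reports its full size but occupies almost no space on the host (on file systems that support holes); the temporary file here stands in for a real image path:<br />
<br />
```shell
# Create a sparse 4G file and compare apparent size vs. actual disk usage.
img=$(mktemp)
truncate -s 4G "$img"
apparent=$(stat -c %s "$img")        # apparent size in bytes
actual_kib=$(du -k "$img" | cut -f1) # blocks actually allocated, in KiB
echo "apparent=$apparent actual=${actual_kib}KiB"
rm -f "$img"
```
<br />
Note that copying such a file with tools unaware of sparse files can expand it to its full apparent size.<br />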
<br />
{{Warning|If you store the hard disk images on a [[Btrfs]] file system, you should consider disabling [[Btrfs#Copy-on-Write (CoW)|Copy-on-Write]] for the directory before creating any images.}}<br />
<br />
==== Overlay storage images ====<br />
<br />
You can create a storage image once (the 'backing' image) and have QEMU keep mutations to this image in an overlay image. This allows you to revert to a previous state of this storage image. You could revert by creating a new overlay image at the time you wish to revert, based on the original backing image.<br />
<br />
To create an overlay image, issue a command like:<br />
<br />
$ qemu-img create -o backing_file=''img1.raw'',backing_fmt=''raw'' -f ''qcow2'' ''img1.cow''<br />
<br />
After that you can run your QEMU VM as usual (see [[#Running virtualized system]]):<br />
<br />
$ qemu-system-x86_64 ''img1.cow''<br />
<br />
The backing image will then be left intact and mutations to this storage will be recorded in the overlay image file.<br />
<br />
When the path to the backing image changes, repair is required.<br />
<br />
{{Warning|The backing image's absolute filesystem path is stored in the (binary) overlay image file. Changing the backing image's path requires some effort.}}<br />
<br />
Make sure that the original backing image's path still leads to this image. If necessary, make a symbolic link at the original path to the new path. Then issue a command like:<br />
<br />
$ qemu-img rebase -b ''/new/img1.raw'' ''/new/img1.cow''<br />
<br />
At your discretion, you may alternatively perform an 'unsafe' rebase where the old path to the backing image is not checked:<br />
<br />
$ qemu-img rebase -u -b ''/new/img1.raw'' ''/new/img1.cow''<br />
<br />
==== Resizing an image ====<br />
<br />
{{Warning|Resizing an image containing an NTFS boot file system could make the operating system installed on it unbootable. It is recommended to create a backup first.}}<br />
<br />
The {{ic|qemu-img}} executable has the {{ic|resize}} option, which enables easy resizing of a hard drive image. It works for both ''raw'' and ''qcow2''. For example, to increase image space by 10 GB, run:<br />
<br />
$ qemu-img resize ''disk_image'' +10G<br />
<br />
After enlarging the disk image, you must use file system and partitioning tools inside the virtual machine to actually begin using the new space. When shrinking a disk image, you must '''first reduce the allocated file systems and partition sizes''' using the file system and partitioning tools inside the virtual machine and then shrink the disk image accordingly, otherwise shrinking the disk image will result in data loss! For a Windows guest, open the "create and format hard disk partitions" control panel.<br />
<br />
==== Converting an image ====<br />
<br />
You can convert an image to other formats using {{ic|qemu-img convert}}. This example shows how to convert a ''raw'' image to ''qcow2'':<br />
<br />
$ qemu-img convert -f raw -O qcow2 ''input''.img ''output''.qcow2<br />
<br />
This will not remove the original input file.<br />
<br />
=== Preparing the installation media ===<br />
<br />
To install an operating system into your disk image, you need the installation medium (e.g. optical disk, USB-drive, or ISO image) for the operating system. The installation medium should not be mounted because QEMU accesses the media directly.<br />
<br />
{{Tip|If using an optical disk, it is a good idea to first dump the media to a file because this both improves performance and does not require you to have direct access to the devices (that is, you can run QEMU as a regular user without having to change access permissions on the media's device file). For example, if the CD-ROM device node is named {{ic|/dev/cdrom}}, you can dump it to a file with the command: {{bc|1=$ dd if=/dev/cdrom of=''cd_image.iso'' bs=4k}}}}<br />
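The same {{ic|dd}} pattern can be exercised on an ordinary file instead of {{ic|/dev/cdrom}} to confirm that the copy is byte-identical; the file contents below are obviously a stand-in for real media:<br />
<br />
```shell
# Dump a "device" (here a plain file) to an image and verify the copy.
src=$(mktemp); dst=$(mktemp)
printf 'fake iso contents' > "$src"
dd if="$src" of="$dst" bs=4k status=none
cmp -s "$src" "$dst" && match=yes || match=no
rm -f "$src" "$dst"
echo "$match"   # yes
```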
<br />
=== Installing the operating system===<br />
<br />
This is the first time you will need to start the emulator. To install the operating system on the disk image, you must attach both the disk image and the installation media to the virtual machine, and have it boot from the installation media.<br />
<br />
For example, to install from a bootable ISO file as CD-ROM together with a raw disk image:<br />
<br />
$ qemu-system-x86_64 -cdrom ''iso_image'' -boot order=d -drive file=''disk_image'',format=raw<br />
<br />
See {{man|1|qemu}} for more information about loading other media types (such as floppy, disk images or physical drives) and [[#Running virtualized system]] for other useful options.<br />
<br />
After the operating system has finished installing, the QEMU image can be booted directly (see [[#Running virtualized system]]).<br />
<br />
{{Note|By default only 128 MB of memory is assigned to the machine. The amount of memory can be adjusted with the {{ic|-m}} switch, for example {{ic|-m 512M}} or {{ic|-m 2G}}.}}<br />
<br />
{{Tip|<br />
* Instead of specifying {{ic|1=-boot order=x}}, some users may feel more comfortable using a boot menu: {{ic|1=-boot menu=on}}, at least during configuration and experimentation.<br />
* If you need to replace floppies or CDs as part of the installation process, you can use the QEMU machine monitor (press {{ic|Ctrl+Alt+2}} in the virtual machine's window) to remove and attach storage devices to a virtual machine. Type {{ic|info block}} to see the block devices, and use the {{ic|change}} command to swap out a device. Press {{ic|Ctrl+Alt+1}} to go back to the virtual machine.}}<br />
<br />
== Running virtualized system ==<br />
<br />
{{ic|qemu-system-*}} binaries (for example {{ic|qemu-system-i386}} or {{ic|qemu-system-x86_64}}, depending on guest's architecture) are used to run the virtualized guest. The usage is:<br />
<br />
$ qemu-system-x86_64 ''options'' ''disk_image''<br />
<br />
Options are the same for all {{ic|qemu-system-*}} binaries, see {{man|1|qemu}} for documentation of all options.<br />
<br />
By default, QEMU will show the virtual machine's video output in a window. One thing to keep in mind: when you click inside the QEMU window, the mouse pointer is grabbed. To release it, press {{ic|Ctrl+Alt+g}}.<br />
<br />
{{Warning|QEMU should never be run as root. If you must launch it in a script as root, you should use the {{ic|-runas}} option to make QEMU drop root privileges.}}<br />
<br />
=== Enabling KVM ===<br />
<br />
KVM must be supported by your processor and kernel, and necessary [[kernel modules]] must be loaded. See [[KVM]] for more information.<br />
<br />
To start QEMU in KVM mode, append {{ic|-enable-kvm}} to the additional start options. To check if KVM is enabled for a running VM, enter the [[#QEMU monitor]] using {{ic|Ctrl+Alt+Shift+2}}, and type {{ic|info kvm}}.<br />
<br />
{{Note|<br />
* The argument {{ic|1=accel=kvm}} of the {{ic|-machine}} option is equivalent to the {{ic|-enable-kvm}} option.<br />
* If you start your VM with a GUI tool and experience very bad performance, you should check for proper KVM support, as QEMU may be falling back to software emulation.<br />
* KVM needs to be enabled in order to start Windows 7 and Windows 8 properly without a ''blue screen''.<br />
}}<br />
<br />
=== Enabling IOMMU (Intel VT-d/AMD-Vi) support ===<br />
<br />
First enable IOMMU, see [[PCI passthrough via OVMF#Setting up IOMMU]].<br />
<br />
Add {{ic|-device intel-iommu}} to create the IOMMU device:<br />
<br />
$ qemu-system-x86_64 '''-enable-kvm -machine q35,accel=kvm -device intel-iommu''' -cpu host ..<br />
<br />
{{Note|<br />
On Intel CPU-based systems, creating an IOMMU device in a QEMU guest with {{ic|-device intel-iommu}} will disable PCI passthrough with an error like: {{bc|Device at bus pcie.0 addr 09.0 requires iommu notifier which is currently not supported by intel-iommu emulation}} While adding the kernel parameter {{ic|1=intel_iommu=on}} is still needed for remapping IO (e.g. [[PCI passthrough via OVMF#Isolating the GPU|PCI passthrough with vfio-pci]]), {{ic|-device intel-iommu}} should not be set if PCI passthrough is required.<br />
}}<br />
<br />
== Moving data between host and guest OS ==<br />
<br />
=== Network ===<br />
<br />
Data can be shared between the host and guest OS using any network protocol that can transfer files, such as [[NFS]], [[SMB]], [[Wikipedia:Network Block Device|NBD]], HTTP, [[Very Secure FTP Daemon|FTP]], or [[SSH]], provided that you have set up the network appropriately and enabled the appropriate services.<br />
<br />
The default user-mode networking allows the guest to access the host OS at the IP address 10.0.2.2. Any servers that you are running on your host OS, such as a SSH server or SMB server, will be accessible at this IP address. So on the guests, you can mount directories exported on the host via [[SMB]] or [[NFS]], or you can access the host's HTTP server, etc.<br />
It will not be possible for the host OS to access servers running on the guest OS, but this can be done with other network configurations (see [[#Tap networking with QEMU]]).<br />
<br />
=== QEMU's port forwarding ===<br />
<br />
QEMU can forward ports from the host to the guest to enable e.g. connecting from the host to a SSH-server running on the guest.<br />
<br />
For example, to bind port 10022 on the host with port 22 (SSH) on the guest, start QEMU with a command like:<br />
<br />
$ qemu-system-x86_64 ''disk_image'' -net nic -net user,hostfwd=''tcp::10022-:22''<br />
<br />
Make sure sshd is running on the guest and connect with:<br />
<br />
$ ssh ''guest-user''@localhost -p10022<br />
<br />
=== QEMU's built-in SMB server ===<br />
<br />
QEMU's documentation says it has a "built-in" SMB server, but actually it just starts up [[Samba]] with an automatically generated {{ic|smb.conf}} file located at {{ic|/tmp/qemu-smb.''pid''-0/smb.conf}} and makes it accessible to the guest at a different IP address (10.0.2.4 by default). This only works for user networking, and this is not necessarily very useful since the guest can also access the normal [[Samba]] service on the host if you have set up shares on it.<br />
<br />
To enable this feature, start QEMU with a command like:<br />
<br />
$ qemu-system-x86_64 ''disk_image'' -net nic -net user,smb=''shared_dir_path''<br />
<br />
where {{ic|''shared_dir_path''}} is a directory that you want to share between the guest and host.<br />
<br />
Then, in the guest, you will be able to access the shared directory on the host 10.0.2.4 with the share name "qemu". For example, in Windows Explorer you would go to {{ic|\\10.0.2.4\qemu}}.<br />
<br />
{{Note|<br />
* If you specify the sharing option multiple times, like {{ic|1=-net user,smb=''shared_dir_path1'' -net user,smb=''shared_dir_path2''}} or {{ic|1=-net user,smb=''shared_dir_path1'',smb=''shared_dir_path2''}}, only the last one defined will be shared.<br />
* If you cannot access the shared folder and the guest system is Windows, check that the [http://ecross.mvps.org/howto/enable-netbios-over-tcp-ip-with-windows.htm NetBIOS protocol is enabled] and that a firewall does not block [http://technet.microsoft.com/en-us/library/cc940063.aspx ports] used by the NetBIOS protocol.<br />
* If you cannot access the shared folder and the guest system is Windows 10 Enterprise or Education or Windows Server 2016, [https://support.microsoft.com/en-us/help/4046019 enable guest access].<br />
}}<br />
<br />
=== Mounting a partition inside a raw disk image ===<br />
<br />
When the virtual machine is not running, it is possible to mount partitions that are inside a raw disk image file by setting them up as loopback devices. This does not work with disk images in special formats, such as qcow2, although those can be mounted using {{ic|qemu-nbd}}.<br />
<br />
{{Warning|You must make sure to unmount the partitions before running the virtual machine again. Otherwise, data corruption is very likely to occur.}}<br />
<br />
==== With manually specifying byte offset ====<br />
<br />
One way to mount a disk image partition is to mount the disk image at a certain offset using a command like the following:<br />
<br />
# mount -o loop,offset=32256 ''disk_image'' ''mountpoint''<br />
<br />
The {{ic|1=offset=32256}} option is actually passed to the {{ic|losetup}} program to set up a loopback device that starts at byte offset 32256 of the file and continues to the end. This loopback device is then mounted. You may also use the {{ic|sizelimit}} option to specify the exact size of the partition, but this is usually unnecessary.<br />
<br />
Depending on your disk image, the needed partition may not start at offset 32256. Run {{ic|fdisk -l ''disk_image''}} to see the partitions in the image. fdisk gives the start and end offsets in 512-byte sectors, so multiply by 512 to get the correct offset to pass to {{ic|mount}}.<br />
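As a quick sketch of this arithmetic (the start sector {{ic|2048}} below is only an example value as reported by {{ic|fdisk -l}}, not a universal constant):<br />

```shell
# Example start sector as reported by `fdisk -l` for the target partition.
start_sector=2048
sector_size=512

# Byte offset to pass to mount's offset= option.
offset=$((start_sector * sector_size))
echo "$offset"
```

The resulting value (here 1048576) is what you would pass as {{ic|1=offset=}} to {{ic|mount}}.<br />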
<br />
==== With loop module autodetecting partitions ====<br />
<br />
The Linux loop driver actually supports partitions in loopback devices, but it is disabled by default. To enable it, do the following:<br />
<br />
* Get rid of all your loopback devices (unmount all mounted images, etc.).<br />
* [[Kernel_modules#Manual_module_handling|Unload]] the {{ic|loop}} kernel module, and load it with the {{ic|1=max_part=15}} parameter set. Additionally, the maximum number of loop devices can be controlled with the {{ic|max_loop}} parameter.<br />
<br />
{{Tip|You can put an entry in {{ic|/etc/modprobe.d}} to load the loop module with {{ic|1=max_part=15}} every time, or you can put {{ic|1=loop.max_part=15}} on the kernel command-line, depending on whether you have the {{ic|loop.ko}} module built into your kernel or not.}}<br />
<br />
Set up your image as a loopback device:<br />
<br />
# losetup -f -P ''disk_image''<br />
<br />
Then, if the device created was {{ic|/dev/loop0}}, additional devices {{ic|/dev/loop0pX}} will have been automatically created, where X is the number of the partition. These partition loopback devices can be mounted directly. For example:<br />
<br />
# mount /dev/loop0p1 ''mountpoint''<br />
<br />
To mount the disk image with ''udisksctl'', see [[Udisks#Mount loop devices]].<br />
<br />
==== With kpartx ====<br />
<br />
'''kpartx''' from the {{Pkg|multipath-tools}} package can read a partition table on a device and create a new device for each partition. For example:<br />
# kpartx -a ''disk_image''<br />
<br />
This will set up the loopback device and create the necessary partition devices in {{ic|/dev/mapper/}}.<br />
<br />
=== Mounting a partition inside a qcow2 image ===<br />
<br />
You may mount a partition inside a qcow2 image using {{ic|qemu-nbd}}. See [http://en.wikibooks.org/wiki/QEMU/Images#Mounting_an_image_on_the_host Wikibooks].<br />
<br />
=== Using any real partition as the single primary partition of a hard disk image ===<br />
<br />
Sometimes, you may wish to use one of your system partitions from within QEMU. Using a raw partition for a virtual machine will improve performance, as the read and write operations do not go through the file system layer on the physical host. Such a partition also provides a way to share data between the host and guest.<br />
<br />
In Arch Linux, device files for raw partitions are, by default, owned by ''root'' and the ''disk'' group. If you would like to have a non-root user be able to read and write to a raw partition, you need to change the owner of the partition's device file to that user.<br />
<br />
{{Warning|<br />
* Although it is possible, it is not recommended to allow virtual machines to alter critical data on the host system, such as the root partition.<br />
* You must not mount a file system on a partition read-write on both the host and the guest at the same time. Otherwise, data corruption will result.<br />
}}<br />
<br />
After doing so, you can attach the partition to a QEMU virtual machine as a virtual disk.<br />
<br />
However, things are a little more complicated if you want to have the ''entire'' virtual machine contained in a partition. In that case, there would be no disk image file to actually boot the virtual machine, since you cannot install a bootloader to a partition that is itself formatted as a file system and not as a partitioned device with an MBR. Such a virtual machine can be booted either by specifying the [[kernel]] and [[initrd]] manually, or by simulating a disk with an MBR by using linear [[RAID]].<br />
<br />
==== By specifying kernel and initrd manually ====<br />
<br />
QEMU supports loading [[Kernels|Linux kernels]] and [[initramfs|init ramdisks]] directly, thereby circumventing bootloaders such as [[GRUB]]. The virtual machine can then be launched with the physical partition containing the root file system as the virtual disk, which will not appear to be partitioned. This is done by issuing a command similar to the following:<br />
<br />
{{Note|In this example, it is the '''host's''' images that are being used, not the guest's. If you wish to use the guest's images, either mount {{ic|/dev/sda3}} read-only (to protect the file system from the host) and specify the {{ic|/full/path/to/images}} or use some kexec hackery in the guest to reload the guest's kernel (extends boot time). }}<br />
<br />
$ qemu-system-x86_64 -kernel /boot/vmlinuz-linux -initrd /boot/initramfs-linux.img -append root=/dev/sda /dev/sda3<br />
<br />
In the above example, the physical partition being used for the guest's root file system is {{ic|/dev/sda3}} on the host, but it shows up as {{ic|/dev/sda}} on the guest.<br />
<br />
You may, of course, specify any kernel and initrd that you want, and not just the ones that come with Arch Linux.<br />
<br />
When there are multiple [[kernel parameters]] to be passed to the {{ic|-append}} option, they need to be quoted using single or double quotes. For example:<br />
<br />
... -append 'root=/dev/sda1 console=ttyS0'<br />
<br />
==== Simulate virtual disk with MBR using linear RAID ====<br />
<br />
A more complicated way to have a virtual machine use a physical partition, while keeping that partition formatted as a file system rather than letting the guest partition it as if it were a disk, is to simulate an MBR for it so that it can boot using a bootloader such as GRUB.<br />
<br />
You can do this using software [[RAID]] in linear mode (you need the {{ic|linear.ko}} kernel driver) and a loopback device: the trick is to dynamically prepend a master boot record (MBR) to the real partition you wish to embed in a QEMU raw disk image.<br />
<br />
Suppose you have a plain, unmounted {{ic|/dev/hdaN}} partition with some file system on it you wish to make part of a QEMU disk image. First, you create some small file to hold the MBR:<br />
<br />
$ dd if=/dev/zero of=''/path/to/mbr'' count=32<br />
<br />
Here, a 16 KB (32 * 512 bytes) file is created. It is important not to make it too small (even though the MBR only needs a single 512-byte block), since the smaller it is, the smaller the chunk size of the software RAID device will have to be, which could have an impact on performance. Then, set up a loopback device for the MBR file:<br />
<br />
# losetup -f ''/path/to/mbr''<br />
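As a sanity check of the size arithmetic, the following sketch (using a temporary file instead of {{ic|''/path/to/mbr''}}) creates such a file and confirms that 32 blocks of {{ic|dd}}'s default 512-byte block size yield 16384 bytes:<br />

```shell
# Create a 32-block file; dd's default block size is 512 bytes.
mbr=$(mktemp)
dd if=/dev/zero of="$mbr" count=32 2>/dev/null

# 32 * 512 = 16384 bytes.
size=$(stat -c %s "$mbr")
echo "$size"
rm -f "$mbr"
```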
<br />
Let us assume the resulting device is {{ic|/dev/loop0}} (that is, no other loopback devices were already in use). The next step is to create the "merged" MBR + {{ic|/dev/hdaN}} disk image using software RAID:<br />
<br />
# modprobe linear<br />
# mdadm --build --verbose /dev/md0 --chunk=16 --level=linear --raid-devices=2 /dev/loop0 /dev/hda''N''<br />
<br />
The resulting {{ic|/dev/md0}} is what you will use as a QEMU raw disk image (do not forget to set the permissions so that the emulator can access it). The last (and somewhat tricky) step is to set the disk configuration (disk geometry and partitions table) so that the primary partition start point in the MBR matches the one of {{ic|/dev/hda''N''}} inside {{ic|/dev/md0}} (an offset of exactly 16 * 512 = 16384 bytes in this example). Do this using {{ic|fdisk}} on the host machine, not in the emulator: the default raw disk detection routine from QEMU often results in non-kilobyte-roundable offsets (such as 31.5 KB, as in the previous section) that cannot be managed by the software RAID code. Hence, from the host:<br />
<br />
# fdisk /dev/md0<br />
<br />
Press {{ic|X}} to enter the expert menu. Set the number of 's'ectors per track so that the size of one cylinder matches the size of your MBR file. For two heads and a sector size of 512, the number of sectors per track should be 16, so we get cylinders of size 2x16x512=16k.<br />
<br />
Now, press {{ic|R}} to return to the main menu.<br />
<br />
Press {{ic|P}} and check that the cylinder size is now 16k.<br />
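The expected cylinder size can be double-checked with shell arithmetic, using the example geometry above (2 heads, 16 sectors per track, 512-byte sectors):<br />

```shell
heads=2
sectors_per_track=16
sector_size=512

# One cylinder must match the 16 KB MBR file created earlier.
cylinder_size=$((heads * sectors_per_track * sector_size))
echo "$cylinder_size"
```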
<br />
Now, create a single primary partition corresponding to {{ic|/dev/hda''N''}}. It should start at cylinder 2 and end at the end of the disk (note that the number of cylinders now differs from what it was when you entered fdisk).<br />
<br />
Finally, 'w'rite the result to the file: you are done. You now have a partition you can mount directly from your host, as well as part of a QEMU disk image:<br />
<br />
$ qemu-system-x86_64 -hdc /dev/md0 ''[...]''<br />
<br />
You can, of course, safely set any bootloader on this disk image using QEMU, provided the original {{ic|/dev/hda''N''}} partition contains the necessary tools.<br />
<br />
===== Alternative: use nbd-server =====<br />
Instead of linear RAID, you may use {{ic|nbd-server}} (from the {{pkg|nbd}} package) to create an MBR wrapper for QEMU.<br />
<br />
Assuming you have already set up your MBR wrapper file like above, rename it to {{ic|wrapper.img.0}}. Then create a symbolic link named {{ic|wrapper.img.1}} in the same directory, pointing to your partition. Then put the following script in the same directory:<br />
<br />
#!/bin/sh<br />
dir="$(realpath "$(dirname "$0")")"<br />
cat >wrapper.conf <<EOF<br />
[generic]<br />
allowlist = true<br />
listenaddr = 127.713705<br />
port = 10809<br />
<br />
[wrap]<br />
exportname = $dir/wrapper.img<br />
multifile = true<br />
EOF<br />
<br />
nbd-server \<br />
-C wrapper.conf \<br />
-p wrapper.pid \<br />
"$@"<br />
<br />
The {{ic|.0}} and {{ic|.1}} suffixes are essential; the rest can be changed. After running the above script (which you may need to do as root to make sure nbd-server is able to access the partition), you can launch QEMU with:<br />
<br />
qemu-system-x86_64 -drive file=nbd:127.713705:10809:exportname=wrap ''[...]''<br />
<br />
== Networking ==<br />
<br />
{{Style|Network topologies (sections [[#Host-only networking]], [[#Internal networking]] and info spread out across other sections) should not be described alongside the various virtual interfaces implementations, such as [[#User-mode networking]], [[#Tap networking with QEMU]], [[#Networking with VDE2]].}}<br />
<br />
The performance of virtual networking should be better with tap devices and bridges than with user-mode networking or vde because tap devices and bridges are implemented in-kernel.<br />
<br />
In addition, networking performance can be improved by assigning virtual machines a [http://wiki.libvirt.org/page/Virtio virtio] network device rather than the default emulation of an e1000 NIC. See [[#Installing virtio drivers]] for more information.<br />
<br />
=== Link-level address caveat ===<br />
<br />
If you give QEMU the {{ic|-net nic}} argument, it will, by default, assign the virtual machine a network interface with the link-level address {{ic|52:54:00:12:34:56}}. However, when using bridged networking with multiple virtual machines, it is essential that each virtual machine has a unique link-level (MAC) address on the virtual machine side of the tap device. Otherwise, the bridge will not work correctly, because it will receive packets from multiple sources that have the same link-level address. This problem occurs even if the tap devices themselves have unique link-level addresses because the source link-level address is not rewritten as packets pass through the tap device.<br />
<br />
Make sure that each virtual machine has a unique link-level address, which should always start with {{ic|52:54:}}. Use the following option, replacing each ''X'' with an arbitrary hexadecimal digit:<br />
<br />
$ qemu-system-x86_64 -net nic,macaddr=52:54:''XX:XX:XX:XX'' -net vde ''disk_image''<br />
<br />
Generating unique link-level addresses can be done in several ways:<br />
<br />
<ol><br />
<li>Manually specify a unique link-level address for each NIC. The benefit is that the DHCP server will assign the same IP address each time the virtual machine is run, but this is impractical for a large number of virtual machines.<br />
</li><br />
<li>Generate a random link-level address each time the virtual machine is run. Practically zero probability of collisions, but the downside is that the DHCP server will assign a different IP address each time. You can use the following command in a script to generate a random link-level address in a {{ic|macaddr}} variable:<br />
<br />
{{bc|1=<br />
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))<br />
qemu-system-x86_64 -net nic,macaddr="$macaddr" -net vde ''disk_image''<br />
}}<br />
<br />
</li><br />
<li>Use the following script {{ic|qemu-mac-hasher.py}} to generate the link-level address from the virtual machine name using a hashing function. Given that the names of virtual machines are unique, this method combines the benefits of the aforementioned methods: it generates the same link-level address each time the script is run, yet it preserves the practically zero probability of collisions.<br />
<br />
{{hc|qemu-mac-hasher.py|<nowiki><br />
#!/usr/bin/env python<br />
<br />
import sys<br />
import zlib<br />
<br />
if len(sys.argv) != 2:<br />
print("usage: %s <VM Name>" % sys.argv[0])<br />
sys.exit(1)<br />
<br />
crc = zlib.crc32(sys.argv[1].encode("utf-8")) & 0xffffffff<br />
crc = format(crc, "08x")  # zero-pad to exactly 8 hex digits so the tuple below always unpacks<br />
print("52:54:%s%s:%s%s:%s%s:%s%s" % tuple(crc))<br />
</nowiki>}}<br />
<br />
In a script, you can use for example:<br />
<br />
vm_name="''VM Name''"<br />
qemu-system-x86_64 -name "$vm_name" -net nic,macaddr=$(qemu-mac-hasher.py "$vm_name") -net vde ''disk_image''<br />
</li><br />
</ol><br />
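Whichever method is chosen, the resulting address can be sanity-checked before launching QEMU. This sketch (plain bash, no QEMU involved) generates a random address as in method 2 and verifies the {{ic|52:54:}} prefix and overall format:<br />

```shell
# Generate a random MAC with the QEMU-style 52:54: prefix (method 2 above).
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" \
    $(( RANDOM & 0xff )) $(( RANDOM & 0xff )) $(( RANDOM & 0xff )) $(( RANDOM & 0xff ))

# Verify the format: 52:54: followed by four two-digit lowercase hex groups.
if printf '%s\n' "$macaddr" | grep -Eq '^52:54(:[0-9a-f]{2}){4}$'; then
    echo "valid: $macaddr"
fi
```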
<br />
=== User-mode networking ===<br />
<br />
By default, without any {{ic|-netdev}} arguments, QEMU will use user-mode networking with a built-in DHCP server. Your virtual machines will be assigned an IP address when they run their DHCP client, and they will be able to access the physical host's network through IP masquerading done by QEMU.<br />
<br />
{{warning|This only works with the TCP and UDP protocols, so ICMP, including {{ic|ping}}, will not work. Do not use {{ic|ping}} to test network connectivity.}}<br />
<br />
This default configuration allows your virtual machines to easily access the Internet, provided that the host is connected to it, but the virtual machines will not be directly visible on the external network, nor will virtual machines be able to talk to each other if you start up more than one concurrently.<br />
<br />
QEMU's user-mode networking can offer more capabilities such as built-in TFTP or SMB servers, redirecting host ports to the guest (for example to allow SSH connections to the guest) or attaching guests to VLANs so that they can talk to each other. See the QEMU documentation on the {{ic|-net user}} flag for more details.<br />
<br />
However, user-mode networking has limitations in both utility and performance. More advanced network configurations require the use of tap devices or other methods.<br />
<br />
{{Note|If the host system uses [[systemd-networkd]], make sure to symlink the {{ic|/etc/resolv.conf}} file as described in [[systemd-networkd#Required services and setup]], otherwise the DNS lookup in the guest system will not work.}}<br />
<br />
=== Tap networking with QEMU ===<br />
<br />
[[wikipedia:TUN/TAP|Tap devices]] are a Linux kernel feature that allows you to create virtual network interfaces that appear as real network interfaces. Packets sent to a tap interface are delivered to a userspace program, such as QEMU, that has bound itself to the interface.<br />
<br />
QEMU can use tap networking for a virtual machine so that packets sent to the tap interface will be sent to the virtual machine and appear as coming from a network interface (usually an Ethernet interface) in the virtual machine. Conversely, everything that the virtual machine sends through its network interface will appear on the tap interface.<br />
<br />
Tap devices are supported by the Linux bridge drivers, so it is possible to bridge together tap devices with each other and possibly with other host interfaces such as {{ic|eth0}}. This is desirable if you want your virtual machines to be able to talk to each other, or if you want other machines on your LAN to be able to talk to the virtual machines.<br />
<br />
{{Warning|If you bridge together tap device and some host interface, such as {{ic|eth0}}, your virtual machines will appear directly on the external network, which will expose them to possible attack. Depending on what resources your virtual machines have access to, you may need to take all the [[Firewalls|precautions]] you normally would take in securing a computer to secure your virtual machines. If the risk is too great, if the virtual machines have few resources, or if you set up multiple virtual machines, a better solution might be to use [[#Host-only networking|host-only networking]] and set up NAT. In this case you only need one firewall on the host instead of multiple firewalls for each guest.}}<br />
<br />
As indicated in the user-mode networking section, tap devices offer higher networking performance than user-mode. If the guest OS supports the virtio network driver, networking performance will be increased considerably as well. Supposing the use of the tap0 device, that the virtio driver is used on the guest, and that no scripts are used to help start/stop networking, the relevant part of the QEMU command would be:<br />
<br />
-device virtio-net-pci,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no<br />
<br />
If you are already using a tap device with the virtio networking driver, you can further boost networking performance by enabling vhost:<br />
<br />
-device virtio-net-pci,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no,vhost=on<br />
<br />
See http://www.linux-kvm.com/content/how-maximize-virtio-net-performance-vhost-net for more information.<br />
<br />
==== Host-only networking ====<br />
<br />
If the bridge is given an IP address and traffic destined for it is allowed, but no real interface (e.g. {{ic|eth0}}) is connected to the bridge, then the virtual machines will be able to talk to each other and the host system. However, they will not be able to talk to anything on the external network, unless you set up IP masquerading on the physical host. This configuration is called ''host-only networking'' by other virtualization software such as [[VirtualBox]].<br />
<br />
{{Tip|<br />
* If you want to set up IP masquerading, e.g. NAT for virtual machines, see the [[Internet sharing#Enable NAT]] page.<br />
* See [[Network bridge]] for information on creating a bridge.<br />
* You may want to have a DHCP server running on the bridge interface to service the virtual network. For example, to use the {{ic|172.20.0.1/16}} subnet with [[dnsmasq]] as the DHCP server:<br />
# ip addr add 172.20.0.1/16 dev br0<br />
# ip link set br0 up<br />
# dnsmasq --interface&#61;br0 --bind-interfaces --dhcp-range&#61;172.20.0.2,172.20.255.254<br />
}}<br />
<br />
==== Internal networking ====<br />
<br />
If you do not give the bridge an IP address and add an [[iptables]] rule to drop all traffic to the bridge in the INPUT chain, then the virtual machines will be able to talk to each other, but not to the physical host or to the outside network. This configuration is called ''internal networking'' by other virtualization software such as [[VirtualBox]]. You will need to either assign static IP addresses to the virtual machines or run a DHCP server on one of them.<br />
<br />
By default, iptables drops packets forwarded across a bridge. You may need the following iptables rule to allow packets in a bridged network:<br />
<br />
# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT<br />
<br />
==== Bridged networking using qemu-bridge-helper ====<br />
<br />
{{Note|This method is available since QEMU 1.1, see http://wiki.qemu.org/Features/HelperNetworking.}}<br />
<br />
This method does not require a start-up script and readily accommodates multiple taps and multiple bridges. It uses the {{ic|/usr/lib/qemu/qemu-bridge-helper}} binary, which allows creating tap devices on an existing bridge.<br />
<br />
{{Tip|See [[Network bridge]] for information on creating a bridge.}}<br />
<br />
First, create a configuration file containing the names of all bridges to be used by QEMU:<br />
<br />
{{hc|/etc/qemu/bridge.conf|<br />
allow ''bridge0''<br />
allow ''bridge1''<br />
...}}<br />
<br />
Now start the VM. The most basic usage would be:<br />
<br />
$ qemu-system-x86_64 -net nic -net bridge,br=''bridge0'' ''[...]''<br />
<br />
With multiple taps, the most basic usage requires specifying the VLAN for all additional NICs:<br />
<br />
$ qemu-system-x86_64 -net nic -net bridge,br=''bridge0'' -net nic,vlan=1 -net bridge,vlan=1,br=''bridge1'' ''[...]''<br />
<br />
==== Creating bridge manually ====<br />
<br />
{{Style|This section needs serious cleanup and may contain out-of-date information.}}<br />
<br />
{{Tip|Since QEMU 1.1, the [http://wiki.qemu.org/Features/HelperNetworking network bridge helper] can set tun/tap up for you without the need for additional scripting. See [[#Bridged networking using qemu-bridge-helper]].}}<br />
<br />
The following describes how to bridge a virtual machine to a host interface such as {{ic|eth0}}, which is probably the most common configuration. This configuration makes it appear that the virtual machine is located directly on the external network, on the same Ethernet segment as the physical host machine.<br />
<br />
We will replace the normal Ethernet adapter with a bridge adapter and bind the normal Ethernet adapter to it.<br />
<br />
* Install {{Pkg|bridge-utils}}, which provides {{ic|brctl}} to manipulate bridges.<br />
<br />
* Enable IPv4 forwarding:<br />
# sysctl net.ipv4.ip_forward=1<br />
<br />
To make the change permanent, change {{ic|1=net.ipv4.ip_forward = 0}} to {{ic|1=net.ipv4.ip_forward = 1}} in {{ic|/etc/sysctl.d/99-sysctl.conf}}.<br />
<br />
* Load the {{ic|tun}} module and configure it to be loaded on boot. See [[Kernel modules]] for details.<br />
<br />
* Now create the bridge. See [[Bridge with netctl]] for details. Remember to name your bridge {{ic|br0}}, or change the scripts below to use your bridge's name.<br />
<br />
* Create the script that QEMU uses to bring up the tap adapter with {{ic|root:kvm}} 750 permissions:<br />
{{hc|/etc/qemu-ifup|<nowiki><br />
#!/bin/sh<br />
<br />
echo "Executing /etc/qemu-ifup"<br />
echo "Bringing up $1 for bridged mode..."<br />
sudo /usr/bin/ip link set $1 up promisc on<br />
echo "Adding $1 to br0..."<br />
sudo /usr/bin/brctl addif br0 $1<br />
sleep 2<br />
</nowiki>}}<br />
<br />
* Create the script that QEMU uses to bring down the tap adapter in {{ic|/etc/qemu-ifdown}} with {{ic|root:kvm}} 750 permissions:<br />
{{hc|/etc/qemu-ifdown|<nowiki><br />
#!/bin/sh<br />
<br />
echo "Executing /etc/qemu-ifdown"<br />
sudo /usr/bin/ip link set $1 down<br />
sudo /usr/bin/brctl delif br0 $1<br />
sudo /usr/bin/ip link delete dev $1<br />
</nowiki>}}<br />
<br />
* Use {{ic|visudo}} to add the following to your {{ic|sudoers}} file:<br />
{{bc|<nowiki><br />
Cmnd_Alias QEMU=/usr/bin/ip,/usr/bin/modprobe,/usr/bin/brctl<br />
%kvm ALL=NOPASSWD: QEMU<br />
</nowiki>}}<br />
<br />
* Launch QEMU using the following {{ic|run-qemu}} script:<br />
{{hc|run-qemu|<nowiki><br />
#!/bin/bash<br />
USERID=$(whoami)<br />
<br />
# Get name of newly created TAP device; see https://bbs.archlinux.org/viewtopic.php?pid=1285079#p1285079<br />
precreation=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)<br />
sudo /usr/bin/ip tuntap add user $USERID mode tap<br />
postcreation=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)<br />
IFACE=$(comm -13 <(echo "$precreation") <(echo "$postcreation"))<br />
<br />
# This line creates a random MAC address. The downside is the DHCP server will assign a different IP address each time<br />
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))<br />
# Instead, uncomment and edit this line to set a static MAC address. The benefit is that the DHCP server will assign the same IP address.<br />
# macaddr='52:54:be:36:42:a9'<br />
<br />
qemu-system-x86_64 -net nic,macaddr=$macaddr -net tap,ifname="$IFACE" "$@"<br />
<br />
sudo ip link set dev $IFACE down &> /dev/null<br />
sudo ip tuntap del $IFACE mode tap &> /dev/null<br />
</nowiki>}}<br />
<br />
Then, to launch a VM, do something like this:<br />
$ run-qemu -hda ''myvm.img'' -m 512<br />
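The TAP-detection trick in the script can be exercised on its own: {{ic|comm -13}} prints only the lines unique to the second sorted input, so diffing the interface list taken before and after creation yields the new device name. A minimal sketch with canned snapshots (hypothetical device names):<br />

```shell
# Simulate "ip tuntap list | cut -d: -f1 | sort" snapshots taken
# before and after a new tap device is created.
printf 'tap0\ntap1\n' | sort > /tmp/pre.txt
printf 'tap0\ntap1\ntap2\n' | sort > /tmp/post.txt
# comm -13 suppresses lines unique to the first file and lines common
# to both, leaving only the interface added between the snapshots.
new_iface=$(comm -13 /tmp/pre.txt /tmp/post.txt)
echo "$new_iface"
```

This prints {{ic|tap2}}, the only name present in the second snapshot but not the first.<br />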
<br />
* It is recommended for performance and security reasons to disable the [http://ebtables.netfilter.org/documentation/bridge-nf.html firewall on the bridge]:<br />
{{hc|/etc/sysctl.d/10-disable-firewall-on-bridge.conf|<nowiki><br />
net.bridge.bridge-nf-call-ip6tables = 0<br />
net.bridge.bridge-nf-call-iptables = 0<br />
net.bridge.bridge-nf-call-arptables = 0<br />
</nowiki>}}<br />
Run {{ic|sysctl -p /etc/sysctl.d/10-disable-firewall-on-bridge.conf}} to apply the changes immediately.<br />
<br />
See the [http://wiki.libvirt.org/page/Networking#Creating_network_initscripts libvirt wiki] and [https://bugzilla.redhat.com/show_bug.cgi?id=512206 Fedora bug 512206]. If sysctl reports errors during boot about non-existent files, make the {{ic|bridge}} module load at boot. See [[Kernel modules#Automatic module loading with systemd]].<br />
<br />
Alternatively, you can configure [[iptables]] to allow all traffic to be forwarded across the bridge by adding a rule like this:<br />
-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT<br />
<br />
==== Network sharing between physical device and a Tap device through iptables ====<br />
<br />
{{Merge|Internet_sharing|Duplication, not specific to QEMU.}}<br />
<br />
Bridged networking works fine with a wired interface (e.g. {{ic|eth0}}) and is easy to set up. However, if the host is connected to the network through a wireless device, bridging is not possible.<br />
<br />
See [[Network bridge#Wireless interface on a bridge]] as a reference.<br />
<br />
One way to overcome that is to setup a tap device with a static IP, making linux automatically handle the routing for it, and then forward traffic between the tap interface and the device connected to the network through iptables rules.<br />
<br />
See [[Internet sharing]] as a reference.<br />
<br />
There you can find what is needed to share the network between devices, including tap and tun ones. The following only hints further at some of the required host configuration. As indicated in the reference above, the client needs to be configured with a static IP, using the IP assigned to the tap interface as the gateway. The caveat is that the DNS servers on the client might need to be manually edited if they change when moving from one network-connected host device to another.<br />
<br />
To enable IP forwarding on every boot, add the following lines to a sysctl configuration file inside {{ic|/etc/sysctl.d/}}:<br />
<br />
net.ipv4.ip_forward = 1<br />
net.ipv6.conf.default.forwarding = 1<br />
net.ipv6.conf.all.forwarding = 1<br />
<br />
The iptables rules can look like:<br />
<br />
# Forwarding from/to outside<br />
iptables -A FORWARD -i ${INT} -o ${EXT_0} -j ACCEPT<br />
iptables -A FORWARD -i ${INT} -o ${EXT_1} -j ACCEPT<br />
iptables -A FORWARD -i ${INT} -o ${EXT_2} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_0} -o ${INT} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_1} -o ${INT} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_2} -o ${INT} -j ACCEPT<br />
# NAT/Masquerade (network address translation)<br />
iptables -t nat -A POSTROUTING -o ${EXT_0} -j MASQUERADE<br />
iptables -t nat -A POSTROUTING -o ${EXT_1} -j MASQUERADE<br />
iptables -t nat -A POSTROUTING -o ${EXT_2} -j MASQUERADE<br />
<br />
The above assumes there are three devices connected to the network, sharing traffic with one internal device, where for example:<br />
<br />
INT=tap0<br />
EXT_0=eth0<br />
EXT_1=wlan0<br />
EXT_2=tun0<br />
<br />
This shows a forwarding setup that allows sharing wired and wireless connections with the tap device.<br />
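Since the rules differ only in the external interface name, they can be generated in a loop. A sketch using the same variable names; {{ic|echo}} keeps this a dry run so the generated rules can be reviewed, and dropping it (and running as root) applies them:<br />

```shell
INT=tap0
# Hypothetical external interfaces, as in the example above.
rules=$(for EXT in eth0 wlan0 tun0; do
    echo "iptables -A FORWARD -i $INT -o $EXT -j ACCEPT"
    echo "iptables -A FORWARD -i $EXT -o $INT -j ACCEPT"
    echo "iptables -t nat -A POSTROUTING -o $EXT -j MASQUERADE"
done)
# Print the nine generated rules for review.
echo "$rules"
```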
<br />
The forwarding rules shown are stateless and perform pure forwarding. One could restrict specific traffic, putting a firewall in place to protect the guest and others, but that would decrease networking performance, while a simple bridge includes none of it.<br />
<br />
Bonus: whether the connection is wired or wireless, if one connects through a VPN to a remote site with a tun device (supposing the tun device opened for that connection is {{ic|tun0}}) and the prior iptables rules are applied, then the remote connection is also shared with the guest. This avoids the need for the guest to open its own VPN connection. Again, since the guest networking needs to be static, if connecting the host remotely this way, one will most probably need to edit the DNS servers on the guest.<br />
<br />
=== Networking with VDE2 ===<br />
<br />
{{Style|This section needs serious cleanup and may contain out-of-date information.}}<br />
<br />
==== What is VDE? ====<br />
<br />
VDE stands for Virtual Distributed Ethernet. It started as an enhancement of [[User-mode Linux|uml]]_switch. It is a toolbox to manage virtual networks.<br />
<br />
The idea is to create virtual switches, which are basically sockets, and to "plug" both physical and virtual machines into them. The configuration shown here is quite simple; however, VDE is much more powerful than this: it can plug virtual switches together, run them on different hosts, and monitor the traffic in the switches. You are invited to read [http://wiki.virtualsquare.org/wiki/index.php/Main_Page the documentation of the project].<br />
<br />
The advantage of this method is that you do not have to grant sudo privileges to your users. Regular users should not be allowed to run modprobe.<br />
<br />
==== Basics ====<br />
<br />
VDE support can be [[pacman|installed]] via the {{Pkg|vde2}} package.<br />
<br />
In this configuration, tun/tap is used to create a virtual interface on the host. Load the {{ic|tun}} module (see [[Kernel modules]] for details):<br />
<br />
# modprobe tun<br />
<br />
Now create the virtual switch:<br />
<br />
# vde_switch -tap tap0 -daemon -mod 660 -group users<br />
<br />
This line creates the switch, creates {{ic|tap0}}, "plugs" it, and allows the users of the group {{ic|users}} to use it.<br />
<br />
The interface is plugged in but not configured yet. To configure it, run this command:<br />
<br />
# ip addr add 192.168.100.254/24 dev tap0<br />
<br />
Now, you just have to run KVM with these {{ic|-net}} options as a normal user:<br />
<br />
$ qemu-system-x86_64 -net nic -net vde -hda ''[...]''<br />
<br />
Configure networking for your guest as you would do in a physical network.<br />
<br />
{{Tip|You might want to set up NAT on tap device to access the internet from the virtual machine. See [[Internet sharing#Enable NAT]] for more information.}}<br />
<br />
==== Startup scripts ====<br />
<br />
Example of main script starting VDE:<br />
<br />
{{hc|/etc/systemd/scripts/qemu-network-env|<nowiki><br />
#!/bin/sh<br />
# QEMU/VDE network environment preparation script<br />
<br />
# The IP configuration for the tap device that will be used for<br />
# the virtual machine network:<br />
<br />
TAP_DEV=tap0<br />
TAP_IP=192.168.100.254<br />
TAP_MASK=24<br />
TAP_NETWORK=192.168.100.0<br />
<br />
# Host interface<br />
NIC=eth0<br />
<br />
case "$1" in<br />
start)<br />
echo -n "Starting VDE network for QEMU: "<br />
<br />
# If you want tun kernel module to be loaded by script uncomment here<br />
#modprobe tun 2>/dev/null<br />
## Wait for the module to be loaded<br />
#while ! lsmod | grep -q "^tun"; do echo "Waiting for tun device"; sleep 1; done<br />
<br />
# Start tap switch<br />
vde_switch -tap "$TAP_DEV" -daemon -mod 660 -group users<br />
<br />
# Bring tap interface up<br />
ip address add "$TAP_IP"/"$TAP_MASK" dev "$TAP_DEV"<br />
ip link set "$TAP_DEV" up<br />
<br />
# Start IP Forwarding<br />
echo "1" > /proc/sys/net/ipv4/ip_forward<br />
iptables -t nat -A POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE<br />
;;<br />
stop)<br />
echo -n "Stopping VDE network for QEMU: "<br />
# Delete the NAT rules<br />
iptables -t nat -D POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE<br />
<br />
# Bring tap interface down<br />
ip link set "$TAP_DEV" down<br />
<br />
# Kill VDE switch<br />
pgrep vde_switch | xargs kill -TERM<br />
;;<br />
restart|reload)<br />
$0 stop<br />
sleep 1<br />
$0 start<br />
;;<br />
*)<br />
echo "Usage: $0 {start|stop|restart|reload}"<br />
exit 1<br />
esac<br />
exit 0<br />
</nowiki>}}<br />
<br />
Example of systemd service using the above script:<br />
<br />
{{hc|/etc/systemd/system/qemu-network-env.service|<nowiki><br />
[Unit]<br />
Description=Manage VDE Switch<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/etc/systemd/scripts/qemu-network-env start<br />
ExecStop=/etc/systemd/scripts/qemu-network-env stop<br />
RemainAfterExit=yes<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
</nowiki>}}<br />
<br />
Make {{ic|qemu-network-env}} executable:<br />
<br />
# chmod u+x /etc/systemd/scripts/qemu-network-env<br />
<br />
You can [[start]] {{ic|qemu-network-env.service}} as usual.<br />
<br />
====Alternative method====<br />
<br />
If the above method does not work or you do not want to mess with kernel configs, TUN, dnsmasq, and iptables, you can do the following for the same result.<br />
<br />
# vde_switch -daemon -mod 660 -group users<br />
# slirpvde --dhcp --daemon<br />
<br />
Then, to start the VM with a connection to the network of the host:<br />
<br />
$ qemu-system-x86_64 -net nic,macaddr=52:54:00:00:EE:03 -net vde ''disk_image''<br />
<br />
=== VDE2 Bridge ===<br />
<br />
Based on the graphic in [http://selamatpagicikgu.wordpress.com/2011/06/08/quickhowto-qemu-networking-using-vde-tuntap-and-bridge/ quickhowto: qemu networking using vde, tun/tap, and bridge]. Any virtual machine connected to vde is externally exposed. For example, each virtual machine can receive its DHCP configuration directly from your ADSL router.<br />
<br />
==== Basics ====<br />
<br />
Remember that you need {{ic|tun}} module and {{Pkg|bridge-utils}} package.<br />
<br />
Create the vde2/tap device:<br />
<br />
# vde_switch -tap tap0 -daemon -mod 660 -group users<br />
# ip link set tap0 up<br />
<br />
Create bridge:<br />
<br />
# brctl addbr br0<br />
<br />
Add devices:<br />
<br />
# brctl addif br0 eth0<br />
# brctl addif br0 tap0<br />
<br />
And configure bridge interface:<br />
<br />
# dhcpcd br0<br />
<br />
==== Startup scripts ====<br />
<br />
All devices must be set up, but only the bridge needs an IP address. For physical devices on the bridge (e.g. {{ic|eth0}}), this can be done with [[netctl]] using a custom Ethernet profile with:<br />
<br />
{{hc|/etc/netctl/ethernet-noip|<nowiki><br />
Description='A more versatile static Ethernet connection'<br />
Interface=eth0<br />
Connection=ethernet<br />
IP=no<br />
</nowiki>}}<br />
<br />
The following custom systemd service can be used to create and activate a VDE2 tap interface for use in the {{ic|users}} user group.<br />
<br />
{{hc|/etc/systemd/system/vde2@.service|<nowiki><br />
[Unit]<br />
Description=Network Connectivity for %i<br />
Wants=network.target<br />
Before=network.target<br />
<br />
[Service]<br />
Type=oneshot<br />
RemainAfterExit=yes<br />
ExecStart=/usr/bin/vde_switch -tap %i -daemon -mod 660 -group users<br />
ExecStart=/usr/bin/ip link set dev %i up<br />
ExecStop=/usr/bin/ip addr flush dev %i<br />
ExecStop=/usr/bin/ip link set dev %i down<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
</nowiki>}}<br />
<br />
And finally, you can create the [[Bridge with netctl|bridge interface with netctl]].<br />
<br />
=== Shorthand configuration ===<br />
<br />
If you're using QEMU with various networking options a lot, you probably have created a lot of {{ic|-netdev}} and {{ic|-device}} argument pairs, which gets quite repetitive. You can instead use the {{ic|-nic}} argument to combine {{ic|-netdev}} and {{ic|-device}} together, so that, for example, these arguments:<br />
<br />
-netdev tap,id=network0,ifname=tap0,script=no,downscript=no,vhost=on -device virtio-net,netdev=network0<br />
<br />
...become:<br />
<br />
-nic tap,ifname=tap0,script=no,downscript=no,vhost=on,model=virtio-net<br />
<br />
Notice the lack of network IDs, and that the device was created with {{ic|<nowiki>model=...</nowiki>}}. The first half of the {{ic|-nic}} parameters are {{ic|-netdev}} parameters, whereas the second half (after {{ic|<nowiki>model=...</nowiki>}}) are related with the device. The same parameters (for example, {{ic|<nowiki>smb=...</nowiki>}}) are used. There's also a special parameter for {{ic|-nic}} which completely disables the default (user-mode) networking:<br />
<br />
-nic none<br />
<br />
See [https://qemu.weilnetz.de/doc/qemu-doc.html#Network-options QEMU networking documentation] for more information on parameters you can use.<br />
<br />
== Graphic card ==<br />
<br />
QEMU can emulate several types of VGA card. The card type is passed in the {{ic|-vga ''type''}} command line option and can be {{ic|std}}, {{ic|qxl}}, {{ic|vmware}}, {{ic|virtio}}, {{ic|cirrus}} or {{ic|none}}.<br />
<br />
=== std ===<br />
<br />
With {{ic|-vga std}} you can get a resolution of up to 2560 x 1600 pixels without requiring guest drivers. This is the default since QEMU 2.2.<br />
<br />
=== qxl ===<br />
<br />
QXL is a paravirtual graphics driver with 2D support. To use it, pass the {{ic|-vga qxl}} option and install drivers in the guest. You may want to use [[#SPICE]] for improved graphical performance when using QXL.<br />
<br />
On Linux guests, the {{ic|qxl}} and {{ic|bochs_drm}} kernel modules must be loaded to gain decent performance.<br />
<br />
The default VGA memory size for QXL devices is 16M, which is sufficient to drive resolutions approximately up to QHD (2560x1440). To enable higher resolutions, [[#Multi-monitor_support|increase vgamem_mb]].<br />
<br />
=== vmware ===<br />
<br />
Although it is a bit buggy, it performs better than std and cirrus. Install the VMware drivers {{Pkg|xf86-video-vmware}} and {{Pkg|xf86-input-vmmouse}} for Arch Linux guests.<br />
<br />
=== virtio ===<br />
<br />
{{ic|virtio-vga}} / {{ic|virtio-gpu}} is a paravirtual 3D graphics driver based on [https://virgil3d.github.io/ virgl]. Currently a work in progress, supporting only very recent (>= 4.4) Linux guests with {{Pkg|mesa}} (>=11.2) compiled with the option {{ic|1=gallium-drivers=virgl}}.<br />
<br />
To enable 3D acceleration on the guest system, select this vga with {{ic|-vga virtio}} and enable the OpenGL context in the display device with {{ic|1=-display sdl,gl=on}} or {{ic|1=-display gtk,gl=on}} for the sdl and gtk display output respectively. Successful configuration can be confirmed by looking at the kernel log in the guest:<br />
<br />
{{hc|$ dmesg {{!}} grep drm |<br />
[drm] pci: virtio-vga detected<br />
[drm] virgl 3d acceleration enabled<br />
}}<br />
<br />
=== cirrus ===<br />
<br />
The cirrus graphical adapter was the default [http://wiki.qemu.org/ChangeLog/2.2#VGA before 2.2]. It [https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/ should not] be used on modern systems.<br />
<br />
=== none ===<br />
<br />
This is like a PC that has no VGA card at all. You would not even be able to access it with the {{ic|-vnc}} option. Also, this is different from the {{ic|-nographic}} option which lets QEMU emulate a VGA card, but disables the SDL display.<br />
<br />
== SPICE ==<br />
The [http://spice-space.org/ SPICE project] aims to provide a complete open source solution for remote access to virtual machines in a seamless way. SPICE can only be used when using [[#qxl]] as the graphical output.<br />
=== Enabling SPICE via the command line ===<br />
The following is an example of booting with SPICE as the remote desktop protocol, including support for copy and paste with the host:<br />
<br />
$ qemu-system-x86_64 -vga qxl -device virtio-serial-pci -spice port=5930,disable-ticketing -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent<br />
<br />
The parameters have the following meaning:<br />
# {{ic|-device virtio-serial-pci}} adds a virtio-serial device<br />
# {{ic|1=-device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0}} opens a port for spice vdagent in the virtio-serial device,<br />
# {{ic|1=-chardev spicevmc,id=spicechannel0,name=vdagent}} adds a spicevmc chardev for that port. It is important that the {{ic|1=chardev=}} option of the {{ic|virtserialport}} device matches the {{ic|1=id=}} option given to the {{ic|chardev}} option ({{ic|spicechannel0}} in this example). It is also important that the port name is {{ic|com.redhat.spice.0}}, because that is the namespace vdagent looks for in the guest. Finally, specify {{ic|1=name=vdagent}} so that spice knows what this channel is for.<br />
{{Tip|<br />
* Since QEMU in SPICE mode acts similarly to a remote desktop server, it may be more convenient to run QEMU in daemon mode with the {{ic|-daemonize}} parameter.<br />
* Using [[wikipedia:Unix_socket|Unix sockets]] instead of TCP ports does not involve using network stack on the host system, so it is [https://unix.stackexchange.com/questions/91774/performance-of-unix-sockets-vs-tcp-ports reportedly] better for performance. Example:<br />
{{bc|1=$ qemu-system-x86_64 -vga qxl -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent -spice unix,addr=/tmp/vm_spice.socket,disable-ticketing}}<br />
Then connect with {{ic|$ remote-viewer spice+unix:///tmp/vm_spice.socket}} or with {{ic|1=$ spicy --uri="spice+unix:///tmp/vm_spice.socket"}}.<br />
}}<br />
<br />
=== Connect to the guest with a SPICE client ===<br />
A SPICE client is necessary to connect to the guest. In Arch, the following clients are available:<br />
<br />
{{App|virt-viewer|is the recommended SPICE client by the protocol developers|run it with {{ic|$ remote-viewer spice://127.0.0.1:5930}}|{{Pkg|virt-viewer}}}}<br />
<br />
{{App|spice-gtk|is a GTK+ client which can also be used|run it with {{ic|$ spicy -h 127.0.0.1 -p 5930}}|{{Pkg|spice-gtk}}}}<br />
<br />
{{Tip|To connect to the guest through SSH tunneling, the following type of command can be used: {{bc|$ ssh -fL 5999:localhost:5930 ''my.domain.org'' sleep 10; spicy -h 127.0.0.1 -p 5999}}<br />
This example connects ''spicy'' to the local port {{ic|5999}} which is forwarded through SSH to the guest's SPICE server located at the address ''my.domain.org'', port {{ic|5930}}.<br />
Note the {{ic|-f}} option that requests ssh to execute the command {{ic|sleep 10}} in the background. This way, the ssh session runs while the client is active and auto-closes once the client ends.<br />
}}<br />
<br />
For clients that run on smartphones or on other platforms, refer to the ''Other clients'' section in [http://www.spice-space.org/download.html spice-space download].<br />
<br />
=== SPICE support on the guest ===<br />
For '''Arch Linux guests''', for improved support for multiple monitors or clipboard sharing, the following packages should be installed:<br />
* {{Pkg|spice-vdagent}}: Spice agent xorg client that enables copy and paste between client and X-session and more. [[Enable]] {{ic|spice-vdagentd.service}} after installation.<br />
* {{Pkg|xf86-video-qxl}}: Xorg X11 qxl video driver<br />
For guests under '''other operating systems''', refer to the ''Guest'' section in [http://www.spice-space.org/download.html spice-space download].<br />
<br />
=== Password authentication with SPICE ===<br />
If you want to enable password authentication with SPICE you need to remove {{ic|disable-ticketing}} from the {{ic|-spice}} argument and instead add {{ic|1=password=''yourpassword''}}. For example:<br />
<br />
$ qemu-system-x86_64 -vga qxl -spice port=5900,password=''yourpassword'' -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent<br />
<br />
Your SPICE client should now ask for the password to be able to connect to the SPICE server.<br />
<br />
=== TLS encrypted communication with SPICE ===<br />
<br />
You can also configure TLS encryption for communicating with the SPICE server. First, you need to have a directory which contains the following files (the names must be exactly as indicated):<br />
* {{ic|ca-cert.pem}}: the CA master certificate.<br />
* {{ic|server-cert.pem}}: the server certificate signed with {{ic|ca-cert.pem}}.<br />
* {{ic|server-key.pem}}: the server private key.<br />
<br />
An example of generation of self-signed certificates with your own generated CA for your server is shown in the [https://www.spice-space.org/spice-user-manual.html#_generating_self_signed_certificates Spice User Manual].<br />
<br />
Afterwards, you can run QEMU with SPICE as explained above but using the following {{ic|-spice}} argument: {{ic|1=-spice tls-port=5901,password=''yourpassword'',x509-dir=''/path/to/pki_certs''}}, where {{ic|''/path/to/pki_certs''}} is the directory path that contains the three needed files shown earlier.<br />
<br />
It is now possible to connect to the server using {{pkg|virt-viewer}}:<br />
<br />
$ remote-viewer spice://''hostname''?tls-port=5901 --spice-ca-file=''/path/to/ca-cert.pem'' --spice-host-subject="C=''XX'',L=''city'',O=''organization'',CN=''hostname''" --spice-secure-channels=all<br />
<br />
Keep in mind that the {{ic|--spice-host-subject}} parameter needs to be set according to your {{ic|server-cert.pem}} subject. You also need to copy {{ic|ca-cert.pem}} to every client to verify the server certificate.<br />
<br />
{{Tip|You can get the subject line of the server certificate in the correct format for {{ic|--spice-host-subject}} (with entries separated by commas) using the following command: {{bc|<nowiki>$ openssl x509 -noout -subject -in server-cert.pem | cut -d' ' -f2- | sed 's/\///' | sed 's/\//,/g'</nowiki>}}<br />
}}<br />
<br />
The equivalent {{Pkg|spice-gtk}} command is:<br />
<br />
$ spicy -h ''hostname'' -s 5901 --spice-ca-file=ca-cert.pem --spice-host-subject="C=''XX'',L=''city'',O=''organization'',CN=''hostname''" --spice-secure-channels=all<br />
<br />
== VNC ==<br />
<br />
One can add the {{ic|-vnc :''X''}} option to have QEMU redirect the VGA display to the VNC session. Substitute {{ic|''X''}} for the number of the display (0 will then listen on 5900, 1 on 5901...).<br />
<br />
$ qemu-system-x86_64 -vnc :0<br />
<br />
An example is also provided in the [[#Starting QEMU virtual machines on boot]] section.<br />
{{Warning|The default VNC server setup does not use any form of authentication. Any user can connect from any host.}}<br />
<br />
=== Basic password authentication ===<br />
<br />
An access password can easily be set up using the {{ic|password}} option. The password must be set in the QEMU monitor, and connection is only possible once it has been provided.<br />
<br />
$ qemu-system-x86_64 -vnc :0,password -monitor stdio<br />
<br />
In the QEMU monitor, the password is set using the command {{ic|change vnc password}} and then entering the password.<br />
<br />
Alternatively, create the following file:<br />
<br />
{{hc|vncpassword.txt|change vnc password<br />
''mykvmvncpassword''}}<br />
<br />
Then the following command line directly runs VNC with a password:<br />
<br />
$ cat vncpassword.txt | qemu-system-x86_64 -vnc :0,password -monitor stdio<br />
<br />
{{Note|The password is limited to 8 characters and can be guessed through a brute force attack. More elaborate protection is strongly recommended for a public network.}}<br />
<br />
== Audio ==<br />
<br />
=== Host ===<br />
<br />
The audio driver used by QEMU is set with the {{ic|QEMU_AUDIO_DRV}} environment variable:<br />
<br />
$ export QEMU_AUDIO_DRV=pa<br />
<br />
Run the following command to get QEMU's configuration options related to PulseAudio:<br />
<br />
$ qemu-system-x86_64 -audio-help | awk '/Name: pa/' RS=<br />
<br />
The listed options can be exported as environment variables, for example:<br />
<br />
{{bc|1=<br />
$ export QEMU_PA_SINK=alsa_output.pci-0000_04_01.0.analog-stereo.monitor<br />
$ export QEMU_PA_SOURCE=input<br />
}}<br />
<br />
=== Guest ===<br />
To get a list of the supported emulated sound hardware:<br />
$ qemu-system-x86_64 -soundhw help<br />
<br />
To use e.g. the {{ic|hda}} driver for the guest, pass {{ic|-soundhw hda}} to QEMU.<br />
<br />
{{Note|The emulated video card driver for the guest machine may also cause sound quality problems. Test them one by one to make it work. You can list the possible options with {{ic|<nowiki>qemu-system-x86_64 -h | grep vga</nowiki>}}.}}<br />
<br />
== Installing virtio drivers ==<br />
<br />
QEMU offers guests the ability to use paravirtualized block and network devices using the [http://wiki.libvirt.org/page/Virtio virtio] drivers, which provide better performance and lower overhead.<br />
<br />
* A virtio block device requires the option {{Ic|-drive}} for passing a disk image, with parameter {{Ic|1=if=virtio}}:<br />
$ qemu-system-x86_64 -boot order=c -drive file=''disk_image'',if=virtio<br />
<br />
* Almost the same goes for the network:<br />
$ qemu-system-x86_64 -net nic,model=virtio<br />
<br />
{{Note|This will only work if the guest machine has drivers for virtio devices. Linux does, and the required drivers are included in Arch Linux, but there is no guarantee that virtio devices will work with other operating systems.}}<br />
<br />
=== Preparing an (Arch) Linux guest ===<br />
<br />
To use virtio devices after an Arch Linux guest has been installed, the following modules must be loaded in the guest: {{Ic|virtio}}, {{Ic|virtio_pci}}, {{Ic|virtio_blk}}, {{Ic|virtio_net}}, and {{Ic|virtio_ring}}. For 32-bit guests, the specific "virtio" module is not necessary.<br />
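As a sketch, these modules can be loaded at boot through a {{ic|modules-load.d}} fragment (hypothetical file name; this is normally unnecessary once the initramfs already includes them):<br />
{{hc|/etc/modules-load.d/virtio.conf|<nowiki><br />
virtio<br />
virtio_pci<br />
virtio_blk<br />
virtio_net<br />
virtio_ring<br />
</nowiki>}}<br />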
<br />
If you want to boot from a virtio disk, the initial ramdisk must contain the necessary modules. By default, this is handled by [[mkinitcpio]]'s {{ic|autodetect}} hook. Otherwise use the {{ic|MODULES}} array in {{ic|/etc/mkinitcpio.conf}} to include the necessary modules and rebuild the initial ramdisk.<br />
<br />
{{hc|/etc/mkinitcpio.conf|2=<br />
MODULES=(virtio virtio_blk virtio_pci virtio_net)}}<br />
<br />
Virtio disks are recognized with the prefix {{ic|'''v'''}} (e.g. {{ic|'''v'''da}}, {{ic|'''v'''db}}, etc.); therefore, changes must be made in at least {{ic|/etc/fstab}} and {{ic|/boot/grub/grub.cfg}} when booting from a virtio disk.<br />
<br />
{{Tip|When disks are referenced by [[UUID]] in both {{ic|/etc/fstab}} and the bootloader, nothing has to be done.}}<br />
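For illustration, an {{ic|/etc/fstab}} entry referencing the root filesystem by UUID (hypothetical value) mounts the same filesystem whether the disk shows up as {{ic|sda1}} or {{ic|vda1}}:<br />
{{hc|/etc/fstab|<nowiki><br />
UUID=0a3407de-014b-458b-b5c1-848e92a327a3 / ext4 defaults 0 1<br />
</nowiki>}}<br />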
<br />
Further information on paravirtualization with KVM can be found [http://www.linux-kvm.org/page/Boot_from_virtio_block_device here].<br />
<br />
You might also want to install {{Pkg|qemu-guest-agent}} to implement support for QMP commands that will enhance the hypervisor management capabilities. After installing the package you can enable and start the {{ic|qemu-ga.service}}.<br />
<br />
=== Preparing a Windows guest ===<br />
<br />
{{Note|1=The only (reliable) way to upgrade a Windows 8.1 guest to Windows 10 seems to be to temporarily choose cpu core2duo,nx for the install [http://ubuntuforums.org/showthread.php?t=2289210]. After the install, you may revert to other cpu settings (8/8/2015).}}<br />
<br />
==== Block device drivers ====<br />
<br />
===== New Install of Windows =====<br />
<br />
Windows does not come with the virtio drivers. Therefore, you will need to load them during installation. There are basically two ways to do this: via Floppy Disk or via ISO files. Both images can be downloaded from the [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso Fedora repository].<br />
<br />
The floppy disk option is difficult because you will need to press F6 (Shift-F6 on newer Windows) at the very beginning of powering on the virtual machine, which leaves little time to connect your VNC console window. You can attempt to add a delay to the boot sequence; see {{man|1|qemu}} for more details about applying a delay at boot.<br />
<br />
The ISO option to load drivers is the preferred way, but it is available only on Windows Vista and Windows Server 2008 and later. The procedure is to load the image with virtio drivers in an additional cdrom device along with the primary disk device and Windows installer:<br />
<br />
$ qemu-system-x86_64 ... \<br />
-drive file=''/path/to/primary/disk.img'',index=0,media=disk,if=virtio \<br />
-drive file=''/path/to/installer.iso'',index=2,media=cdrom \<br />
-drive file=''/path/to/virtio.iso'',index=3,media=cdrom \<br />
...<br />
<br />
During the installation, the Windows installer will ask you for your Product key and perform some additional checks. When it gets to the "Where do you want to install Windows?" screen, it will give a warning that no disks are found. Follow the example instructions below (based on Windows Server 2012 R2 with Update).<br />
<br />
* Select the option {{ic|Load Drivers}}.<br />
* Uncheck the box for "Hide drivers that aren't compatible with this computer's hardware".<br />
* Click the Browse button and open the CDROM for the virtio iso, usually named "virtio-win-XX".<br />
* Now browse to {{ic|E:\viostor\[your-os]\amd64}}, select it, and press OK.<br />
* Click Next<br />
<br />
You should now see your virtio disk(s) listed here, ready to be selected, formatted and installed to.<br />
<br />
===== Change Existing Windows VM to use virtio =====<br />
Modifying an existing Windows guest to boot from a virtio disk is a bit tricky.<br />
<br />
You can download the virtio disk driver from the [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso Fedora repository].<br />
<br />
Now you need to create a new disk image, which will force Windows to search for the driver. For example:<br />
<br />
$ qemu-img create -f qcow2 ''fake.qcow2'' 1G<br />
<br />
Run the original Windows guest (with the boot disk still in IDE mode) with the fake disk (in virtio mode) and a CD-ROM with the driver.<br />
<br />
$ qemu-system-x86_64 -m 512 -drive file=''windows_disk_image'',if=ide -drive file=''fake.qcow2'',if=virtio -cdrom virtio-win-0.1-81.iso<br />
<br />
Windows will detect the fake disk and try to find a driver for it. If it fails, go to the ''Device Manager'', locate the SCSI drive with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not navigate to the driver folder within the CD-ROM, simply select the CD-ROM drive and Windows will find the appropriate driver automatically (tested for Windows 7 SP1). Do not forget to select the checkbox which says to search for directories recursively.<br />
<br />
When the installation is successful, you can turn off the virtual machine and launch it again, now with the boot disk attached in virtio mode:<br />
<br />
$ qemu-system-x86_64 -m 512 -drive file=''windows_disk_image'',if=virtio<br />
<br />
{{Note|If you encounter the Blue Screen of Death, make sure you did not forget the {{ic|-m}} parameter, and that you do not boot with virtio instead of ide for the system drive before drivers are installed.}}<br />
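The whole procedure above can be condensed into a short shell sketch. The disk image and driver ISO file names are examples; the commands are only assembled into strings and printed here so they can be reviewed before running:<br />
<br />
```shell
# Sketch of the virtio migration workflow above. File names are examples.
WIN_IMG=windows_disk_image.qcow2
DRIVER_ISO=virtio-win-0.1-81.iso

# 1. Create the 1G fake virtio disk (run once):
#      qemu-img create -f qcow2 fake.qcow2 1G

# 2. Boot with the system disk on IDE plus the fake virtio disk and
#    the driver CD-ROM, so Windows installs the viostor driver:
step1="qemu-system-x86_64 -m 512 \
 -drive file=$WIN_IMG,if=ide \
 -drive file=fake.qcow2,if=virtio \
 -cdrom $DRIVER_ISO"

# 3. After the driver is installed, boot the system disk on virtio:
step2="qemu-system-x86_64 -m 512 -drive file=$WIN_IMG,if=virtio"

# Print both command lines for review.
printf '%s\n' "$step1" "$step2"
```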
<br />
==== Network drivers ====<br />
<br />
Installing virtio network drivers is a bit easier, simply add the {{ic|-net}} argument as explained above.<br />
<br />
$ qemu-system-x86_64 -m 512 -drive file=''windows_disk_image'',if=virtio -net nic,model=virtio -cdrom virtio-win-0.1-74.iso<br />
<br />
Windows will detect the network adapter and try to find a driver for it. If it fails, go to the ''Device Manager'', locate the network adapter with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not forget to select the checkbox which says to search for directories recursively.<br />
<br />
==== Balloon driver ====<br />
<br />
If you want to track your guest's memory state (for example via the {{ic|virsh}} command {{ic|dommemstat}}) or change the guest's memory size at runtime (you will not be able to change the total memory size, but you can limit memory usage by inflating the balloon driver), you will need to install the guest balloon driver.<br />
<br />
For this you will need to go to ''Device Manager'', locate the ''PCI standard RAM Controller'' in ''System devices'' (or an unrecognized PCI controller under ''Other devices'') and choose ''Update driver''. In the window that opens, choose ''Browse my computer...'' and select the CD-ROM (do not forget the ''Include subdirectories'' checkbox). Reboot after installation. This installs the driver and allows you to inflate the balloon (for example via the HMP command {{ic|balloon ''memory_size''}}, which will cause the balloon to take as much memory as possible in order to shrink the guest's available memory size to ''memory_size''). However, you still will not be able to track the guest memory state. To do this, you need to install the ''Balloon'' service properly: open a command line as administrator, navigate into the ''Balloon'' directory on the CD-ROM and then into the subdirectory matching your system and architecture. Once you are in the ''amd64'' (or ''x86'') directory, run {{ic|blnsrv.exe -i}} to perform the installation. After that, the {{ic|virsh}} command {{ic|dommemstat}} should output all supported values.<br />
<br />
=== Preparing a FreeBSD guest ===<br />
<br />
Install the {{ic|emulators/virtio-kmod}} port if you are using FreeBSD 8.3 or later; as of 10.0-CURRENT the drivers are included in the kernel. After installation, add the following to your {{ic|/boot/loader.conf}} file:<br />
<br />
{{bc|<nowiki><br />
virtio_load="YES"<br />
virtio_pci_load="YES"<br />
virtio_blk_load="YES"<br />
if_vtnet_load="YES"<br />
virtio_balloon_load="YES"<br />
</nowiki>}}<br />
<br />
Then modify your {{ic|/etc/fstab}} by doing the following:<br />
<br />
{{bc|<nowiki><br />
sed -i.bak "s/ada/vtbd/g" /etc/fstab<br />
</nowiki>}}<br />
<br />
And verify that {{ic|/etc/fstab}} is consistent. If anything goes wrong, just boot into a rescue CD and copy {{ic|/etc/fstab.bak}} back to {{ic|/etc/fstab}}.<br />
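To preview what the substitution does before touching the real file, the following self-contained sketch runs the same {{ic|sed}} expression on a throw-away copy (the device entries are made up):<br />
<br />
```shell
# Demonstrate the ada -> vtbd rename on a sample fstab (not the real one).
cat > /tmp/fstab.demo <<'EOF'
/dev/ada0p2  /      ufs   rw  1  1
/dev/ada0p3  none   swap  sw  0  0
EOF

# Same substitution as above, keeping a backup with a .bak suffix.
sed -i.bak 's/ada/vtbd/g' /tmp/fstab.demo

cat /tmp/fstab.demo       # now lists /dev/vtbd0p2 and /dev/vtbd0p3
```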
<br />
== QEMU monitor ==<br />
<br />
While QEMU is running, a monitor console is provided that offers several ways to interact with the running virtual machine. The QEMU monitor offers capabilities such as obtaining information about the current virtual machine, hotplugging devices, creating snapshots of its current state, etc. To see the list of all commands, run {{ic|help}} or {{ic|?}} in the QEMU monitor console or review the relevant section of the [https://qemu.weilnetz.de/doc/qemu-doc.html#pcsys_005fmonitor official QEMU documentation].<br />
<br />
=== Accessing the monitor console ===<br />
<br />
When using the {{ic|std}} default graphics option, one can access the QEMU monitor by pressing {{ic|Ctrl+Alt+2}} or by clicking ''View > compatmonitor0'' in the QEMU window. To return to the virtual machine graphical view either press {{ic|Ctrl+Alt+1}} or click ''View > VGA''.<br />
<br />
However, the standard method of accessing the monitor is not always convenient and does not work with all the graphic outputs QEMU supports. Alternative ways of accessing the monitor are described below:<br />
<br />
* [[telnet]]: Run QEMU with the {{ic|-monitor telnet:127.0.0.1:''port'',server,nowait}} parameter. When the virtual machine is started you will be able to access the monitor via telnet:<br />
$ telnet 127.0.0.1 ''port''<br />
{{Note|If {{ic|127.0.0.1}} is specified as the listening IP, it will only be possible to connect to the monitor from the same host QEMU is running on. If connecting from remote hosts is desired, QEMU must be told to listen on {{ic|0.0.0.0}} as follows: {{ic|-monitor telnet:0.0.0.0:''port'',server,nowait}}. Keep in mind that it is recommended to have a [[firewall]] configured in this case, or to make sure your local network is completely trustworthy, since this connection is unauthenticated and unencrypted.}}<br />
<br />
* UNIX socket: Run QEMU with the {{ic|-monitor unix:''socketfile'',server,nowait}} parameter. Then you can connect with either {{pkg|socat}} or {{pkg|openbsd-netcat}}.<br />
<br />
For example, if QEMU is run via:<br />
<br />
$ qemu-system-x86_64 ''[...]'' -monitor unix:/tmp/monitor.sock,server,nowait ''[...]''<br />
<br />
It is possible to connect to the monitor with:<br />
<br />
$ socat - UNIX-CONNECT:/tmp/monitor.sock<br />
<br />
Or with:<br />
<br />
$ nc -U /tmp/monitor.sock<br />
<br />
* TCP: You can expose the monitor over TCP with the argument {{ic|-monitor tcp:127.0.0.1:''port'',server,nowait}}. Then connect with netcat, either {{pkg|openbsd-netcat}} or {{pkg|gnu-netcat}} by running:<br />
<br />
$ nc 127.0.0.1 ''port''<br />
<br />
{{Note|To be able to connect to the TCP socket from devices other than the host QEMU is running on, you need to listen on {{ic|0.0.0.0}} as explained in the telnet case. The same security warnings apply in this case as well.}}<br />
<br />
* Standard I/O: It is possible to access the monitor automatically from the same terminal QEMU is run in by launching it with the argument {{ic|-monitor stdio}}.<br />
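As an illustration of the UNIX-socket variant, a tiny helper function can wrap the {{ic|socat}} invocation shown above (the socket path is an example and must match what was passed to {{ic|-monitor}}):<br />
<br />
```shell
# Small helper: send one command to a QEMU monitor listening on a UNIX
# socket. Requires socat; the socket path must be the one given to
# -monitor unix:...,server,nowait when the VM was started.
monitor_cmd() {
    socket=$1
    shift
    printf '%s\n' "$*" | socat - "UNIX-CONNECT:$socket"
}

# Usage (with a VM started as in the example above):
#   monitor_cmd /tmp/monitor.sock info status
```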
<br />
=== Sending keyboard presses to the virtual machine using the monitor console ===<br />
<br />
Some combinations of keys may be difficult to perform on virtual machines due to the host intercepting them instead in some configurations (a notable example is the {{ic|Ctrl+Alt+F*}} key combinations, which change the active tty). To avoid this problem, the problematic combination of keys may be sent via the monitor console instead. Switch to the monitor and use the {{ic|sendkey}} command to forward the necessary keypresses to the virtual machine. For example:<br />
<br />
(qemu) sendkey ctrl-alt-f2<br />
<br />
=== Creating and managing snapshots via the monitor console ===<br />
<br />
{{Note|This feature will '''only''' work when the virtual machine disk image is in ''qcow2'' format. It will not work with ''raw'' images.}}<br />
<br />
It is sometimes desirable to save the current state of a virtual machine and to be able to revert it to a previously saved snapshot at any time. The QEMU monitor console provides the user with the necessary utilities to create snapshots, manage them, and revert the machine state to a saved snapshot.<br />
<br />
* Use {{ic|savevm ''name''}} in order to create a snapshot with the tag ''name''.<br />
* Use {{ic|loadvm ''name''}} to revert the virtual machine to the state of the snapshot ''name''.<br />
* Use {{ic|delvm ''name''}} to delete the snapshot tagged as ''name''.<br />
* Use {{ic|info snapshots}} to see a list of saved snapshots. Snapshots are identified by both an auto-incremented ID number and a text tag (set by the user on snapshot creation).<br />
<br />
=== Running the virtual machine in immutable mode ===<br />
<br />
It is possible to run a virtual machine in a frozen state, so that all changes are discarded when the virtual machine is powered off, simply by running QEMU with the {{ic|-snapshot}} parameter. When the disk image is written to by the guest, the changes are saved in a temporary file in {{ic|/tmp}} and discarded when QEMU halts.<br />
<br />
However, if a machine is running in frozen mode, it is still possible to save the changes to the disk image afterwards, if desired, by using the monitor console and running the following command:<br />
<br />
(qemu) commit all<br />
<br />
Snapshots created while running in frozen mode are likewise discarded as soon as QEMU exits, unless the changes are explicitly committed to disk.<br />
<br />
=== Pause and power options via the monitor console ===<br />
<br />
Some operations of a physical machine can be emulated by QEMU using the following monitor commands:<br />
<br />
* {{ic|system_powerdown}} will send an ACPI shutdown request to the virtual machine. This effect is similar to the power button in a physical machine.<br />
* {{ic|system_reset}} will reset the virtual machine similarly to a reset button in a physical machine. This operation can cause data loss and file system corruption since the virtual machine is not cleanly restarted.<br />
* {{ic|stop}} will pause the virtual machine.<br />
* {{ic|cont}} will resume a virtual machine previously paused.<br />
<br />
=== Taking screenshots of the virtual machine ===<br />
<br />
Screenshots of the virtual machine graphic display can be obtained in the PPM format by running the following command in the monitor console:<br />
<br />
(qemu) screendump ''file.ppm''<br />
<br />
== QEMU machine protocol ==<br />
<br />
The QEMU machine protocol (QMP) is a JSON-based protocol which allows applications to control a QEMU instance. Similarly to the [[#QEMU monitor]], it offers ways to interact with a running machine, and the JSON protocol makes it possible to do so programmatically. The description of all the QMP commands can be found in [https://raw.githubusercontent.com/coreos/qemu/master/qmp-commands.hx qmp-commands].<br />
<br />
=== Start QMP ===<br />
<br />
The usual way to control the guest using the QMP protocol is to open a TCP socket when launching the machine using the {{ic|-qmp}} option. This example uses TCP port 4444:<br />
<br />
$ qemu-system-x86_64 ''[...]'' -qmp tcp:localhost:4444,server,nowait<br />
<br />
Then one way to communicate with the QMP agent is to use [[netcat]]:<br />
<br />
{{hc|nc localhost 4444|{"QMP": {"version": {"qemu": {"micro": 0, "minor": 1, "major": 3}, "package": ""}, "capabilities": []} } }}<br />
<br />
At this stage, the only command that can be recognized is {{ic|qmp_capabilities}}, so that QMP enters into command mode. Type:<br />
<br />
{"execute": "qmp_capabilities"}<br />
<br />
Now QMP is ready to receive commands. To retrieve the list of recognized commands, use:<br />
<br />
{"execute": "query-commands"}<br />
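For non-interactive use, both the capability handshake and a follow-up command can be prepared in the shell and piped into the socket in one session. This sketch only builds and prints the messages; the commented line shows how they would be sent ({{ic|query-status}} is one example command):<br />
<br />
```shell
# Build the two QMP messages: the mandatory capability handshake,
# followed by a sample query (query-status reports the VM run state).
handshake='{"execute": "qmp_capabilities"}'
query='{"execute": "query-status"}'

# To send them to a VM started with -qmp tcp:localhost:4444,server,nowait:
#   printf '%s\n' "$handshake" "$query" | nc localhost 4444

# Print the messages, one per line, for review.
printf '%s\n' "$handshake" "$query"
```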
<br />
=== Live merging of child image into parent image ===<br />
<br />
It is possible to merge a running snapshot into its parent by issuing a {{ic|block-commit}} command. In its simplest form the following line will commit the child into its parent:<br />
{"execute": "block-commit", "arguments": {"device": "''devicename''"}}<br />
<br />
Upon reception of this command, the handler looks for the base image and converts it from read only to read write mode and then runs the commit job.<br />
<br />
Once the ''block-commit'' operation has completed, the event {{ic|BLOCK_JOB_READY}} will be emitted, signalling that the synchronization has finished. The job can then be gracefully completed by issuing the command {{ic|block-job-complete}}:<br />
<br />
{"execute": "block-job-complete", "arguments": {"device": "''devicename''"}}<br />
<br />
Until such a command is issued, the ''commit'' operation remains active.<br />
After successful completion, the base image remains in read write mode and becomes the new active layer. On the other hand, the child image becomes invalid and it is the responsibility of the user to clean it up.<br />
<br />
{{Tip|The list of devices and their names can be retrieved by executing the command {{ic|query-block}} and parsing the results. The device name is in the {{ic|device}} field, for example {{ic|ide0-hd0}} for the hard disk in this example: {{hc|{"execute": "query-block"}|{"return": [{"io-status": "ok", "device": "'''ide0-hd0'''", "locked": false, "removable": false, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"backing-image": {"virtual-size": 27074281472, "filename": "parent.qcow2", ... } }} }}<br />
<br />
=== Live creation of a new snapshot ===<br />
To create a new snapshot out of a running image, run the command:<br />
{"execute": "blockdev-snapshot-sync", "arguments": {"device": "''devicename''","snapshot-file": "''new_snapshot_name''.qcow2"}}<br />
<br />
This creates an overlay file named {{ic|''new_snapshot_name''.qcow2}} which then becomes the new active layer.<br />
<br />
== Tips and tricks ==<br />
=== Improve virtual machine performance ===<br />
<br />
There are a number of techniques that you can use to improve the performance of the virtual machine. For example:<br />
<br />
* Use the {{ic|-cpu host}} option to make QEMU emulate the host's exact CPU. If you do not do this, it may be trying to emulate a more generic CPU.<br />
* Especially for Windows guests, enable [http://blog.wikichoon.com/2014/07/enabling-hyper-v-enlightenments-with-kvm.html Hyper-V enlightenments]: {{ic|1=-cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time}}.<br />
* If the host machine has multiple cores, assign the guest more cores using the {{ic|-smp}} option.<br />
* Make sure you have assigned the virtual machine enough memory. By default, QEMU only assigns 128 MiB of memory to each virtual machine. Use the {{ic|-m}} option to assign more memory. For example, {{ic|-m 1024}} runs a virtual machine with 1024 MiB of memory.<br />
* Use KVM if possible: add {{ic|1=-machine type=pc,accel=kvm}} to the QEMU start command you use.<br />
* If supported by drivers in the guest operating system, use [http://wiki.libvirt.org/page/Virtio virtio] for network and/or block devices. For example:<br />
$ qemu-system-x86_64 -net nic,model=virtio -net tap,if=tap0,script=no -drive file=''disk_image'',media=disk,if=virtio<br />
* Use TAP devices instead of user-mode networking. See [[#Tap networking with QEMU]].<br />
* If the guest OS is doing heavy writing to its disk, you may benefit from certain mount options on the host's file system. For example, you can mount an [[Ext4|ext4 file system]] with the option {{ic|1=barrier=0}}. You should read the documentation for any options that you change because sometimes performance-enhancing options for file systems come at the cost of data integrity.<br />
* If you have a raw disk image, you may want to disable the cache:<br />
$ qemu-system-x86_64 -drive file=''disk_image'',if=virtio,'''cache=none'''<br />
* Use the native Linux AIO:<br />
$ qemu-system-x86_64 -drive file=''disk_image'',if=virtio''',aio=native,cache.direct=on'''<br />
* If you use a qcow2 disk image, I/O performance can be improved considerably by ensuring that the L2 cache is of sufficient size. The [https://blogs.igalia.com/berto/2015/12/17/improving-disk-io-performance-in-qemu-2-5-with-the-qcow2-l2-cache/ formula] to calculate L2 cache is: l2_cache_size = disk_size * 8 / cluster_size. Assuming the qcow2 image was created with the default cluster size of 64K, this means that for every 8 GB in size of the qcow2 image, 1 MB of L2 cache is best for performance. Only 1 MB is used by QEMU by default; specifying a larger cache is done on the QEMU command line. For instance, to specify 4 MB of cache (suitable for a 32 GB disk with a cluster size of 64K):<br />
$ qemu-system-x86_64 -drive file=''disk_image'',format=qcow2,l2-cache-size=4M<br />
* If you are running multiple virtual machines concurrently that all have the same operating system installed, you can save memory by enabling [[wikipedia:Kernel_SamePage_Merging_(KSM)|kernel same-page merging]]. See [[#Enabling KSM]].<br />
* In some cases, memory can be reclaimed from running virtual machines by running a memory ballooning driver in the guest operating system and launching QEMU using {{ic|-device virtio-balloon}}.<br />
* It is possible to use an emulation layer for an ICH-9 AHCI controller (although it may be unstable). The AHCI emulation supports [[Wikipedia:Native_Command_Queuing|NCQ]], so multiple read or write requests can be outstanding at the same time:<br />
$ qemu-system-x86_64 -drive id=disk,file=''disk_image'',if=none -device ich9-ahci,id=ahci -device ide-drive,drive=disk,bus=ahci.0<br />
<br />
See http://www.linux-kvm.org/page/Tuning_KVM for more information.<br />
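The qcow2 L2 cache formula from the list above is easy to script. This sketch uses an example 32 GiB image with the default 64 KiB cluster size:<br />
<br />
```shell
# l2_cache_size = disk_size * 8 / cluster_size  (formula from the tip above)
disk_size=$((32 * 1024 * 1024 * 1024))   # 32 GiB image (example)
cluster_size=$((64 * 1024))              # qcow2 default: 64 KiB
l2_cache=$((disk_size * 8 / cluster_size))
echo "$((l2_cache / 1024 / 1024))M"      # -> 4M, i.e. pass l2-cache-size=4M
```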
<br />
=== Starting QEMU virtual machines on boot ===<br />
<br />
==== With libvirt ====<br />
<br />
If a virtual machine is set up with [[libvirt]], it can be configured with {{ic|virsh autostart}} or through the ''virt-manager'' GUI to start at host boot by going to the Boot Options for the virtual machine and selecting "Start virtual machine on host boot up".<br />
<br />
==== With systemd service ====<br />
<br />
To run QEMU VMs on boot, you can use the following systemd unit and configuration.<br />
<br />
{{hc|/etc/systemd/system/qemu@.service|2=<br />
[Unit]<br />
Description=QEMU virtual machine<br />
<br />
[Service]<br />
Environment="type=system-x86_64" "haltcmd=kill -INT $MAINPID"<br />
EnvironmentFile=/etc/conf.d/qemu.d/%i<br />
ExecStart=/usr/bin/qemu-${type} -name %i -nographic $args<br />
ExecStop=/bin/sh -c ${haltcmd}<br />
TimeoutStopSec=30<br />
KillMode=none<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
}}<br />
<br />
{{Note|According to the {{man|5|systemd.service}} and {{man|5|systemd.kill}} man pages it is necessary to use the {{ic|1=KillMode=none}} option. Otherwise the main qemu process will be killed immediately after the {{ic|ExecStop}} command quits (it simply echoes one string) and your guest system will not be able to shut down correctly.<br />
}}<br />
<br />
Then create per-VM configuration files, named {{ic|/etc/conf.d/qemu.d/''vm_name''}}, with the variables {{ic|type}}, {{ic|args}} and {{ic|haltcmd}} set. Example configs:<br />
<br />
{{hc|/etc/conf.d/qemu.d/one|<nowiki><br />
type="system-x86_64"<br />
<br />
args="-enable-kvm -m 512 -hda /dev/vg0/vm1 -net nic,macaddr=DE:AD:BE:EF:E0:00 \<br />
-net tap,ifname=tap0 -serial telnet:localhost:7000,server,nowait,nodelay \<br />
-monitor telnet:localhost:7100,server,nowait,nodelay -vnc :0"<br />
<br />
haltcmd="echo 'system_powerdown' | nc localhost 7100" # or netcat/ncat<br />
<br />
# You can use other ways to shut down your VM correctly<br />
#haltcmd="ssh powermanager@vm1 sudo poweroff"<br />
</nowiki>}}<br />
<br />
{{hc|/etc/conf.d/qemu.d/two|<nowiki><br />
args="-enable-kvm -m 512 -hda /srv/kvm/vm2.img -net nic,macaddr=DE:AD:BE:EF:E0:01 \<br />
-net tap,ifname=tap1 -serial telnet:localhost:7001,server,nowait,nodelay \<br />
-monitor telnet:localhost:7101,server,nowait,nodelay -vnc :1"<br />
<br />
haltcmd="echo 'system_powerdown' | nc localhost 7101"<br />
</nowiki>}}<br />
<br />
The description of the variables is the following:<br />
* {{ic|type}} - QEMU binary to call. If specified, will be prepended with {{ic|/usr/bin/qemu-}} and that binary will be used to start the VM.<br />
* {{ic|args}} - QEMU command line to start with. Will always be prepended with {{ic|-name ${vm} -nographic}}.<br />
* {{ic|haltcmd}} - Command to shut down a VM safely. In this example, the QEMU monitor is exposed via telnet using {{ic|-monitor telnet:..}} and the VMs are powered off via ACPI by sending {{ic|system_powerdown}} to monitor with the {{ic|nc}} command. You can use SSH or some other ways as well.<br />
<br />
To set which virtual machines will start on boot-up, [[enable]] the {{ic|qemu@''vm_name''.service}} systemd unit.<br />
<br />
=== Mouse integration ===<br />
<br />
To prevent the mouse from being grabbed when clicking on the guest operating system's window, add the options {{ic|-usb -device usb-tablet}}. This means QEMU is able to report the mouse position without having to grab the mouse. This also overrides PS/2 mouse emulation when activated. For example:<br />
<br />
$ qemu-system-x86_64 -hda ''disk_image'' -m 512 -usb -device usb-tablet<br />
<br />
If that does not work, try using the {{ic|-vga qxl}} parameter, and also look at the instructions in [[#Mouse cursor is jittery or erratic]].<br />
<br />
=== Pass-through host USB device ===<br />
<br />
To access a physical USB device connected to the host from the VM, you can use the option {{ic|-usbdevice host:''vendor_id'':''product_id''}}.<br />
<br />
You can find the {{ic|vendor_id}} and {{ic|product_id}} of your device with the {{ic|lsusb}} command.<br />
<br />
Since the default I440FX chipset emulated by QEMU features a single UHCI controller (USB 1), the {{ic|-usbdevice}} option will try to attach your physical device to it. In some cases this may cause issues with newer devices. A possible solution is to emulate the [http://wiki.qemu.org/Features/Q35 ICH9] chipset, which offers an EHCI controller supporting up to 12 devices, using the option {{ic|1=-machine type=q35}}.<br />
<br />
A less invasive solution is to emulate an EHCI (USB 2) or XHCI (USB 3) controller with the option {{ic|1=-device usb-ehci,id=ehci}} or {{ic|1=-device nec-usb-xhci,id=xhci}} respectively and then attach your physical device to it with the option {{ic|1=-device usb-host,..}} as follows:<br />
<br />
-device usb-host,bus='''controller_id'''.0,vendorid=0x'''vendor_id''',productid=0x'''product_id'''<br />
<br />
You can also add the {{ic|1=...,port=''<n>''}} setting to the previous option to specify which physical port of the virtual controller your device should be attached to, which is useful if you want to add multiple USB devices to the VM.<br />
<br />
{{Note|If you encounter permission errors when running QEMU, see [[udev#About udev rules]] for information on how to set permissions of the device.}}<br />
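Putting the pieces above together, the following sketch assembles a full command line attaching one device to an emulated XHCI controller. The vendor and product IDs are placeholders; substitute the values reported by {{ic|lsusb}}. The command is only built and printed here for review:<br />
<br />
```shell
# Placeholder IDs - replace with the output of lsusb for your device.
vendor=046d
product=c52b

# Emulate an XHCI (USB 3) controller and attach the host device to it.
cmd="qemu-system-x86_64 \
 -device nec-usb-xhci,id=xhci \
 -device usb-host,bus=xhci.0,vendorid=0x$vendor,productid=0x$product"

echo "$cmd"
```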
<br />
=== USB redirection with SPICE ===<br />
<br />
When using [[#SPICE]] it is possible to redirect USB devices from the client to the virtual machine without needing to specify them in the QEMU command. It is possible to configure the number of USB slots available for redirected devices (the number of slots will determine the maximum number of devices which can be redirected simultaneously). The main advantage of using SPICE for redirection compared to the previously mentioned {{ic|-usbdevice}} method is the possibility of hot-swapping USB devices after the virtual machine has started, without needing to halt it in order to remove USB devices from the redirection or to add new ones. This method of USB redirection also makes it possible to redirect USB devices over the network, from the client to the server. In summary, it is the most flexible method of using USB devices in a QEMU virtual machine.<br />
<br />
We need to add one EHCI/UHCI controller per available USB redirection slot desired as well as one SPICE redirection channel per slot. For example, adding the following arguments to the QEMU command you use for starting the virtual machine in SPICE mode will start the virtual machine with three available USB slots for redirection:<br />
<br />
{{bc|<nowiki>-device ich9-usb-ehci1,id=usb \<br />
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,multifunction=on \<br />
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2 \<br />
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev1 \<br />
-device usb-redir,chardev=usbredirchardev1,id=usbredirdev1 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev2 \<br />
-device usb-redir,chardev=usbredirchardev2,id=usbredirdev2 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev3 \<br />
-device usb-redir,chardev=usbredirchardev3,id=usbredirdev3</nowiki>}}<br />
<br />
Both {{ic|spicy}} from {{Pkg|spice-gtk}} (''Input > Select USB Devices for redirection'') and {{ic|remote-viewer}} from {{pkg|virt-viewer}} (''File > USB device selection'') support this feature. Please make sure that you have installed the necessary SPICE Guest Tools on the virtual machine for this functionality to work as expected (see the [[#SPICE]] section for more information).<br />
<br />
{{Warning|Keep in mind that when a USB device is redirected from the client, it will not be usable from the client operating system itself until the redirection is stopped. It is especially important never to redirect input devices (namely mouse and keyboard), since it will then be difficult to access the SPICE client menus to revert the situation, because the client will not respond to the input devices after they are redirected to the virtual machine.}}<br />
<br />
=== Enabling KSM ===<br />
<br />
Kernel Samepage Merging (KSM) is a feature of the Linux kernel that allows for an application to register with the kernel to have its pages merged with other processes that also register to have their pages merged. The KSM mechanism allows for guest virtual machines to share pages with each other. In an environment where many of the guest operating systems are similar, this can result in significant memory savings.<br />
<br />
{{Note|Although KSM may reduce memory usage, it may increase CPU usage. Also note some security issues may occur, see [[Wikipedia:Kernel same-page merging]].}}<br />
<br />
To enable KSM:<br />
<br />
# echo 1 > /sys/kernel/mm/ksm/run<br />
<br />
To make it permanent, use [[systemd#Temporary files|systemd's temporary files]]:<br />
<br />
{{hc|/etc/tmpfiles.d/ksm.conf|<br />
w /sys/kernel/mm/ksm/run - - - - 1<br />
}}<br />
<br />
If KSM is running, and there are pages to be merged (i.e. at least two similar VMs are running), then {{ic|/sys/kernel/mm/ksm/pages_shared}} should be non-zero. See https://www.kernel.org/doc/Documentation/vm/ksm.txt for more information.<br />
<br />
{{Tip|An easy way to see how well KSM is performing is to simply print the contents of all the files in that directory: {{bc|$ grep . /sys/kernel/mm/ksm/*}}}}<br />
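As a rough illustration, the counters in that directory can be turned into an estimate of the memory KSM is saving. The values below are made-up examples and the arithmetic assumes 4 KiB pages; on a live system, read the counters from {{ic|/sys/kernel/mm/ksm/}} as shown in the comments:<br />
<br />
```shell
# Rough estimate of memory saved by KSM: roughly
# (pages_sharing - pages_shared) pages have been deduplicated.
page_size=4096          # assumes 4 KiB pages
pages_shared=1000       # e.g. $(cat /sys/kernel/mm/ksm/pages_shared)
pages_sharing=5000      # e.g. $(cat /sys/kernel/mm/ksm/pages_sharing)

saved_kib=$(( (pages_sharing - pages_shared) * page_size / 1024 ))
echo "approx. ${saved_kib} KiB saved"    # -> approx. 16000 KiB saved
```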
<br />
=== Multi-monitor support ===<br />
The Linux QXL driver supports four heads (virtual screens) by default. This can be changed via the {{ic|1=qxl.heads=N}} kernel parameter.<br />
<br />
The default VGA memory size for QXL devices is 16M (VRAM size is 64M). This is not sufficient if you would like to enable two 1920x1200 monitors since that requires 2 × 1920 × 4 (color depth) × 1200 = 17.6 MiB VGA memory. This can be changed by replacing {{ic|-vga qxl}} by {{ic|<nowiki>-vga none -device qxl-vga,vgamem_mb=32</nowiki>}}. If you ever increase vgamem_mb beyond 64M, then you also have to increase the {{ic|vram_size_mb}} option.<br />
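The memory requirement quoted above can be verified with a quick calculation:<br />
<br />
```shell
# VGA memory for two 1920x1200 heads at 4 bytes per pixel (32-bit color).
bytes=$((2 * 1920 * 1200 * 4))
echo "$bytes"                    # -> 18432000 (bytes, about 17.6 MiB)
echo "$((bytes / 1024 / 1024))"  # -> 17 (MiB, rounded down)
```
<br />
This exceeds the 16M default, hence the suggested {{ic|1=vgamem_mb=32}}.<br />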
<br />
=== Copy and paste ===<br />
<br />
One way to share the clipboard between the host and the guest is to enable the SPICE remote desktop protocol and access the guest with a SPICE client.<br />
One needs to follow the steps described in [[#SPICE]]. A guest run this way will support copy and paste with the host.<br />
<br />
=== Windows-specific notes ===<br />
<br />
QEMU can run any version of Windows from Windows 95 through Windows 10.<br />
<br />
It is possible to run [[Windows PE]] in QEMU.<br />
<br />
==== Fast startup ====<br />
{{Note|An administrator account is required to change power settings.}}<br />
For Windows 8 (or later) guests it is better to disable "Turn on fast startup (recommended)" from the Power Options of the Control Panel as explained in the following [https://www.tenforums.com/tutorials/4189-turn-off-fast-startup-windows-10-a.html forum page], as it causes the guest to hang during every other boot.<br />
<br />
Fast Startup may also need to be disabled for changes to the {{ic|-smp}} option to be properly applied.<br />
<br />
==== Remote Desktop Protocol ====<br />
<br />
If you use a MS Windows guest, you might want to use RDP to connect to your guest VM. If you are using a VLAN or are not in the same network as the guest, use:<br />
<br />
$ qemu-system-x86_64 -nographic -net user,hostfwd=tcp::5555-:3389<br />
<br />
Then connect with either {{Pkg|rdesktop}} or {{Pkg|freerdp}} to the guest. For example:<br />
<br />
$ xfreerdp -g 2048x1152 localhost:5555 -z -x lan<br />
<br />
=== Clone Linux system installed on physical equipment ===<br />
<br />
A Linux system installed on physical hardware can be cloned to run in a QEMU virtual machine. See [https://coffeebirthday.wordpress.com/2018/09/14/clone-linux-system-for-qemu-virtual-machine/ Clone Linux system from hardware for QEMU virtual machine].<br />
<br />
== Troubleshooting ==<br />
<br />
=== Mouse cursor is jittery or erratic ===<br />
<br />
If the cursor jumps around the screen uncontrollably, entering this on the terminal before starting QEMU might help:<br />
<br />
$ export SDL_VIDEO_X11_DGAMOUSE=0<br />
<br />
If this helps, you can add this to your {{ic|~/.bashrc}} file.<br />
<br />
=== No visible cursor ===<br />
<br />
Add {{ic|-show-cursor}} to QEMU's options to see a mouse cursor.<br />
<br />
If that still does not work, make sure you have set your display device appropriately, for example: {{ic|-vga qxl}}.<br />
<br />
=== Two different mouse cursors are visible ===<br />
<br />
Apply the tip [[#Mouse integration]].<br />
<br />
=== Keyboard issues when using VNC ===<br />
When using VNC, you might experience keyboard problems described (in gory details) [https://www.berrange.com/posts/2010/07/04/more-than-you-or-i-ever-wanted-to-know-about-virtual-keyboard-handling/ here]. The solution is ''not'' to use the {{ic|-k}} option on QEMU, and to use {{ic|gvncviewer}} from {{Pkg|gtk-vnc}}. See also [http://www.mail-archive.com/libvir-list@redhat.com/msg13340.html this] message posted on libvirt's mailing list.<br />
<br />
=== Keyboard seems broken or the arrow keys do not work ===<br />
<br />
Should you find that some of your keys do not work or "press" the wrong key (in particular, the arrow keys), you likely need to specify your keyboard layout as an option. The keyboard layouts can be found in {{ic|/usr/share/qemu/keymaps}}.<br />
<br />
$ qemu-system-x86_64 -k ''keymap'' ''disk_image''<br />
<br />
=== Guest display stretches on window resize ===<br />
<br />
To restore default window size, press {{ic|Ctrl+Alt+u}}.<br />
<br />
=== ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy ===<br />
<br />
If an error message like this is printed when starting QEMU with {{ic|-enable-kvm}} option:<br />
<br />
ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy<br />
failed to initialize KVM: Device or resource busy<br />
<br />
that means another [[hypervisor]] is currently running. It is not recommended, and often not possible, to run several hypervisors in parallel.<br />
<br />
=== libgfapi error message ===<br />
<br />
The error message displayed at startup:<br />
<br />
Failed to open module: libgfapi.so.0: cannot open shared object file: No such file or directory<br />
<br />
[[Install]] {{pkg|glusterfs}} or ignore the error message, as GlusterFS is an optional dependency.<br />
<br />
=== Kernel panic on LIVE-environments===<br />
<br />
When booting a live environment (or any other system), you may encounter the following:<br />
<br />
[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown block(0,0)<br />
<br />
or some other boot-hindering error (e.g. failure to unpack the initramfs, or failure to start some service).<br />
Try starting the VM with the {{ic|-m ''VALUE''}} switch and an appropriate amount of RAM; if the amount of RAM is too low, you will probably encounter the same issues as without the memory switch.<br />
<br />
=== Windows 7 guest suffers low-quality sound ===<br />
<br />
Using the {{ic|hda}} audio driver for Windows 7 guest may result in low-quality sound. Changing the audio driver to {{ic|ac97}} by passing the {{ic|-soundhw ac97}} arguments to QEMU and installing the AC97 driver from [https://www.realtek.com/en/component/zoo/category/pc-audio-codecs-ac-97-audio-codecs-software Realtek AC'97 Audio Codecs] in the guest may solve the problem. See [https://bugzilla.redhat.com/show_bug.cgi?id=1176761#c16 Red Hat Bugzilla – Bug 1176761] for more information.<br />
<br />
=== Could not access KVM kernel module: Permission denied ===<br />
<br />
If you encounter the following error:<br />
<br />
libvirtError: internal error: process exited while connecting to monitor: Could not access KVM kernel module: Permission denied failed to initialize KVM: Permission denied<br />
<br />
Systemd 234 assigns a dynamic ID to the {{ic|kvm}} group (see [https://bugs.archlinux.org/task/54943 bug report]). To work around this error, edit the file {{ic|/etc/libvirt/qemu.conf}} and change the line:<br />
<br />
group = "78"<br />
<br />
to<br />
<br />
group = "kvm"<br />
<br />
=== "System Thread Exception Not Handled" when booting a Windows VM ===<br />
<br />
Windows 8 or Windows 10 guests may raise a generic compatibility exception at boot, namely "System Thread Exception Not Handled", which tends to be caused by legacy drivers acting strangely on real machines. On KVM machines this issue can generally be solved by setting the CPU model to {{ic|core2duo}}.<br />
<br />
=== Certain Windows games/applications crashing/causing a bluescreen ===<br />
<br />
Occasionally, applications running in the VM may crash unexpectedly, whereas they'd run normally on a physical machine. If, while running {{ic|dmesg -wH}}, you encounter an error mentioning {{ic|MSR}}, the reason for those crashes is that KVM injects a [[wikipedia:General protection fault|General protection fault]] (GPF) when the guest tries to access unsupported [[wikipedia:Model-specific register|Model-specific registers]] (MSRs) - this often results in guest applications/OS crashing. A number of those issues can be solved by passing the {{ic|1=ignore_msrs=1}} option to the KVM module, which will ignore unimplemented MSRs.<br />
<br />
{{hc|/etc/modprobe.d/kvm.conf|2=<br />
...<br />
options kvm ignore_msrs=1<br />
...<br />
}}<br />
<br />
Cases where adding this option might help:<br />
<br />
* GeForce Experience complaining about an unsupported CPU being present.<br />
* StarCraft 2 and L.A. Noire reliably blue-screening Windows 10 with {{ic|KMODE_EXCEPTION_NOT_HANDLED}}. The blue screen information does not identify a driver file in these cases.<br />
<br />
{{Warning|While this is normally safe and some applications might not work without this, silently ignoring unknown MSR accesses could potentially break other software within the VM or other VMs.}}<br />
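Whether the option is active can be checked at runtime through sysfs; this is a sketch assuming the standard {{ic|kvm}} module parameter interface:<br />

```shell
# Read the current setting (requires the kvm module to be loaded);
# "N" or "0" means unsupported MSR accesses still inject a GPF.
if [ -r /sys/module/kvm/parameters/ignore_msrs ]; then
    msrs_state=$(cat /sys/module/kvm/parameters/ignore_msrs)
else
    msrs_state="kvm module not loaded"
fi
echo "$msrs_state"
# To flip the option without rebooting (as root):
# echo 1 > /sys/module/kvm/parameters/ignore_msrs
```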
<br />
=== Applications in the VM experience long delays or take a long time to start ===<br />
<br />
This may be caused by insufficient available entropy in the VM. Consider allowing the guest to access the hosts's entropy pool by adding a [https://wiki.qemu.org/Features/VirtIORNG VirtIO RNG device] to the VM, or by installing an entropy generating daemon such as [[Haveged]].<br />
<br />
Anecdotally, OpenSSH takes a while to start accepting connections under insufficient entropy, without the logs revealing why.<br />
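To check whether entropy starvation is plausible, the host's pool can be inspected; a minimal sketch:<br />

```shell
# Read the host's available entropy estimate; persistently low values
# (a few hundred or less on older kernels) suggest the guest may benefit
# from a virtio-rng device or an entropy daemon.
entropy=$(cat /proc/sys/kernel/random/entropy_avail)
echo "available entropy: $entropy"
```

The VirtIO RNG device itself is added to the guest with the QEMU option {{ic|-device virtio-rng-pci}}.<br />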
<br />
=== Microstuttering ===<br />
<br />
Random tiny pauses can, among other things, break a gaming experience on a Windows virtual machine with graphics card passthrough. Many things can cause this; one known cause is the CPU cores repeatedly going to sleep (as a power-saving feature) and waking up again. This is controlled by [[CPU frequency scaling]]. The default governor may be "powersave"; changing it to "performance" can eliminate the stutters completely.<br />
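The governor in use can be inspected through the standard cpufreq sysfs interface; a sketch (the path may be absent inside virtual machines or containers):<br />

```shell
# List the governors currently in use across all cores.
governors=$(cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor 2>/dev/null | sort -u)
echo "${governors:-cpufreq not available}"
# Switching to "performance" (as root; cpupower is in the cpupower package):
# cpupower frequency-set -g performance
```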
<br />
== See also ==<br />
<br />
* [http://qemu.org Official QEMU website]<br />
* [http://www.linux-kvm.org Official KVM website]<br />
* [http://qemu.weilnetz.de/qemu-doc.html QEMU Emulator User Documentation]<br />
* [https://en.wikibooks.org/wiki/QEMU QEMU Wikibook]<br />
* [http://alien.slackbook.org/dokuwiki/doku.php?id=slackware:qemu Hardware virtualization with QEMU] by AlienBOB (last updated in 2008)<br />
* [http://blog.falconindy.com/articles/build-a-virtual-army.html Building a Virtual Army] by Falconindy<br />
* [http://git.qemu.org/?p=qemu.git;a=tree;f=docs Latest docs]<br />
* [http://qemu.weilnetz.de/ QEMU on Windows]<br />
* [[wikipedia:Qemu|Wikipedia]]<br />
* [[debian:QEMU|Debian Wiki - QEMU]]<br />
* [https://people.gnome.org/~markmc/qemu-networking.html QEMU Networking on gnome.org]<br />
* [http://bsdwiki.reedmedia.net/wiki/networking_qemu_virtual_bsd_systems.html Networking QEMU Virtual BSD Systems]<br />
* [https://www.gnu.org/software/hurd/hurd/running/qemu.html QEMU on gnu.org]<br />
* [https://wiki.freebsd.org/qemu QEMU on FreeBSD as host]<br />
* [https://wiki.mikejung.biz/KVM_/_Xen KVM/QEMU Virtio Tuning and SSD VM Optimization Guide]<br />
* [https://doc.opensuse.org/documentation/leap/virtualization/html/book.virt/part.virt.qemu.html Managing Virtual Machines with QEMU - OpenSUSE documentation]<br />
* [https://www.ibm.com/support/knowledgecenter/en/linuxonibm/liaat/liaatkvm.htm KVM on IBM Knowledge Center]</div>Gimahttps://wiki.archlinux.org/index.php?title=Very_Secure_FTP_Daemon&diff=568432Very Secure FTP Daemon2019-03-11T19:48:32Z<p>Gima: /* Troubleshooting */ Add: "vsftpd: failure to log in with correct password (because of PAM)"</p>
<hr />
<div>[[Category:FTP servers]]<br />
[[cs:Very Secure FTP Daemon]]<br />
[[es:Very Secure FTP Daemon]]<br />
[[it:Very Secure FTP Daemon]]<br />
[[ja:Very Secure FTP Daemon]]<br />
[[ru:Very Secure FTP Daemon]]<br />
[[zh-hans:Very Secure FTP Daemon]]<br />
[https://security.appspot.com/vsftpd.html vsftpd] (''Very Secure FTP Daemon'') is a lightweight, stable and secure FTP server for UNIX-like systems.<br />
<br />
== Installation ==<br />
<br />
[[Install]] {{pkg|vsftpd}} and [[start/enable]] the {{ic|vsftpd.service}} daemon.<br />
<br />
To use [[Wikipedia:xinetd|xinetd]] for monitoring and controlling vsftpd connections, see [[#Using xinetd]].<br />
<br />
== Configuration ==<br />
<br />
Most of the settings in vsftpd are done by editing the file {{ic|/etc/vsftpd.conf}}. The file itself is well-documented, so this section only highlights some important settings you may want to change. For all available options and documentation, see the {{man|5|vsftpd.conf}} man page. Files are served by default from {{ic|/srv/ftp}}.<br />
<br />
{{Template:Out of date|I believe this information is deprecated. libwrap/tcp-wrappers is not dependency of vsftpd and not installed by default. Better to configure firewall rules to limit access.}}<br />
<br />
Enable connections in {{ic|/etc/hosts.allow}}:<br />
<br />
# Allow all connections<br />
vsftpd: ALL<br />
# IP address range<br />
vsftpd: 10.0.0.0/255.255.255.0<br />
<br />
=== Enabling uploading ===<br />
<br />
The {{ic|WRITE_ENABLE}} flag must be set to YES in {{ic|/etc/vsftpd.conf}} in order to allow changes to the filesystem, such as uploading:<br />
<br />
write_enable=YES<br />
<br />
=== Local user login ===<br />
<br />
One must set {{ic|local_enable}} to {{ic|YES}} in {{ic|/etc/vsftpd.conf}} in order to allow users in {{ic|/etc/passwd}} to log in:<br />
<br />
local_enable=YES<br />
<br />
=== Anonymous login ===<br />
<br />
These lines control whether anonymous users can log in. By default, anonymous logins are enabled for download only from {{ic|/srv/ftp}}:<br />
<br />
{{hc|1=/etc/vsftpd.conf|2=<br />
...<br />
# Allow anonymous FTP? (Beware - allowed by default if you comment this out).<br />
anonymous_enable=YES<br />
...<br />
# Uncomment this to allow the anonymous FTP user to upload files. This only<br />
# has an effect if the above global write enable is activated. Also, you will<br />
# obviously need to create a directory writable by the FTP user.<br />
#anon_upload_enable=YES<br />
#<br />
# Uncomment this if you want the anonymous FTP user to be able to create<br />
# new directories.<br />
#anon_mkdir_write_enable=YES<br />
...<br />
}}<br />
<br />
You may also add e.g. the following options (see {{man|5|vsftpd.conf}} for more):<br />
<br />
{{hc|1=/etc/vsftpd.conf|2=<br />
# No password is required for an anonymous login <br />
no_anon_password=YES<br />
<br />
# Maximum transfer rate for an anonymous client in Bytes/second <br />
anon_max_rate=30000<br />
<br />
# Directory to be used for an anonymous login <br />
anon_root=/example/directory/<br />
}}<br />
<br />
=== Chroot jail ===<br />
<br />
A chroot environment that prevents the user from leaving its home directory can be set up. To enable this, add the following lines to {{ic|/etc/vsftpd.conf}}:<br />
<br />
chroot_list_enable=YES<br />
chroot_list_file=/etc/vsftpd.chroot_list<br />
<br />
The {{ic|chroot_list_file}} variable specifies the file which contains users that are jailed.<br />
<br />
For a more restricted environment, specify the line:<br />
<br />
chroot_local_user=YES<br />
<br />
This will make local users jailed by default. In this case, the file specified by {{ic|chroot_list_file}} lists users that are '''not''' in a chroot jail.<br />
<br />
=== Limiting user login ===<br />
<br />
It is possible to prevent users from logging into the FTP server by adding two lines to {{ic|/etc/vsftpd.conf}}:<br />
<br />
userlist_enable=YES<br />
userlist_file=/etc/vsftpd.user_list<br />
<br />
{{ic|userlist_file}} now specifies the file which lists users that are not able to log in.<br />
<br />
If you only want to allow certain users to log in, add the line:<br />
<br />
userlist_deny=NO<br />
<br />
The file specified by {{ic|userlist_file}} will now contain users that are able to log in.<br />
<br />
=== Limiting connections ===<br />
<br />
The data transfer rate, the number of clients, and the number of connections per IP can be limited for local users by adding the following to {{ic|/etc/vsftpd.conf}} (comments must be placed on their own lines, not after a directive):<br />
<br />
 # Maximum data transfer rate in bytes per second<br />
 local_max_rate=1000000<br />
 # Maximum number of clients that may be connected<br />
 max_clients=50<br />
 # Maximum connections per IP<br />
 max_per_ip=2<br />
<br />
=== Using xinetd ===<br />
<br />
Xinetd provides enhanced capabilities for monitoring and controlling connections. It is, however, not necessary for a basic working vsftpd server.<br />
<br />
Installation of vsftpd will add a necessary service file, {{ic|/etc/xinetd.d/vsftpd}}. By default services are disabled. Enable the ftp service:<br />
<br />
{{bc|1=<br />
service ftp<br />
{<br />
socket_type = stream<br />
wait = no<br />
user = root<br />
server = /usr/bin/vsftpd<br />
log_on_success += HOST DURATION<br />
log_on_failure += HOST<br />
disable = no<br />
}<br />
}}<br />
<br />
If you have set the vsftpd daemon to run in standalone mode make the following change in {{ic|/etc/vsftpd.conf}}:<br />
<br />
listen=NO<br />
<br />
Otherwise connection will fail:<br />
<br />
500 OOPS: could not bind listening IPv4 socket<br />
<br />
Instead of starting the vsftpd daemon start and [[enable]] {{ic|xinetd.service}}.<br />
<br />
=== Using SSL/TLS to secure FTP ===<br />
<br />
{{Style|Do not duplicate [[OpenSSL#Certificates]].}}<br />
<br />
First, you need a ''X.509 SSL/TLS'' certificate to use TLS. If you do not have one, you can easily generate a self-signed certificate as follows: <br />
<br />
# cd /etc/ssl/certs<br />
# openssl req -x509 -nodes -days 7300 -newkey rsa:2048 -keyout vsftpd.pem -out vsftpd.pem<br />
# chmod 600 vsftpd.pem<br />
<br />
You will be asked questions about your company, etc. As your certificate is not a trusted one, it does not really matter what is filled in, it will just be used for encryption. To use a trusted certificate, you can get one from a certificate authority like [[Let's Encrypt]]. <br />
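The same certificate can also be generated non-interactively by pre-answering the questions with {{ic|-subj}}; the common name below is a placeholder, and the sketch writes to the current directory rather than {{ic|/etc/ssl/certs}}:<br />

```shell
# Generate key and self-signed certificate into one file, then inspect it.
openssl req -x509 -nodes -days 7300 -newkey rsa:2048 \
    -subj "/CN=ftp.example.org" -keyout vsftpd.pem -out vsftpd.pem
chmod 600 vsftpd.pem
openssl x509 -in vsftpd.pem -noout -subject -dates
```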
<br />
Then, edit the configuration file:<br />
<br />
{{hc|/etc/vsftpd.conf|2=<br />
ssl_enable=YES<br />
<br />
# if you accept anonymous connections, you may want to enable this setting<br />
#allow_anon_ssl=NO<br />
<br />
# by default all non-anonymous logins are forced to use SSL to send and receive password and data; set to NO to allow insecure connections<br />
force_local_logins_ssl=NO<br />
force_local_data_ssl=NO<br />
<br />
# TLS v1 protocol connections are preferred and this mode is enabled by default while SSL v2 and v3 are disabled<br />
# the settings below are the default ones and do not need to be changed unless you specifically need SSL<br />
#ssl_tlsv1=YES<br />
#ssl_sslv2=NO<br />
#ssl_sslv3=NO<br />
<br />
# provide the path of your certificate and of your private key<br />
# note that both can be contained in the same file or in different files<br />
rsa_cert_file=/etc/ssl/certs/vsftpd.pem<br />
rsa_private_key_file=/etc/ssl/certs/vsftpd.pem<br />
<br />
# this setting is set to YES by default and requires all data connections to exhibit session reuse, which proves they know the secret of the control channel.<br />
# this is more secure but is not supported by many FTP clients, set to NO for better compatibility<br />
require_ssl_reuse=NO<br />
}}<br />
<br />
=== Resolve hostname in passive mode ===<br />
<br />
To override the IP address vsftpd advertises in passive mode with the hostname of your server, and have it DNS-resolved at startup, add the following two lines to {{ic|/etc/vsftpd.conf}}:<br />
<br />
pasv_addr_resolve=YES<br />
pasv_address=''yourdomain.org''<br />
<br />
{{Note|<br />
* For dynamic DNS, it is '''not''' necessary to periodically update ''pasv_address'' and restart the server as it can sometimes be read.<br />
* You may not be able to connect in passive mode via LAN anymore, in this case try the active mode instead from the LAN clients.<br />
}}<br />
<br />
=== Port configurations ===<br />
<br />
It may be necessary to adjust the default FTP listening port and the passive mode data ports:<br />
<br />
* For FTP servers exposed to the web, to reduce the likelihood of the server being attacked, the listening port can be changed to something other than the standard port 21. <br />
* To limit the passive mode ports to open ports, a range can be provided.<br />
The ports can be defined in the configuration file as illustrated below:<br />
<br />
{{hc|/etc/vsftpd.conf|2=<br />
listen_port=2211<br />
<br />
pasv_min_port=5000<br />
pasv_max_port=5003<br />
}}<br />
<br />
=== Configuring iptables ===<br />
<br />
Often the server running the FTP daemon is protected by an [[iptables]] firewall. To allow access to the FTP server the corresponding port needs to be opened using something like<br />
<br />
# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT<br />
<br />
This article will not provide any instruction on how to set up iptables but here is an example: [[Simple stateful firewall]].<br />
<br />
There are some kernel modules needed for proper FTP connection handling by iptables, most notably ''nf_conntrack_ftp''. It is needed because FTP uses the given ''listen_port'' (21 by default) for commands only; all data transfer is done over different ports, chosen by the FTP daemon at random for each session (also depending on whether active or passive mode is used). To tell iptables that packets on these ports should be accepted, ''nf_conntrack_ftp'' is required. To load it automatically on boot, create a new file in {{ic|/etc/modules-load.d}}, e.g.:<br />
<br />
# echo nf_conntrack_ftp > /etc/modules-load.d/nf_conntrack_ftp.conf<br />
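Whether the helper module is available for the running kernel can be checked without loading it; a sketch:<br />

```shell
# modinfo only reads module metadata; it does not load the module.
desc=$(modinfo -F description nf_conntrack_ftp 2>/dev/null) \
    || desc="module not found"
echo "${desc:-module not found}"
```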
<br />
If the kernel is >= 4.7, you either need to set ''net.netfilter.nf_conntrack_helper=1'' via ''sysctl'', e.g.<br />
<br />
# echo net.netfilter.nf_conntrack_helper=1 > /etc/sysctl.d/70-conntrack.conf<br />
<br />
or use<br />
<br />
# iptables -A PREROUTING -t raw -p tcp --dport 21 -j CT --helper ftp<br />
<br />
== Tips and tricks ==<br />
<br />
=== PAM with virtual users ===<br />
<br />
Since [[PAM]] no longer provides {{ic|pam_userdb.so}}, another easy method is to use {{AUR|libpam_pwdfile}}. For environments with many users, another option could be {{AUR|pam_mysql}}{{Broken package link|{{aur-mirror|pam_mysql}}}}. This section, however, is limited to explaining how to configure a chroot environment and authentication via {{ic|pam_pwdfile.so}}.<br />
<br />
In this example we create the directory {{ic|vsftpd}}:<br />
<br />
# mkdir /etc/vsftpd<br />
<br />
One option to create and store user names and passwords is to use the Apache generator htpasswd:<br />
<br />
# htpasswd -c /etc/vsftpd/.passwd<br />
<br />
A problem with the above command is that vsftpd might not be able to read the generated MD5 hashed password. If running the same command with the {{ic|-d}} switch (crypt() encryption), the password becomes readable by vsftpd, but the downside is less security and a password limited to 8 characters. OpenSSL can be used to produce an MD5-based BSD password with algorithm 1:<br />
<br />
# openssl passwd -1<br />
<br />
Whichever solution is used, the produced {{ic|/etc/vsftpd/.passwd}} should look like this:<br />
<br />
username1:hashed_password1<br />
username2:hashed_password2<br />
...<br />
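An entry can also be appended non-interactively; a sketch where the username and password are placeholders and the file name stands in for {{ic|/etc/vsftpd/.passwd}}:<br />

```shell
# openssl passwd -1 emits an MD5-crypt hash ($1$...) that pam_pwdfile reads.
hash=$(openssl passwd -1 'secret')
printf 'user1:%s\n' "$hash" >> passwd.example
cat passwd.example
```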
<br />
Next you need to create a PAM service using {{ic|pam_pwdfile.so}} and the generated {{ic|/etc/vsftpd/.passwd}} file. In this example we create a PAM policy for ''vsftpd'' with the following content:<br />
<br />
{{hc|/etc/pam.d/vsftpd|<br />
auth required pam_pwdfile.so pwdfile /etc/vsftpd/.passwd<br />
account required pam_permit.so<br />
}}<br />
<br />
Now it is time to create a home for the virtual users. In this example, {{ic|/srv/ftp}} is chosen to host the data for virtual users, which also reflects the default directory structure of Arch. First create the general user ''virtual'' and make {{ic|/srv/ftp}} its home:<br />
<br />
# useradd -d /srv/ftp virtual<br />
<br />
Make virtual the owner:<br />
<br />
# chown virtual:virtual /srv/ftp<br />
<br />
A basic {{ic|/etc/vsftpd.conf}} with no private folders configured, which will default to the home folder of the virtual user:<br />
<br />
# pointing to the correct PAM service file<br />
pam_service_name=vsftpd<br />
write_enable=YES<br />
hide_ids=YES<br />
listen=YES<br />
connect_from_port_20=YES<br />
anonymous_enable=NO<br />
local_enable=YES<br />
dirmessage_enable=YES<br />
xferlog_enable=YES<br />
chroot_local_user=YES<br />
guest_enable=YES<br />
guest_username=virtual<br />
virtual_use_local_privs=YES<br />
<br />
Some parameters might not be necessary for your own setup. If you want the chroot environment to be writable you will need to add the following to the configuration file:<br />
<br />
allow_writeable_chroot=YES<br />
<br />
Otherwise, vsftpd will complain because of its default security settings if it detects that the chroot is writable.<br />
<br />
[[Start]] {{ic|vsftpd.service}}.<br />
<br />
You should now be able to log in from an FTP client with any of the users and passwords stored in {{ic|/etc/vsftpd/.passwd}}.<br />
<br />
==== Adding private folders for the virtual users ====<br />
<br />
First create directories for users:<br />
<br />
# mkdir /srv/ftp/user1<br />
# mkdir /srv/ftp/user2<br />
# chown virtual:virtual /srv/ftp/user?/<br />
<br />
Then, add the following lines to {{ic|/etc/vsftpd.conf}}:<br />
<br />
local_root=/srv/ftp/$USER<br />
user_sub_token=$USER<br />
<br />
== Troubleshooting ==<br />
<br />
=== vsftpd: Error 421 Service not available, remote server has closed connection ===<br />
<br />
Disabling [[Wikipedia:seccomp|seccomp]] may be necessary to prevent issues with listing directory contents, as reported in {{Bug|50309}}. Try adding the following line to {{ic|/etc/vsftpd.conf}}:<br />
seccomp_sandbox=NO<br />
The issue was fixed according to [https://bugzilla.redhat.com/show_bug.cgi?id=845980 RedHat Bugzilla#845980], but is still reported to cause issues with 4.18 kernels.<br />
<br />
=== vsftpd: refusing to run with writable root inside chroot() ===<br />
<br />
As of vsftpd 2.3.5, the chroot directory that users are locked into must not be writable. This is in order to prevent a security vulnerability.<br />
<br />
The safe way to allow upload is to keep chroot enabled, and configure your FTP directories.<br />
<br />
local_root=/srv/ftp/user<br />
<br />
# mkdir -p /srv/ftp/user/upload<br />
# chmod 550 /srv/ftp/user<br />
# chmod 750 /srv/ftp/user/upload<br />
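The effect of the commands above can be sketched with relative placeholder paths: the chroot root stays read-only while only {{ic|upload/}} is writable:<br />

```shell
# Recreate the recommended layout and show the resulting modes.
mkdir -p srv/ftp/user/upload
chmod 550 srv/ftp/user        # chroot root: no write permission
chmod 750 srv/ftp/user/upload # uploads allowed only here
stat -c '%a %n' srv/ftp/user srv/ftp/user/upload
```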
<br />
If you must allow writing to the chroot root:<br />
<br />
You can put this into your {{ic|/etc/vsftpd.conf}} to workaround this security enhancement (since vsftpd 3.0.0; from [http://www.benscobie.com/fixing-500-oops-vsftpd-refusing-to-run-with-writable-root-inside-chroot/ Fixing 500 OOPS: vsftpd: refusing to run with writable root inside chroot ()]):<br />
<br />
allow_writeable_chroot=YES<br />
<br />
or alternatively:<br />
<br />
Install {{AUR|vsftpd-ext}}{{Broken package link|{{aur-mirror|vsftpd-ext}}}} and set {{ic|1=allow_writable_root=YES}} in the configuration file.<br />
<br />
=== FileZilla Client: GnuTLS error -8 -15 -110 when connecting via SSL ===<br />
<br />
vsftpd tries to display plain-text error messages in the SSL session. In order to debug this, temporarily disable encryption and you will see the correct error message.[http://ramblings.linkerror.com/?p=45] [https://serverfault.com/questions/772494/vsftpd-list-causes-gnutls-error-15]<br />
<br />
=== vsftpd.service fails to run on boot ===<br />
<br />
If you have enabled {{ic|vsftpd.service}} and it fails to run on boot, make sure it is set to load after {{ic|network.target}} in the service file:<br />
<br />
{{hc|/usr/lib/systemd/system/vsftpd.service|2=<br />
[Unit]<br />
Description=vsftpd daemon<br />
After=network.target<br />
}}<br />
<br />
=== Passive mode replies with the local IP address to a remote connection ===<br />
<br />
If vsftpd returns a local address to a remote connection, like:<br />
<br />
227 Entering Passive Mode (192,168,0,19,192,27).<br />
<br />
It may be that the FTP server is behind a NAT router: while some devices monitor FTP connections and dynamically replace the local IP address in packets containing the PASV response with the external IP address, some do not.<br />
<br />
Indicate the external IP address in the vsftpd configuration using:<br />
<br />
pasv_address=''externalIPaddress''<br />
<br />
or alternatively:<br />
<br />
pasv_addr_resolve=YES<br />
pasv_address=''my.domain.name''<br />
<br />
If internal connections are no longer possible after this change, you may need to run two vsftpd instances, one for internal and one for external connections.<br />
<br />
{{Tip|To find out whether the NAT router intercepts the PASV response and replaces the internal IP with an external IP, one can check the server response from the client side in TLS mode. The enciphered packets cannot be identified by the router and are not modified.}}<br />
<br />
=== ipv6 only fails with: 500 OOPS: run two copies of vsftpd for IPv4 and IPv6 ===<br />
<br />
You have most likely commented out the {{ic|listen}} directive:<br />
<br />
# When "listen" directive is enabled, vsftpd runs in standalone mode and<br />
# listens on IPv4 sockets. This directive cannot be used in conjunction<br />
# with the listen_ipv6 directive.<br />
#listen=YES<br />
#<br />
# This directive enables listening on IPv6 sockets. To listen on IPv4 and IPv6<br />
# sockets, you must run two copies of vsftpd with two configuration files.<br />
# Make sure, that one of the listen options is commented !!<br />
listen_ipv6=YES<br />
<br />
instead of explicitly setting:<br />
<br />
# When "listen" directive is enabled, vsftpd runs in standalone mode and<br />
# listens on IPv4 sockets. This directive cannot be used in conjunction<br />
# with the listen_ipv6 directive.<br />
listen=NO<br />
<br />
=== vsftpd connections fail on a machine using nis with: yp_bind_client_create_v2: RPC: Unable to send ===<br />
<br />
As mentioned on the vsftpd FAQ page, "...built-in sandboxing uses network isolation on Linux. This may be interfering with any module that needs to use the network to perform operations or lookups."<br />
<br />
Add this undocumented line to your {{ic|/etc/vsftpd.conf}}:<br />
<br />
isolate_network=NO<br />
<br />
=== vsftpd: failure to log in with correct password (because of PAM) ===<br />
<br />
PAM definition files were changed at the beginning of 2019 to be more strict. To allow local users to log in again, a PAM configuration file for vsftpd needs to be created.<br />
<br />
Create a file {{ic|/etc/pam.d/vsftpd}} with the following contents:<br />
<br />
#%PAM-1.0<br />
<br />
account required pam_listfile.so onerr=fail item=user sense=allow file=/etc/vsftpd.user_list<br />
account required pam_unix.so<br />
auth required pam_unix.so<br />
<br />
This definition allows logging in only for those users that are listed in the {{ic|/etc/vsftpd.user_list}} file (one username per line), after which they go through the normal password authentication.<br />
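Creating the allow-list itself can be sketched as follows; the usernames are placeholders and the file name stands in for {{ic|/etc/vsftpd.user_list}}:<br />

```shell
# One permitted username per line, matching the pam_listfile rule above.
printf '%s\n' alice bob > vsftpd.user_list
grep -c . vsftpd.user_list
```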
<br />
In addition, the service name that vsftpd uses must be changed from the default to {{ic|vsftpd}} by modifying the configuration file {{ic|/etc/vsftpd.conf}}:<br />
<br />
pam_service_name=vsftpd<br />
<br />
== See also ==<br />
<br />
* [http://vsftpd.beasts.org/ vsftpd official homepage]<br />
* [http://vsftpd.beasts.org/vsftpd_conf.html vsftpd.conf man page]<br />
* [https://security.appspot.com/vsftpd/FAQ.txt vsftpd FAQ]</div>Gimahttps://wiki.archlinux.org/index.php?title=QEMU&diff=560450QEMU2018-12-25T19:21:03Z<p>Gima: /* Troubleshooting */ Add notice about insufficient entropy causing unexplained delays for applications inside the VM</p>
<hr />
<div>[[Category:Emulation]]<br />
[[Category:Hypervisors]]<br />
[[de:Qemu]]<br />
[[es:QEMU]]<br />
[[fr:Qemu]]<br />
[[ja:QEMU]]<br />
[[ru:QEMU]]<br />
[[zh-hans:QEMU]]<br />
[[zh-hant:QEMU]]<br />
{{Related articles start}}<br />
{{Related|:Category:Hypervisors}}<br />
{{Related|Libvirt}}<br />
{{Related|QEMU/Guest graphics acceleration}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related articles end}}<br />
<br />
According to the [http://wiki.qemu.org/Main_Page QEMU about page], "QEMU is a generic and open source machine emulator and virtualizer."<br />
<br />
When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your x86 PC). By using dynamic translation, it achieves very good performance.<br />
<br />
QEMU can use other hypervisors like [[Xen]] or [[KVM]] to use CPU extensions ([[Wikipedia:Hardware-assisted virtualization|HVM]]) for virtualization. When used as a virtualizer, QEMU achieves near-native performance by executing the guest code directly on the host CPU.<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|qemu}} package (or {{Pkg|qemu-headless}} for the version without GUI) and any of the optional packages below for your needs:<br />
<br />
* {{Pkg|qemu-arch-extra}} - extra architectures support<br />
* {{Pkg|qemu-block-gluster}} - [[Glusterfs]] block support<br />
* {{Pkg|qemu-block-iscsi}} - [[iSCSI]] block support<br />
* {{Pkg|qemu-block-rbd}} - RBD block support <br />
* {{Pkg|samba}} - [[Samba|SMB/CIFS]] server support<br />
<br />
== Graphical front-ends for QEMU ==<br />
<br />
Unlike other virtualization programs such as [[VirtualBox]] and [[VMware]], QEMU does not provide a GUI to manage virtual machines (other than the window that appears when running a virtual machine), nor does it provide a way to create persistent virtual machines with saved settings. All parameters to run a virtual machine must be specified on the command line at every launch, unless you have created a custom script to start your virtual machine(s).<br />
<br />
[[Libvirt]] provides a convenient way to manage QEMU virtual machines. See [[Libvirt#Client|list of libvirt clients]] for available front-ends.<br />
<br />
Other GUI front-ends for QEMU:<br />
<br />
* {{App|AQEMU|QEMU GUI written in Qt5.|https://github.com/tobimensch/aqemu|{{AUR|aqemu}}}}<br />
* {{App|QtEmu|Graphical user interface for QEMU written in Qt4.|https://qtemu.org/|{{AUR|qtemu}}}}<br />
<br />
== Creating new virtualized system ==<br />
<br />
=== Creating a hard disk image ===<br />
<br />
{{Tip|See the [https://en.wikibooks.org/wiki/QEMU/Images QEMU Wikibook] for more information on QEMU images.}}<br />
<br />
To run QEMU you will need a hard disk image, unless you are booting a live system from CD-ROM or the network (and not doing so to install an operating system to a hard disk image). A hard disk image is a file which stores the contents of the emulated hard disk.<br />
<br />
A hard disk image can be ''raw'', so that it is literally byte-by-byte the same as what the guest sees, and will always use the full capacity of the guest hard drive on the host. This method provides the least I/O overhead, but can waste a lot of space, as space not used by the guest cannot be used for anything else on the host.<br />
<br />
Alternatively, the hard disk image can be in a format such as ''qcow2'' which only allocates space to the image file when the guest operating system actually writes to those sectors on its virtual hard disk. The image appears as the full size to the guest operating system, even though it may take up only a very small amount of space on the host system. This image format also supports QEMU snapshotting functionality (see [[#Creating and managing snapshots via the monitor console]] for details). However, using this format instead of ''raw'' will likely affect performance.<br />
<br />
QEMU provides the {{ic|qemu-img}} command to create hard disk images. For example to create a 4 GB image in the ''raw'' format:<br />
<br />
$ qemu-img create -f raw ''image_file'' 4G<br />
<br />
You may use {{ic|-f qcow2}} to create a ''qcow2'' disk instead.<br />
<br />
{{Note|You can also simply create a ''raw'' image by creating a file of the needed size using {{ic|dd}} or {{ic|fallocate}}.}}<br />
<br />
{{Warning|If you store the hard disk images on a [[Btrfs]] file system, you should consider disabling [[Btrfs#Copy-on-Write (CoW)|Copy-on-Write]] for the directory before creating any images.}}<br />
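As the note above mentions, a sparse ''raw'' image can be created without {{ic|qemu-img}}; a sketch with a placeholder file name:<br />

```shell
# Create a sparse 4 GiB file: no blocks are allocated until written to.
dd if=/dev/zero of=image_file.raw bs=1 count=0 seek=4G
ls -lsh image_file.raw   # first column shows the actual allocated size
```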
<br />
==== Overlay storage images ====<br />
<br />
You can create a storage image once (the 'backing' image) and have QEMU keep mutations to this image in an overlay image. This allows you to revert to a previous state of this storage image. You could revert by creating a new overlay image at the time you wish to revert, based on the original backing image.<br />
<br />
To create an overlay image, issue a command like:<br />
<br />
$ qemu-img create -o backing_file=''img1.raw'',backing_fmt=''raw'' -f ''qcow2'' ''img1.cow''<br />
<br />
After that you can run your QEMU VM as usual (see [[#Running virtualized system]]):<br />
<br />
$ qemu-system-x86_64 ''img1.cow''<br />
<br />
The backing image will then be left intact and mutations to this storage will be recorded in the overlay image file.<br />
<br />
When the path to the backing image changes, repair is required.<br />
<br />
{{Warning|The backing image's absolute filesystem path is stored in the (binary) overlay image file. Changing the backing image's path requires some effort.}}<br />
<br />
Make sure that the original backing image's path still leads to this image. If necessary, make a symbolic link at the original path to the new path. Then issue a command like:<br />
<br />
$ qemu-img rebase -b ''/new/img1.raw'' ''/new/img1.cow''<br />
<br />
At your discretion, you may alternatively perform an 'unsafe' rebase where the old path to the backing image is not checked:<br />
<br />
$ qemu-img rebase -u -b ''/new/img1.raw'' ''/new/img1.cow''<br />
<br />
==== Resizing an image ====<br />
<br />
{{Warning|Resizing an image containing an NTFS boot file system could make the operating system installed on it unbootable. For full explanation and workaround see [http://tjworld.net/wiki/Howto/ResizeQemuDiskImages].}}<br />
<br />
The {{ic|qemu-img}} executable has the {{ic|resize}} option, which enables easy resizing of a hard drive image. It works for both ''raw'' and ''qcow2''. For example, to increase image space by 10 GB, run:<br />
<br />
$ qemu-img resize ''disk_image'' +10G<br />
<br />
After enlarging the disk image, you must use file system and partitioning tools inside the virtual machine to actually begin using the new space. When shrinking a disk image, you must '''first reduce the allocated file systems and partition sizes''' using the file system and partitioning tools inside the virtual machine and then shrink the disk image accordingly, otherwise shrinking the disk image will result in data loss!<br />
<br />
==== Converting an image ====<br />
<br />
You can convert an image to other formats using {{ic|qemu-img convert}}. This example shows how to convert a ''raw'' image to ''qcow2'':<br />
<br />
$ qemu-img convert -f raw -O qcow2 ''input''.img ''output''.qcow2<br />
<br />
This will not remove the original input file.<br />
<br />
=== Preparing the installation media ===<br />
<br />
To install an operating system into your disk image, you need the installation medium (e.g. optical disk, USB-drive, or ISO image) for the operating system. The installation medium should not be mounted because QEMU accesses the media directly.<br />
<br />
{{Tip|If using an optical disk, it is a good idea to first dump the media to a file because this both improves performance and does not require you to have direct access to the devices (that is, you can run QEMU as a regular user without having to change access permissions on the media's device file). For example, if the CD-ROM device node is named {{ic|/dev/cdrom}}, you can dump it to a file with the command: {{bc|1=$ dd if=/dev/cdrom of=''cd_image.iso'' bs=4k}}}}<br />
<br />
=== Installing the operating system===<br />
<br />
This is the first time you will need to start the emulator. To install the operating system on the disk image, you must attach both the disk image and the installation media to the virtual machine, and have it boot from the installation media.<br />
<br />
For example on i386 guests, to install from a bootable ISO file as CD-ROM and a raw disk image:<br />
<br />
$ qemu-system-x86_64 -cdrom ''iso_image'' -boot order=d -drive file=''disk_image'',format=raw<br />
<br />
See {{man|1|qemu}} for more information about loading other media types (such as floppy, disk images or physical drives) and [[#Running virtualized system]] for other useful options.<br />
<br />
After the operating system has finished installing, the QEMU image can be booted directly (see [[#Running virtualized system]]).<br />
<br />
{{Note|By default only 128 MB of memory is assigned to the machine. The amount of memory can be adjusted with the {{ic|-m}} switch, for example {{ic|-m 512M}} or {{ic|-m 2G}}.}}<br />
<br />
{{Tip|<br />
* Instead of specifying {{ic|1=-boot order=x}}, some users may feel more comfortable using a boot menu: {{ic|1=-boot menu=on}}, at least during configuration and experimentation.<br />
* If you need to replace floppies or CDs as part of the installation process, you can use the QEMU machine monitor (press {{ic|Ctrl+Alt+2}} in the virtual machine's window) to remove and attach storage devices to a virtual machine. Type {{ic|info block}} to see the block devices, and use the {{ic|change}} command to swap out a device. Press {{ic|Ctrl+Alt+1}} to go back to the virtual machine.}}<br />
<br />
== Running virtualized system ==<br />
<br />
{{ic|qemu-system-*}} binaries (for example {{ic|qemu-system-i386}} or {{ic|qemu-system-x86_64}}, depending on guest's architecture) are used to run the virtualized guest. The usage is:<br />
<br />
$ qemu-system-x86_64 ''options'' ''disk_image''<br />
<br />
Options are the same for all {{ic|qemu-system-*}} binaries, see {{man|1|qemu}} for documentation of all options.<br />
<br />
By default, QEMU will show the virtual machine's video output in a window. One thing to keep in mind: when you click inside the QEMU window, the mouse pointer is grabbed. To release it, press {{ic|Ctrl+Alt+g}}.<br />
<br />
{{Warning|QEMU should never be run as root. If you must launch it in a script as root, you should use the {{ic|-runas}} option to make QEMU drop root privileges.}}<br />
<br />
=== Enabling KVM ===<br />
<br />
KVM must be supported by your processor and kernel, and necessary [[kernel modules]] must be loaded. See [[KVM]] for more information.<br />
<br />
To start QEMU in KVM mode, append {{ic|-enable-kvm}} to the additional start options. To check if KVM is enabled for a running VM, enter the QEMU [https://en.wikibooks.org/wiki/QEMU/Monitor Monitor] using {{ic|Ctrl+Alt+Shift+2}}, and type {{ic|info kvm}}.<br />
<br />
{{Note|<br />
* The argument {{ic|1=accel=kvm}} of the {{ic|-machine}} option is equivalent to the {{ic|-enable-kvm}} option.<br />
* If you start your VM with a GUI tool and experience very bad performance, you should check for proper KVM support, as QEMU may be falling back to software emulation.<br />
* KVM needs to be enabled in order to start Windows 7 and Windows 8 properly without a ''blue screen''.<br />
}}<br />
<br />
=== Enabling IOMMU (Intel VT-d/AMD-Vi) support ===<br />
<br />
First enable IOMMU, see [[PCI passthrough via OVMF#Setting up IOMMU]].<br />
<br />
Add {{ic|-device intel-iommu}} to create the IOMMU device:<br />
<br />
$ qemu-system-x86_64 '''-enable-kvm -machine q35,accel=kvm -device intel-iommu''' -cpu host ..<br />
<br />
{{Note|<br />
On Intel CPU-based systems, creating an IOMMU device in a QEMU guest with {{ic|-device intel-iommu}} will disable PCI passthrough with an error like: {{bc|Device at bus pcie.0 addr 09.0 requires iommu notifier which is currently not supported by intel-iommu emulation}} While adding the kernel parameter {{ic|1=intel_iommu=on}} is still needed for remapping IO (e.g. [[PCI passthrough via OVMF#Isolating the GPU|PCI passthrough with vfio-pci]]), {{ic|-device intel-iommu}} should not be set if PCI passthrough is required.<br />
}}<br />
<br />
== Moving data between host and guest OS ==<br />
<br />
=== Network ===<br />
<br />
Data can be shared between the host and guest OS using any network protocol that can transfer files, such as [[NFS]], [[SMB]], [[Wikipedia:Network Block Device|NBD]], HTTP, [[Very Secure FTP Daemon|FTP]], or [[SSH]], provided that you have set up the network appropriately and enabled the appropriate services.<br />
<br />
The default user-mode networking allows the guest to access the host OS at the IP address 10.0.2.2. Any servers that you are running on your host OS, such as a SSH server or SMB server, will be accessible at this IP address. So on the guests, you can mount directories exported on the host via [[SMB]] or [[NFS]], or you can access the host's HTTP server, etc.<br />
It will not be possible for the host OS to access servers running on the guest OS, but this can be done with other network configurations (see [[#Tap networking with QEMU]]).<br />
<br />
=== QEMU's port forwarding ===<br />
<br />
QEMU can forward ports from the host to the guest to enable e.g. connecting from the host to a SSH-server running on the guest.<br />
<br />
For example, to bind port 10022 on the host with port 22 (SSH) on the guest, start QEMU with a command like:<br />
<br />
$ qemu-system-x86_64 ''disk_image'' -net nic -net user,hostfwd=''tcp::10022-:22''<br />
<br />
Make sure that sshd is running on the guest, then connect with:<br />
<br />
$ ssh ''guest-user''@localhost -p10022<br />
<br />
=== QEMU's built-in SMB server ===<br />
<br />
QEMU's documentation says it has a "built-in" SMB server, but actually it just starts up [[Samba]] with an automatically generated {{ic|smb.conf}} file located at {{ic|/tmp/qemu-smb.''pid''-0/smb.conf}} and makes it accessible to the guest at a different IP address (10.0.2.4 by default). This only works for user networking, and this is not necessarily very useful since the guest can also access the normal [[Samba]] service on the host if you have set up shares on it.<br />
<br />
To enable this feature, start QEMU with a command like:<br />
<br />
$ qemu-system-x86_64 ''disk_image'' -net nic -net user,smb=''shared_dir_path''<br />
<br />
where {{ic|''shared_dir_path''}} is a directory that you want to share between the guest and host.<br />
<br />
Then, in the guest, you will be able to access the shared directory on the host 10.0.2.4 with the share name "qemu". For example, in Windows Explorer you would go to {{ic|\\10.0.2.4\qemu}}.<br />
<br />
{{Note|<br />
* If you specify the share option multiple times, e.g. {{ic|1=-net user,smb=''shared_dir_path1'' -net user,smb=''shared_dir_path2''}} or {{ic|1=-net user,smb=''shared_dir_path1'',smb=''shared_dir_path2''}}, only the last defined share will be used.<br />
* If you cannot access the shared folder and the guest system is Windows, check that the [http://ecross.mvps.org/howto/enable-netbios-over-tcp-ip-with-windows.htm NetBIOS protocol is enabled] and that a firewall does not block [http://technet.microsoft.com/en-us/library/cc940063.aspx ports] used by the NetBIOS protocol.<br />
* If you cannot access the shared folder and the guest system is Windows 10 Enterprise or Education or Windows Server 2016, [https://support.microsoft.com/en-us/help/4046019 enable guest access].<br />
}}<br />
<br />
=== Mounting a partition inside a raw disk image ===<br />
<br />
When the virtual machine is not running, it is possible to mount partitions that are inside a raw disk image file by setting them up as loopback devices. This does not work with disk images in special formats, such as qcow2, although those can be mounted using {{ic|qemu-nbd}}.<br />
<br />
{{Warning|You must make sure to unmount the partitions before running the virtual machine again. Otherwise, data corruption is very likely to occur.}}<br />
<br />
==== With manually specifying byte offset ====<br />
<br />
One way to mount a disk image partition is to mount the disk image at a certain offset using a command like the following:<br />
<br />
# mount -o loop,offset=32256 ''disk_image'' ''mountpoint''<br />
<br />
The {{ic|1=offset=32256}} option is actually passed to the {{ic|losetup}} program to set up a loopback device that starts at byte offset 32256 of the file and continues to the end. This loopback device is then mounted. You may also use the {{ic|sizelimit}} option to specify the exact size of the partition, but this is usually unnecessary.<br />
<br />
Depending on your disk image, the needed partition may not start at offset 32256. Run {{ic|fdisk -l ''disk_image''}} to see the partitions in the image. fdisk gives the start and end offsets in 512-byte sectors, so multiply by 512 to get the correct offset to pass to {{ic|mount}}.<br />
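For example, if {{ic|fdisk -l}} reported that a partition starts at sector 2048 (a hypothetical but common value), the byte offset for {{ic|mount}} could be computed with shell arithmetic:<br />
<br />
```shell
# Hypothetical start sector taken from `fdisk -l disk_image` output
start_sector=2048
# fdisk reports offsets in 512-byte sectors; mount's offset= option expects bytes
offset=$((start_sector * 512))
echo "$offset"
```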
<br />
==== With loop module autodetecting partitions ====<br />
<br />
The Linux loop driver actually supports partitions in loopback devices, but it is disabled by default. To enable it, do the following:<br />
<br />
* Get rid of all your loopback devices (unmount all mounted images, etc.).<br />
* [[Kernel_modules#Manual_module_handling|Unload]] the {{ic|loop}} kernel module, and load it with the {{ic|1=max_part=15}} parameter set. Additionally, the maximum number of loop devices can be controlled with the {{ic|max_loop}} parameter.<br />
<br />
{{Tip|You can put an entry in {{ic|/etc/modprobe.d}} to load the loop module with {{ic|1=max_part=15}} every time, or you can put {{ic|1=loop.max_part=15}} on the kernel command-line, depending on whether you have the {{ic|loop.ko}} module built into your kernel or not.}}<br />
<br />
Set up your image as a loopback device:<br />
<br />
# losetup -f -P ''disk_image''<br />
<br />
Then, if the device created was {{ic|/dev/loop0}}, additional devices {{ic|/dev/loop0pX}} will have been automatically created, where X is the number of the partition. These partition loopback devices can be mounted directly. For example:<br />
<br />
# mount /dev/loop0p1 ''mountpoint''<br />
<br />
To mount the disk image with ''udisksctl'', see [[Udisks#Mount loop devices]].<br />
<br />
==== With kpartx ====<br />
<br />
'''kpartx''' from the {{AUR|multipath-tools}} package can read a partition table on a device and create a new device for each partition. For example:<br />
# kpartx -a ''disk_image''<br />
<br />
This will set up the loopback device and create the necessary partition devices in {{ic|/dev/mapper/}}.<br />
<br />
=== Mounting a partition inside a qcow2 image ===<br />
<br />
You may mount a partition inside a qcow2 image using {{ic|qemu-nbd}}. See [http://en.wikibooks.org/wiki/QEMU/Images#Mounting_an_image_on_the_host Wikibooks].<br />
<br />
=== Using any real partition as the single primary partition of a hard disk image ===<br />
<br />
Sometimes, you may wish to use one of your system partitions from within QEMU. Using a raw partition for a virtual machine will improve performance, as the read and write operations do not go through the file system layer on the physical host. Such a partition also provides a way to share data between the host and guest.<br />
<br />
In Arch Linux, device files for raw partitions are, by default, owned by ''root'' and the ''disk'' group. If you would like to have a non-root user be able to read and write to a raw partition, you need to change the owner of the partition's device file to that user.<br />
<br />
{{Warning|<br />
* Although it is possible, it is not recommended to allow virtual machines to alter critical data on the host system, such as the root partition.<br />
* You must not mount a file system on a partition read-write on both the host and the guest at the same time. Otherwise, data corruption will result.<br />
}}<br />
<br />
After doing so, you can attach the partition to a QEMU virtual machine as a virtual disk.<br />
<br />
However, things are a little more complicated if you want to have the ''entire'' virtual machine contained in a partition. In that case, there would be no disk image file to actually boot the virtual machine since you cannot install a bootloader to a partition that is itself formatted as a file system and not as a partitioned device with an MBR. Such a virtual machine can be booted either by specifying the [[kernel]] and [[initrd]] manually, or by simulating a disk with an MBR by using linear [[RAID]].<br />
<br />
==== By specifying kernel and initrd manually ====<br />
<br />
QEMU supports loading [[Kernels|Linux kernels]] and [[initramfs|init ramdisks]] directly, thereby circumventing bootloaders such as [[GRUB]]. It then can be launched with the physical partition containing the root file system as the virtual disk, which will not appear to be partitioned. This is done by issuing a command similar to the following:<br />
<br />
{{Note|In this example, it is the '''host's''' images that are being used, not the guest's. If you wish to use the guest's images, either mount {{ic|/dev/sda3}} read-only (to protect the file system from the host) and specify the {{ic|/full/path/to/images}} or use some kexec hackery in the guest to reload the guest's kernel (extends boot time). }}<br />
<br />
$ qemu-system-x86_64 -kernel /boot/vmlinuz-linux -initrd /boot/initramfs-linux.img -append root=/dev/sda /dev/sda3<br />
<br />
In the above example, the physical partition being used for the guest's root file system is {{ic|/dev/sda3}} on the host, but it shows up as {{ic|/dev/sda}} on the guest.<br />
<br />
You may, of course, specify any kernel and initrd that you want, and not just the ones that come with Arch Linux.<br />
<br />
When there are multiple [[kernel parameters]] to be passed to the {{ic|-append}} option, they need to be quoted using single or double quotes. For example:<br />
<br />
... -append 'root=/dev/sda1 console=ttyS0'<br />
<br />
==== Simulate virtual disk with MBR using linear RAID ====<br />
<br />
A more complicated way to have a virtual machine use a physical partition, while keeping that partition formatted as a file system and not just having the guest partition the partition as if it were a disk, is to simulate an MBR for it so that it can boot using a bootloader such as GRUB.<br />
<br />
You can do this using software [[RAID]] in linear mode (you need the {{ic|linear.ko}} kernel driver) and a loopback device: the trick is to dynamically prepend a master boot record (MBR) to the real partition you wish to embed in a QEMU raw disk image.<br />
<br />
Suppose you have a plain, unmounted {{ic|/dev/hdaN}} partition with some file system on it you wish to make part of a QEMU disk image. First, you create some small file to hold the MBR:<br />
<br />
$ dd if=/dev/zero of=''/path/to/mbr'' count=32<br />
<br />
Here, a 16 KB (32 * 512 bytes) file is created. It is important not to make it too small (even though the MBR only needs a single 512-byte block), since the smaller it is, the smaller the chunk size of the software RAID device will have to be, which could have an impact on performance. Then, set up a loopback device for the MBR file:<br />
<br />
# losetup -f ''/path/to/mbr''<br />
<br />
Let us assume the resulting device is {{ic|/dev/loop0}} (i.e. no other loopback devices were already in use). The next step is to create the "merged" MBR + {{ic|/dev/hdaN}} disk image using software RAID:<br />
<br />
# modprobe linear<br />
# mdadm --build --verbose /dev/md0 --chunk=16 --level=linear --raid-devices=2 /dev/loop0 /dev/hda''N''<br />
<br />
The resulting {{ic|/dev/md0}} is what you will use as a QEMU raw disk image (do not forget to set the permissions so that the emulator can access it). The last (and somewhat tricky) step is to set the disk configuration (disk geometry and partition table) so that the primary partition start point in the MBR matches that of {{ic|/dev/hda''N''}} inside {{ic|/dev/md0}} (an offset of exactly 16 * 512 = 16384 bytes in this example). Do this using {{ic|fdisk}} on the host machine, not in the emulator: the default raw disk detection routine from QEMU often results in non-kilobyte-roundable offsets (such as 31.5 KB, as in the previous section) that cannot be managed by the software RAID code. Hence, from the host:<br />
<br />
# fdisk /dev/md0<br />
<br />
Press {{ic|X}} to enter the expert menu. Set number of 's'ectors per track so that the size of one cylinder matches the size of your MBR file. For two heads and a sector size of 512, the number of sectors per track should be 16, so we get cylinders of size 2x16x512=16k.<br />
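The cylinder-size arithmetic can be double-checked with a quick shell computation, using the head and sectors-per-track values from the example above:<br />
<br />
```shell
heads=2
sectors_per_track=16
sector_size=512
# Cylinder size must match the 16 KB MBR file created earlier (16384 bytes)
echo $((heads * sectors_per_track * sector_size))
```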
<br />
Now, press {{ic|R}} to return to the main menu.<br />
<br />
Press {{ic|P}} and check that the cylinder size is now 16k.<br />
<br />
Now, create a single primary partition corresponding to {{ic|/dev/hda''N''}}. It should start at cylinder 2 and end at the end of the disk (note that the number of cylinders now differs from what it was when you entered fdisk).<br />
<br />
Finally, 'w'rite the result to the file: you are done. You now have a partition you can mount directly from your host, as well as part of a QEMU disk image:<br />
<br />
$ qemu-system-x86_64 -hdc /dev/md0 ''[...]''<br />
<br />
You can, of course, safely set any bootloader on this disk image using QEMU, provided the original {{ic|/dev/hda''N''}} partition contains the necessary tools.<br />
<br />
===== Alternative: use nbd-server =====<br />
Instead of linear RAID, you may use {{ic|nbd-server}} (from the {{pkg|nbd}} package) to create an MBR wrapper for QEMU.<br />
<br />
Assuming you have already set up your MBR wrapper file like above, rename it to {{ic|wrapper.img.0}}. Then create a symbolic link named {{ic|wrapper.img.1}} in the same directory, pointing to your partition. Then put the following script in the same directory:<br />
<br />
#!/bin/sh<br />
dir="$(realpath "$(dirname "$0")")"<br />
cat >wrapper.conf <<EOF<br />
[generic]<br />
allowlist = true<br />
listenaddr = 127.713705<br />
port = 10809<br />
<br />
[wrap]<br />
exportname = $dir/wrapper.img<br />
multifile = true<br />
EOF<br />
<br />
nbd-server \<br />
-C wrapper.conf \<br />
-p wrapper.pid \<br />
"$@"<br />
<br />
The {{ic|.0}} and {{ic|.1}} suffixes are essential; the rest can be changed. After running the above script (which you may need to do as root to make sure nbd-server is able to access the partition), you can launch QEMU with:<br />
<br />
qemu-system-x86_64 -drive file=nbd:127.713705:10809:exportname=wrap ''[...]''<br />
<br />
== Networking ==<br />
<br />
{{Style|Network topologies (sections [[#Host-only networking]], [[#Internal networking]] and info spread out across other sections) should not be described alongside the various virtual interfaces implementations, such as [[#User-mode networking]], [[#Tap networking with QEMU]], [[#Networking with VDE2]].}}<br />
<br />
The performance of virtual networking should be better with tap devices and bridges than with user-mode networking or vde because tap devices and bridges are implemented in-kernel.<br />
<br />
In addition, networking performance can be improved by assigning virtual machines a [http://wiki.libvirt.org/page/Virtio virtio] network device rather than the default emulation of an e1000 NIC. See [[#Installing virtio drivers]] for more information.<br />
<br />
=== Link-level address caveat ===<br />
<br />
When given the {{ic|-net nic}} argument, QEMU will, by default, assign the virtual machine a network interface with the link-level address {{ic|52:54:00:12:34:56}}. However, when using bridged networking with multiple virtual machines, it is essential that each virtual machine has a unique link-level (MAC) address on the virtual machine side of the tap device. Otherwise, the bridge will not work correctly, because it will receive packets from multiple sources that have the same link-level address. This problem occurs even if the tap devices themselves have unique link-level addresses because the source link-level address is not rewritten as packets pass through the tap device.<br />
<br />
Make sure that each virtual machine has a unique link-level address that starts with {{ic|52:54:}}. Use the following option, replacing each ''X'' with an arbitrary hexadecimal digit:<br />
<br />
$ qemu-system-x86_64 -net nic,macaddr=52:54:''XX:XX:XX:XX'' -net vde ''disk_image''<br />
<br />
Generating unique link-level addresses can be done in several ways:<br />
<br />
<ol><br />
<li>Manually specify a unique link-level address for each NIC. The benefit is that the DHCP server will assign the same IP address each time the virtual machine is run, but this is impractical for a large number of virtual machines.<br />
</li><br />
<li>Generate a random link-level address each time the virtual machine is run. This gives practically zero probability of collisions, but the downside is that the DHCP server will assign a different IP address each time. You can use the following command in a script to generate a random link-level address in a {{ic|macaddr}} variable:<br />
<br />
{{bc|1=<br />
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))<br />
qemu-system-x86_64 -net nic,macaddr="$macaddr" -net vde ''disk_image''<br />
}}<br />
<br />
</li><br />
<li>Use the following script {{ic|qemu-mac-hasher.py}} to generate the link-level address from the virtual machine name using a hashing function. Given that the names of virtual machines are unique, this method combines the benefits of the aforementioned methods: it generates the same link-level address each time the script is run, yet it preserves the practically zero probability of collisions.<br />
<br />
{{hc|qemu-mac-hasher.py|<nowiki><br />
#!/usr/bin/env python<br />
<br />
import sys<br />
import zlib<br />
<br />
if len(sys.argv) != 2:<br />
print("usage: %s <VM Name>" % sys.argv[0])<br />
sys.exit(1)<br />
<br />
crc = zlib.crc32(sys.argv[1].encode("utf-8")) & 0xffffffff<br />
crc = format(crc, "08x")<br />
print("52:54:%s%s:%s%s:%s%s:%s%s" % tuple(crc))<br />
</nowiki>}}<br />
<br />
In a script, you can use for example:<br />
<br />
vm_name="''VM Name''"<br />
qemu-system-x86_64 -name "$vm_name" -net nic,macaddr=$(qemu-mac-hasher.py "$vm_name") -net vde ''disk_image''<br />
</li><br />
</ol><br />
<br />
=== User-mode networking ===<br />
<br />
By default, without any {{ic|-netdev}} arguments, QEMU will use user-mode networking with a built-in DHCP server. Your virtual machines will be assigned an IP address when they run their DHCP client, and they will be able to access the physical host's network through IP masquerading done by QEMU.<br />
<br />
{{warning|This only works with the TCP and UDP protocols, so ICMP, including {{ic|ping}}, will not work. Do not use {{ic|ping}} to test network connectivity.}}<br />
<br />
This default configuration allows your virtual machines to easily access the Internet, provided that the host is connected to it, but the virtual machines will not be directly visible on the external network, nor will virtual machines be able to talk to each other if you start up more than one concurrently.<br />
<br />
QEMU's user-mode networking can offer more capabilities such as built-in TFTP or SMB servers, redirecting host ports to the guest (for example to allow SSH connections to the guest) or attaching guests to VLANs so that they can talk to each other. See the QEMU documentation on the {{ic|-net user}} flag for more details.<br />
<br />
However, user-mode networking has limitations in both utility and performance. More advanced network configurations require the use of tap devices or other methods.<br />
<br />
{{Note|If the host system uses [[systemd-networkd]], make sure to symlink the {{ic|/etc/resolv.conf}} file as described in [[systemd-networkd#Required services and setup]], otherwise the DNS lookup in the guest system will not work.}}<br />
<br />
=== Tap networking with QEMU ===<br />
<br />
[[wikipedia:TUN/TAP|Tap devices]] are a Linux kernel feature that allows you to create virtual network interfaces that appear as real network interfaces. Packets sent to a tap interface are delivered to a userspace program, such as QEMU, that has bound itself to the interface.<br />
<br />
QEMU can use tap networking for a virtual machine so that packets sent to the tap interface will be sent to the virtual machine and appear as coming from a network interface (usually an Ethernet interface) in the virtual machine. Conversely, everything that the virtual machine sends through its network interface will appear on the tap interface.<br />
<br />
Tap devices are supported by the Linux bridge drivers, so it is possible to bridge together tap devices with each other and possibly with other host interfaces such as {{ic|eth0}}. This is desirable if you want your virtual machines to be able to talk to each other, or if you want other machines on your LAN to be able to talk to the virtual machines.<br />
<br />
{{Warning|If you bridge together a tap device and some host interface, such as {{ic|eth0}}, your virtual machines will appear directly on the external network, which will expose them to possible attack. Depending on what resources your virtual machines have access to, you may need to take all the [[Firewalls|precautions]] you normally would take in securing a computer to secure your virtual machines. If the risk is too great, if the virtual machines have few resources to spare, or if you set up multiple virtual machines, a better solution might be to use [[#Host-only networking|host-only networking]] and set up NAT. In this case you only need one firewall on the host instead of multiple firewalls for each guest.}}<br />
<br />
As indicated in the user-mode networking section, tap devices offer higher networking performance than user-mode. If the guest OS supports the virtio network driver, networking performance will increase considerably as well. Assuming the use of the {{ic|tap0}} device, that the virtio driver is used on the guest, and that no scripts are used to help start/stop networking, the relevant part of the QEMU command would be:<br />
<br />
-device virtio-net,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no<br />
<br />
If you are already using a tap device with the virtio networking driver, you can boost networking performance further by enabling vhost:<br />
<br />
-device virtio-net,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no,vhost=on<br />
<br />
See http://www.linux-kvm.com/content/how-maximize-virtio-net-performance-vhost-net for more information.<br />
<br />
==== Host-only networking ====<br />
<br />
If the bridge is given an IP address and traffic destined for it is allowed, but no real interface (e.g. {{ic|eth0}}) is connected to the bridge, then the virtual machines will be able to talk to each other and the host system. However, they will not be able to talk to anything on the external network unless you set up IP masquerading on the physical host. This configuration is called ''host-only networking'' by other virtualization software such as [[VirtualBox]].<br />
<br />
{{Tip|<br />
* If you want to set up IP masquerading, e.g. NAT for virtual machines, see the [[Internet sharing#Enable NAT]] page.<br />
* See [[Network bridge]] for information on creating bridge.<br />
* You may want to have a DHCP server running on the bridge interface to service the virtual network. For example, to use the {{ic|172.20.0.1/16}} subnet with [[dnsmasq]] as the DHCP server:<br />
# ip addr add 172.20.0.1/16 dev br0<br />
# ip link set br0 up<br />
# dnsmasq --interface&#61;br0 --bind-interfaces --dhcp-range&#61;172.20.0.2,172.20.255.254<br />
}}<br />
<br />
==== Internal networking ====<br />
<br />
If you do not give the bridge an IP address and add an [[iptables]] rule to drop all traffic to the bridge in the INPUT chain, then the virtual machines will be able to talk to each other, but not to the physical host or to the outside network. This configuration is called ''internal networking'' by other virtualization software such as [[VirtualBox]]. You will need to either assign static IP addresses to the virtual machines or run a DHCP server on one of them.<br />
<br />
By default, iptables drops packets forwarded across a bridge. You may need to add an iptables rule like the following to allow packets in a bridged network:<br />
<br />
# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT<br />
<br />
==== Bridged networking using qemu-bridge-helper ====<br />
<br />
{{Note|This method is available since QEMU 1.1, see http://wiki.qemu.org/Features/HelperNetworking.}}<br />
<br />
This method does not require a start-up script and readily accommodates multiple taps and multiple bridges. It uses the {{ic|/usr/lib/qemu/qemu-bridge-helper}} binary, which allows creating tap devices on an existing bridge.<br />
<br />
{{Tip|See [[Network bridge]] for information on creating bridge.}}<br />
<br />
First, create a configuration file containing the names of all bridges to be used by QEMU:<br />
<br />
{{hc|/etc/qemu/bridge.conf|<br />
allow ''bridge0''<br />
allow ''bridge1''<br />
...}}<br />
<br />
Now start the VM. The most basic usage would be:<br />
<br />
$ qemu-system-x86_64 -net nic -net bridge,br=''bridge0'' ''[...]''<br />
<br />
With multiple taps, the most basic usage requires specifying the VLAN for all additional NICs:<br />
<br />
$ qemu-system-x86_64 -net nic -net bridge,br=''bridge0'' -net nic,vlan=1 -net bridge,vlan=1,br=''bridge1'' ''[...]''<br />
<br />
==== Creating bridge manually ====<br />
<br />
{{Style|This section needs serious cleanup and may contain out-of-date information.}}<br />
<br />
{{Tip|Since QEMU 1.1, the [http://wiki.qemu.org/Features/HelperNetworking network bridge helper] can set tun/tap up for you without the need for additional scripting. See [[#Bridged networking using qemu-bridge-helper]].}}<br />
<br />
The following describes how to bridge a virtual machine to a host interface such as {{ic|eth0}}, which is probably the most common configuration. This configuration makes it appear that the virtual machine is located directly on the external network, on the same Ethernet segment as the physical host machine.<br />
<br />
We will replace the normal Ethernet adapter with a bridge adapter and bind the normal Ethernet adapter to it.<br />
<br />
* Install {{Pkg|bridge-utils}}, which provides {{ic|brctl}} to manipulate bridges.<br />
<br />
* Enable IPv4 forwarding:<br />
# sysctl net.ipv4.ip_forward=1<br />
<br />
To make the change permanent, change {{ic|1=net.ipv4.ip_forward = 0}} to {{ic|1=net.ipv4.ip_forward = 1}} in {{ic|/etc/sysctl.d/99-sysctl.conf}}.<br />
<br />
* Load the {{ic|tun}} module and configure it to be loaded on boot. See [[Kernel modules]] for details.<br />
<br />
* Now create the bridge. See [[Bridge with netctl]] for details. Remember to name your bridge as {{ic|br0}}, or change the scripts below to your bridge's name.<br />
<br />
* Create the script that QEMU uses to bring up the tap adapter with {{ic|root:kvm}} 750 permissions:<br />
{{hc|/etc/qemu-ifup|<nowiki><br />
#!/bin/sh<br />
<br />
echo "Executing /etc/qemu-ifup"<br />
echo "Bringing up $1 for bridged mode..."<br />
sudo /usr/bin/ip link set $1 up promisc on<br />
echo "Adding $1 to br0..."<br />
sudo /usr/bin/brctl addif br0 $1<br />
sleep 2<br />
</nowiki>}}<br />
<br />
* Create the script that QEMU uses to bring down the tap adapter in {{ic|/etc/qemu-ifdown}} with {{ic|root:kvm}} 750 permissions:<br />
{{hc|/etc/qemu-ifdown|<nowiki><br />
#!/bin/sh<br />
<br />
echo "Executing /etc/qemu-ifdown"<br />
sudo /usr/bin/ip link set $1 down<br />
sudo /usr/bin/brctl delif br0 $1<br />
sudo /usr/bin/ip link delete dev $1<br />
</nowiki>}}<br />
<br />
* Use {{ic|visudo}} to add the following to your {{ic|sudoers}} file:<br />
{{bc|<nowiki><br />
Cmnd_Alias QEMU=/usr/bin/ip,/usr/bin/modprobe,/usr/bin/brctl<br />
%kvm ALL=NOPASSWD: QEMU<br />
</nowiki>}}<br />
<br />
* Launch QEMU using the following {{ic|run-qemu}} script:<br />
{{hc|run-qemu|<nowiki><br />
#!/bin/bash<br />
USERID=$(whoami)<br />
<br />
# Get name of newly created TAP device; see https://bbs.archlinux.org/viewtopic.php?pid=1285079#p1285079<br />
precreation=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)<br />
sudo /usr/bin/ip tuntap add user $USERID mode tap<br />
postcreation=$(/usr/bin/ip tuntap list | /usr/bin/cut -d: -f1 | /usr/bin/sort)<br />
IFACE=$(comm -13 <(echo "$precreation") <(echo "$postcreation"))<br />
<br />
# This line creates a random MAC address. The downside is the DHCP server will assign a different IP address each time<br />
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))<br />
# Instead, uncomment and edit this line to set a static MAC address. The benefit is that the DHCP server will assign the same IP address.<br />
# macaddr='52:54:be:36:42:a9'<br />
<br />
qemu-system-x86_64 -net nic,macaddr=$macaddr -net tap,ifname="$IFACE" $*<br />
<br />
sudo ip link set dev $IFACE down &> /dev/null<br />
sudo ip tuntap del $IFACE mode tap &> /dev/null<br />
</nowiki>}}<br />
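The interface-name detection in the script above works by diffing the sorted {{ic|ip tuntap list}} output from before and after device creation; {{ic|comm -13}} prints only the lines unique to the second list. A self-contained sketch of the same technique, using made-up interface names instead of real devices:<br />

```shell
#!/bin/bash
# Diff two sorted interface lists; comm -13 keeps the lines unique to the
# second list, i.e. the device that appeared between the two snapshots.
before=$(printf 'tap0\nvnet1\n' | sort)
after=$(printf 'tap0\ntap1\nvnet1\n' | sort)   # tap1 was just created
new_iface=$(comm -13 <(echo "$before") <(echo "$after"))
echo "$new_iface"   # → tap1
```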
<br />
Then to launch a VM, do something like this:<br />
$ run-qemu -hda ''myvm.img'' -m 512<br />
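The MAC-generation line in the script can be checked in isolation; {{ic|52:54}} is the address prefix conventionally used for QEMU/KVM guests, and the low four bytes are randomized:<br />

```shell
#!/bin/bash
# Build a MAC with the 52:54 prefix used for KVM guests; the remaining
# four bytes are random, so DHCP may hand out a different IP each run.
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" \
    $(( RANDOM & 0xff )) $(( RANDOM & 0xff )) $(( RANDOM & 0xff )) $(( RANDOM & 0xff ))
echo "$macaddr"
```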
<br />
* It is recommended for performance and security reasons to disable the [http://ebtables.netfilter.org/documentation/bridge-nf.html firewall on the bridge]:<br />
{{hc|/etc/sysctl.d/10-disable-firewall-on-bridge.conf|<nowiki><br />
net.bridge.bridge-nf-call-ip6tables = 0<br />
net.bridge.bridge-nf-call-iptables = 0<br />
net.bridge.bridge-nf-call-arptables = 0<br />
</nowiki>}}<br />
Run {{ic|sysctl -p /etc/sysctl.d/10-disable-firewall-on-bridge.conf}} to apply the changes immediately.<br />
<br />
See the [http://wiki.libvirt.org/page/Networking#Creating_network_initscripts libvirt wiki] and [https://bugzilla.redhat.com/show_bug.cgi?id=512206 Fedora bug 512206]. If sysctl reports errors about non-existent files during boot, make the {{ic|bridge}} module load at boot. See [[Kernel modules#Automatic module loading with systemd]].<br />
<br />
Alternatively, you can configure [[iptables]] to allow all traffic to be forwarded across the bridge by adding a rule like this:<br />
-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT<br />
<br />
==== Network sharing between physical device and a Tap device through iptables ====<br />
<br />
{{Merge|Internet_sharing|Duplication, not specific to QEMU.}}<br />
<br />
Bridged networking works fine between a guest and a wired interface (e.g. {{ic|eth0}}), and it is easy to set up. However, if the host is connected to the network through a wireless device, bridging is not possible.<br />
<br />
See [[Network bridge#Wireless interface on a bridge]] as a reference.<br />
<br />
One way to overcome this is to set up a tap device with a static IP, letting Linux handle the routing for it automatically, and then forward traffic between the tap interface and the device connected to the network through iptables rules.<br />
<br />
See [[Internet sharing]] as a reference.<br />
<br />
There you can find what is needed to share the network between devices, including tap and tun ones. The following only hints at some of the additional host configuration required. As indicated in the reference above, the client needs to be configured with a static IP, using the IP assigned to the tap interface as the gateway. The caveat is that the DNS servers on the client might need to be edited manually if they change when moving from one host device connected to the network to another.<br />
<br />
To enable IP forwarding on every boot, add the following lines to a sysctl configuration file inside {{ic|/etc/sysctl.d}}:<br />
<br />
net.ipv4.ip_forward = 1<br />
net.ipv6.conf.default.forwarding = 1<br />
net.ipv6.conf.all.forwarding = 1<br />
<br />
The iptables rules can look like:<br />
<br />
# Forwarding from/to outside<br />
iptables -A FORWARD -i ${INT} -o ${EXT_0} -j ACCEPT<br />
iptables -A FORWARD -i ${INT} -o ${EXT_1} -j ACCEPT<br />
iptables -A FORWARD -i ${INT} -o ${EXT_2} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_0} -o ${INT} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_1} -o ${INT} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_2} -o ${INT} -j ACCEPT<br />
# NAT/Masquerade (network address translation)<br />
iptables -t nat -A POSTROUTING -o ${EXT_0} -j MASQUERADE<br />
iptables -t nat -A POSTROUTING -o ${EXT_1} -j MASQUERADE<br />
iptables -t nat -A POSTROUTING -o ${EXT_2} -j MASQUERADE<br />
<br />
The above assumes there are 3 devices connected to the network sharing traffic with one internal device, where for example:<br />
<br />
INT=tap0<br />
EXT_0=eth0<br />
EXT_1=wlan0<br />
EXT_2=tun0<br />
<br />
This forwarding setup allows sharing both wired and wireless connections with the tap device.<br />
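Since the rules repeat the same three lines for each external interface, they can equally be generated with a loop. A sketch (the rules are only printed here; they would be run as root to actually apply them):<br />

```shell
#!/bin/bash
# Generate the per-interface forwarding and masquerade rules with a loop.
# The commands are only printed; pipe them to "sh" as root to apply them.
INT=tap0
rules=$(for EXT in eth0 wlan0 tun0; do
    echo "iptables -A FORWARD -i $INT -o $EXT -j ACCEPT"
    echo "iptables -A FORWARD -i $EXT -o $INT -j ACCEPT"
    echo "iptables -t nat -A POSTROUTING -o $EXT -j MASQUERADE"
done)
echo "$rules"
```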
<br />
The forwarding rules shown are stateless and do pure forwarding. One could restrict specific traffic, putting a firewall in place to protect the guest and others, but that would decrease networking performance, whereas a simple bridge includes none of that.<br />
<br />
Bonus: whether the connection is wired or wireless, if the host connects through VPN to a remote site with a tun device (say {{ic|tun0}}, as above), and the prior iptables rules are applied, then the remote connection is also shared with the guest. This avoids the need for the guest to open its own VPN connection. Again, as the guest networking is static, if connecting the host remotely this way, the DNS servers on the guest will most probably need to be edited.<br />
<br />
=== Networking with VDE2 ===<br />
<br />
{{Style|This section needs serious cleanup and may contain out-of-date information.}}<br />
<br />
==== What is VDE? ====<br />
<br />
VDE stands for Virtual Distributed Ethernet. It started as an enhancement of [[User-mode Linux|uml]]_switch. It is a toolbox to manage virtual networks.<br />
<br />
The idea is to create virtual switches, which are basically sockets, and to "plug" both physical and virtual machines into them. The configuration shown here is quite simple; however, VDE is much more powerful than this: it can plug virtual switches together, run them on different hosts and monitor the traffic in the switches. You are invited to read [http://wiki.virtualsquare.org/wiki/index.php/Main_Page the documentation of the project].<br />
<br />
The advantage of this method is that you do not have to grant sudo privileges to your users. Regular users should not be allowed to run {{ic|modprobe}}.<br />
<br />
==== Basics ====<br />
<br />
VDE support can be [[pacman|installed]] via the {{Pkg|vde2}} package.<br />
<br />
In this configuration, tun/tap is used to create a virtual interface on the host. Load the {{ic|tun}} module (see [[Kernel modules]] for details):<br />
<br />
# modprobe tun<br />
<br />
Now create the virtual switch:<br />
<br />
# vde_switch -tap tap0 -daemon -mod 660 -group users<br />
<br />
This line creates the switch, creates {{ic|tap0}}, "plugs" it, and allows the users of the group {{ic|users}} to use it.<br />
<br />
The interface is plugged in but not configured yet. To configure it, run this command:<br />
<br />
# ip addr add 192.168.100.254/24 dev tap0<br />
<br />
Now, you just have to run KVM with these {{ic|-net}} options as a normal user:<br />
<br />
$ qemu-system-x86_64 -net nic -net vde -hda ''[...]''<br />
<br />
Configure networking for your guest as you would do in a physical network.<br />
<br />
{{Tip|You might want to set up NAT on tap device to access the internet from the virtual machine. See [[Internet sharing#Enable NAT]] for more information.}}<br />
<br />
==== Startup scripts ====<br />
<br />
Example of main script starting VDE:<br />
<br />
{{hc|/etc/systemd/scripts/qemu-network-env|<nowiki><br />
#!/bin/sh<br />
# QEMU/VDE network environment preparation script<br />
<br />
# The IP configuration for the tap device that will be used for<br />
# the virtual machine network:<br />
<br />
TAP_DEV=tap0<br />
TAP_IP=192.168.100.254<br />
TAP_MASK=24<br />
TAP_NETWORK=192.168.100.0<br />
<br />
# Host interface<br />
NIC=eth0<br />
<br />
case "$1" in<br />
start)<br />
echo -n "Starting VDE network for QEMU: "<br />
<br />
# If you want tun kernel module to be loaded by script uncomment here<br />
#modprobe tun 2>/dev/null<br />
## Wait for the module to be loaded<br />
#while ! lsmod | grep -q "^tun"; do echo "Waiting for tun device"; sleep 1; done<br />
<br />
# Start tap switch<br />
vde_switch -tap "$TAP_DEV" -daemon -mod 660 -group users<br />
<br />
# Bring tap interface up<br />
ip address add "$TAP_IP"/"$TAP_MASK" dev "$TAP_DEV"<br />
ip link set "$TAP_DEV" up<br />
<br />
# Start IP Forwarding<br />
echo "1" > /proc/sys/net/ipv4/ip_forward<br />
iptables -t nat -A POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE<br />
;;<br />
stop)<br />
echo -n "Stopping VDE network for QEMU: "<br />
# Delete the NAT rules<br />
iptables -t nat -D POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE<br />
<br />
# Bring tap interface down<br />
ip link set "$TAP_DEV" down<br />
<br />
# Kill VDE switch<br />
pgrep vde_switch | xargs kill -TERM<br />
;;<br />
restart|reload)<br />
$0 stop<br />
sleep 1<br />
$0 start<br />
;;<br />
*)<br />
echo "Usage: $0 {start|stop|restart|reload}"<br />
exit 1<br />
esac<br />
exit 0<br />
</nowiki>}}<br />
<br />
Example of systemd service using the above script:<br />
<br />
{{hc|/etc/systemd/system/qemu-network-env.service|<nowiki><br />
[Unit]<br />
Description=Manage VDE Switch<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/etc/systemd/scripts/qemu-network-env start<br />
ExecStop=/etc/systemd/scripts/qemu-network-env stop<br />
RemainAfterExit=yes<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
</nowiki>}}<br />
<br />
Make {{ic|qemu-network-env}} executable:<br />
<br />
# chmod u+x /etc/systemd/scripts/qemu-network-env<br />
<br />
You can [[start]] {{ic|qemu-network-env.service}} as usual.<br />
<br />
====Alternative method====<br />
<br />
If the above method does not work or you do not want to mess with kernel configs, TUN, dnsmasq, and iptables, you can do the following for the same result.<br />
<br />
# vde_switch -daemon -mod 660 -group users<br />
# slirpvde --dhcp --daemon<br />
<br />
Then, to start the VM with a connection to the network of the host:<br />
<br />
$ qemu-system-x86_64 -net nic,macaddr=52:54:00:00:EE:03 -net vde ''disk_image''<br />
<br />
=== VDE2 Bridge ===<br />
<br />
Based on the graphic in [http://selamatpagicikgu.wordpress.com/2011/06/08/quickhowto-qemu-networking-using-vde-tuntap-and-bridge/ quickhowto: qemu networking using vde, tun/tap, and bridge]. Any virtual machine connected to VDE is externally exposed. For example, each virtual machine can receive its DHCP configuration directly from your ADSL router.<br />
<br />
==== Basics ====<br />
<br />
Remember that you need the {{ic|tun}} module and the {{Pkg|bridge-utils}} package.<br />
<br />
Create the vde2/tap device:<br />
<br />
# vde_switch -tap tap0 -daemon -mod 660 -group users<br />
# ip link set tap0 up<br />
<br />
Create the bridge:<br />
<br />
# brctl addbr br0<br />
<br />
Add devices:<br />
<br />
# brctl addif br0 eth0<br />
# brctl addif br0 tap0<br />
<br />
And configure the bridge interface:<br />
<br />
# dhcpcd br0<br />
<br />
==== Startup scripts ====<br />
<br />
All devices must be set up, and only the bridge needs an IP address. For physical devices on the bridge (e.g. {{ic|eth0}}), this can be done with [[netctl]] using a custom Ethernet profile with:<br />
<br />
{{hc|/etc/netctl/ethernet-noip|<nowiki><br />
Description='A more versatile static Ethernet connection'<br />
Interface=eth0<br />
Connection=ethernet<br />
IP=no<br />
</nowiki>}}<br />
<br />
The following custom systemd service can be used to create and activate a VDE2 tap interface for use in the {{ic|users}} user group.<br />
<br />
{{hc|/etc/systemd/system/vde2@.service|<nowiki><br />
[Unit]<br />
Description=Network Connectivity for %i<br />
Wants=network.target<br />
Before=network.target<br />
<br />
[Service]<br />
Type=oneshot<br />
RemainAfterExit=yes<br />
ExecStart=/usr/bin/vde_switch -tap %i -daemon -mod 660 -group users<br />
ExecStart=/usr/bin/ip link set dev %i up<br />
ExecStop=/usr/bin/ip addr flush dev %i<br />
ExecStop=/usr/bin/ip link set dev %i down<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
</nowiki>}}<br />
<br />
And finally, you can create the [[Bridge with netctl|bridge interface with netctl]].<br />
<br />
=== Shorthand configuration ===<br />
<br />
If you use QEMU with various networking options a lot, you have probably created many {{ic|-netdev}} and {{ic|-device}} argument pairs, which gets quite repetitive. You can instead use the {{ic|-nic}} argument to combine {{ic|-netdev}} and {{ic|-device}} together, so that, for example, these arguments:<br />
<br />
-netdev tap,id=network0,ifname=tap0,script=no,downscript=no,vhost=on -device virtio-net,netdev=network0<br />
<br />
...become:<br />
<br />
-nic tap,ifname=tap0,script=no,downscript=no,vhost=on,model=virtio-net<br />
<br />
Notice the lack of network IDs, and that the device was created with {{ic|<nowiki>model=...</nowiki>}}. The first half of the {{ic|-nic}} parameters are {{ic|-netdev}} parameters, whereas the second half (after {{ic|<nowiki>model=...</nowiki>}}) are related to the device. The same parameters (for example, {{ic|<nowiki>smb=...</nowiki>}}) are used. There is also a special parameter for {{ic|-nic}} which completely disables the default (user-mode) networking:<br />
<br />
-nic none<br />
<br />
See [https://qemu.weilnetz.de/doc/qemu-doc.html#Network-options QEMU networking documentation] for more information on parameters you can use.<br />
<br />
== Graphics ==<br />
<br />
QEMU can use the following different graphic outputs: {{ic|std}}, {{ic|qxl}}, {{ic|vmware}}, {{ic|virtio}}, {{ic|cirrus}} and {{ic|none}}.<br />
<br />
=== std ===<br />
<br />
With {{ic|-vga std}} you can get a resolution of up to 2560 x 1600 pixels without requiring guest drivers. This is the default since QEMU 2.2.<br />
<br />
=== qxl ===<br />
<br />
QXL is a paravirtual graphics driver with 2D support. To use it, pass the {{ic|-vga qxl}} option and install drivers in the guest. You may want to use SPICE for improved graphical performance when using QXL.<br />
<br />
On Linux guests, the {{ic|qxl}} and {{ic|bochs_drm}} kernel modules must be loaded in order to gain decent performance.<br />
<br />
==== SPICE ====<br />
The [http://spice-space.org/ SPICE project] aims to provide a complete open source solution for remote access to virtual machines in a seamless way. SPICE can only be used when using QXL as the graphical output.<br />
<br />
The following is an example of booting with SPICE as the remote desktop protocol, including support for copy and paste with the host:<br />
<br />
$ qemu-system-x86_64 -vga qxl -spice port=5930,disable-ticketing -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent<br />
<br />
The parameters have the following meaning:<br />
# {{ic|1=-device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0}} opens a port for spice vdagent in that device,<br />
# {{ic|1=-chardev spicevmc,id=spicechannel0,name=vdagent}} adds a spicevmc chardev for that port. It is important that the {{ic|1=chardev=}} option of the {{ic|virtserialport}} device matches the {{ic|1=id=}} option given to the {{ic|chardev}} option ({{ic|spicechannel0}} in this example). It is also important that the port name is {{ic|com.redhat.spice.0}}, because that is the namespace where vdagent looks in the guest. And finally, specify {{ic|1=name=vdagent}} so that spice knows what this channel is for.<br />
<br />
{{Tip|Since QEMU in SPICE mode acts similarly to a remote desktop server, it may be more convenient to run QEMU in daemon mode with the {{ic|-daemonize}} parameter.}}<br />
<br />
One of the three methods below can be used to connect to the guest using a SPICE client:<br />
# {{pkg|virt-viewer}} is the recommended SPICE client by the protocol developers: {{bc|$ remote-viewer spice://127.0.0.1:5930}}<br />
# {{Pkg|spice-gtk}} can also be used: {{bc|$ spicy -h 127.0.0.1 -p 5930}}<br />
# Other [http://www.spice-space.org/download.html clients], including for other platforms, are also available.<br />
<br />
Using [[wikipedia:Unix_socket|Unix sockets]] instead of TCP ports does not involve using the network stack on the host system, so it is [https://unix.stackexchange.com/questions/91774/performance-of-unix-sockets-vs-tcp-ports reportedly] better for performance. Example:<br />
$ qemu-system-x86_64 -vga qxl -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent -spice unix,addr=/tmp/vm_spice.socket,disable-ticketing<br />
<br />
Then connect via:<br />
<br />
$ remote-viewer spice+unix:///tmp/vm_spice.socket<br />
<br />
or via:<br />
<br />
$ spicy --uri="spice+unix:///tmp/vm_spice.socket"<br />
<br />
For improved support for multiple monitors, clipboard sharing, etc. the following packages should be installed on the guest:<br />
* {{Pkg|spice-vdagent}}: Spice agent xorg client that enables copy and paste between client and X-session and more. [[Enable]] {{ic|spice-vdagentd.service}} after installation.<br />
* {{Pkg|xf86-video-qxl}}: Xorg X11 qxl video driver<br />
* For other operating systems, see the Guest section on [http://www.spice-space.org/download.html SPICE-Space download] page.<br />
<br />
===== Password authentication with SPICE =====<br />
If you want to enable password authentication with SPICE you need to remove {{ic|disable-ticketing}} from the {{ic|-spice}} argument and instead add {{ic|1=password=''yourpassword''}}. For example:<br />
<br />
$ qemu-system-x86_64 -vga qxl -spice port=5900,password=''yourpassword'' -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent<br />
<br />
Your SPICE client should now ask for the password to be able to connect to the SPICE server.<br />
<br />
===== TLS encryption =====<br />
<br />
You can also configure TLS encryption for communicating with the SPICE server. First, you need to have a directory which contains the following files (the names must be exactly as indicated):<br />
* {{ic|ca-cert.pem}}: the CA master certificate.<br />
* {{ic|server-cert.pem}}: the server certificate signed with {{ic|ca-cert.pem}}.<br />
* {{ic|server-key.pem}}: the server private key.<br />
<br />
An example of generation of self-signed certificates with your own generated CA for your server is shown in the [https://www.spice-space.org/spice-user-manual.html#_generating_self_signed_certificates Spice User Manual].<br />
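As a rough sketch of what that generation looks like (the subject fields are placeholder values; follow the manual linked above for the authoritative procedure), using the exact file names the SPICE server expects:<br />

```shell
#!/bin/bash
# Generate a self-signed CA, then a server key and certificate signed by
# it, with the file names SPICE expects. Subject fields are placeholders.
pki=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/C=XX/L=city/O=organization/CN=my-ca" \
    -keyout "$pki/ca-key.pem" -out "$pki/ca-cert.pem" 2>/dev/null
openssl req -newkey rsa:2048 -nodes \
    -subj "/C=XX/L=city/O=organization/CN=hostname" \
    -keyout "$pki/server-key.pem" -out "$pki/server-req.pem" 2>/dev/null
openssl x509 -req -in "$pki/server-req.pem" \
    -CA "$pki/ca-cert.pem" -CAkey "$pki/ca-key.pem" -CAcreateserial \
    -days 365 -out "$pki/server-cert.pem" 2>/dev/null
ls "$pki"
```

Pointing {{ic|1=x509-dir=}} at such a directory then matches the layout described above.<br />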
<br />
Afterwards, you can run QEMU with SPICE as explained above but using the following {{ic|-spice}} argument: {{ic|1=-spice tls-port=5901,password=''yourpassword'',x509-dir=''/path/to/pki_certs''}}, where {{ic|''/path/to/pki_certs''}} is the directory path that contains the three needed files shown earlier.<br />
<br />
It is now possible to connect to the server using {{pkg|virt-viewer}}:<br />
<br />
$ remote-viewer spice://''hostname''?tls-port=5901 --spice-ca-file=''/path/to/ca-cert.pem'' --spice-host-subject="C=''XX'',L=''city'',O=''organization'',CN=''hostname''" --spice-secure-channels=all<br />
<br />
Keep in mind that the {{ic|--spice-host-subject}} parameter needs to be set according to your {{ic|server-cert.pem}} subject. You also need to copy {{ic|ca-cert.pem}} to every client to verify the server certificate.<br />
<br />
{{Tip|You can get the subject line of the server certificate in the correct format for {{ic|--spice-host-subject}} (with entries separated by commas) using the following command: {{bc|<nowiki>$ openssl x509 -noout -subject -in server-cert.pem | cut -d' ' -f2- | sed 's/\///' | sed 's/\//,/g'</nowiki>}}<br />
}}<br />
<br />
The equivalent {{Pkg|spice-gtk}} command is:<br />
<br />
$ spicy -h ''hostname'' -s 5901 --spice-ca-file=ca-cert.pem --spice-host-subject="C=''XX'',L=''city'',O=''organization'',CN=''hostname''" --spice-secure-channels=all<br />
<br />
=== vmware ===<br />
<br />
Although it is a bit buggy, it performs better than std and cirrus. Install the VMware drivers {{Pkg|xf86-video-vmware}} and {{Pkg|xf86-input-vmmouse}} for Arch Linux guests.<br />
<br />
=== virtio ===<br />
<br />
{{ic|virtio-vga}} / {{ic|virtio-gpu}} is a paravirtual 3D graphics driver based on [https://virgil3d.github.io/ virgl]. Currently a work in progress, supporting only very recent (>= 4.4) Linux guests with {{Pkg|mesa}} (>=11.2) compiled with the option {{ic|1=gallium-drivers=virgl}}.<br />
<br />
To enable 3D acceleration on the guest system select this vga with {{ic|-vga virtio}} and enable the opengl context in the display device with {{ic|1=-display sdl,gl=on}} or {{ic|1=-display gtk,gl=on}} for the sdl and gtk display output respectively. Successful configuration can be confirmed by looking at the kernel log in the guest:<br />
<br />
{{hc|$ dmesg {{!}} grep drm |<br />
[drm] pci: virtio-vga detected<br />
[drm] virgl 3d acceleration enabled<br />
}}<br />
<br />
=== cirrus ===<br />
<br />
The cirrus graphical adapter was the default [http://wiki.qemu.org/ChangeLog/2.2#VGA before 2.2]. It [https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/ should not] be used on modern systems.<br />
<br />
=== none ===<br />
<br />
This is like a PC that has no VGA card at all. You would not even be able to access it with the {{ic|-vnc}} option. Also, this is different from the {{ic|-nographic}} option which lets QEMU emulate a VGA card, but disables the SDL display.<br />
<br />
=== vnc ===<br />
<br />
One can add the {{ic|-vnc :X}} option to have QEMU redirect the VGA display to the VNC session. Substitute {{ic|X}} for the number of the display (0 will then listen on 5900, 1 on 5901...).<br />
<br />
$ qemu-system-x86_64 -vnc :0<br />
<br />
An example is also provided in the [[#Starting QEMU virtual machines on boot]] section.<br />
{{Warning|The default VNC server setup does not use any form of authentication. Any user can connect from any host.}}<br />
<br />
==== Basic password authentication ====<br />
<br />
An access password can easily be set up using the {{ic|password}} option. The password must be set in the QEMU monitor, and connection is only possible once the password is provided.<br />
<br />
$ qemu-system-x86_64 -vnc :0,password -monitor stdio<br />
<br />
In the QEMU monitor, the password is set using the command {{ic|change vnc password}} and then indicating the password.<br />
<br />
Alternatively, one can create the following file:<br />
<br />
{{hc|vncpassword.txt|change vnc password<br />
''mykvmvncpassword''}}<br />
<br />
The following command line then runs VNC with the password set directly:<br />
<br />
$ cat vncpassword.txt | qemu-system-x86_64 -vnc :0,password -monitor stdio<br />
<br />
{{Note|The password is limited to 8 characters and can be guessed through a brute-force attack. More elaborate protection is strongly recommended for a public network.}}<br />
<br />
== Audio ==<br />
<br />
=== Host ===<br />
<br />
The audio driver used by QEMU is set with the {{ic|QEMU_AUDIO_DRV}} environment variable:<br />
<br />
$ export QEMU_AUDIO_DRV=pa<br />
<br />
Run the following command to get QEMU's configuration options related to PulseAudio:<br />
<br />
$ qemu-system-x86_64 -audio-help | awk '/Name: pa/' RS=<br />
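The trailing {{ic|1=RS=}} puts awk into paragraph mode (records separated by blank lines), so the pattern selects the whole blank-line-delimited block describing the {{ic|pa}} driver. A sketch of the same trick on sample text standing in for the {{ic|-audio-help}} output:<br />

```shell
#!/bin/bash
# awk with an empty record separator (RS=) treats blank-line-separated
# paragraphs as records, so /Name: pa/ prints the entire pa block.
sample='Name: alsa
ALSA options here

Name: pa
PulseAudio options here'
pa_block=$(echo "$sample" | awk '/Name: pa/' RS=)
echo "$pa_block"
```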
<br />
The listed options can be exported as environment variables, for example:<br />
<br />
{{bc|1=<br />
$ export QEMU_PA_SINK=alsa_output.pci-0000_04_01.0.analog-stereo.monitor<br />
$ export QEMU_PA_SOURCE=input<br />
}}<br />
<br />
=== Guest ===<br />
To get a list of the supported audio emulation drivers:<br />
$ qemu-system-x86_64 -soundhw help<br />
<br />
To use e.g. the {{ic|hda}} driver for the guest, use the {{ic|-soundhw hda}} option with QEMU.<br />
<br />
{{Note|The emulated video card drivers for the guest machine may also cause problems with sound quality. Test them one by one to make it work. You can list possible options with {{ic|<nowiki>qemu-system-x86_64 -h | grep vga</nowiki>}}.}}<br />
<br />
== Installing virtio drivers ==<br />
<br />
QEMU offers guests the ability to use paravirtualized block and network devices using the [http://wiki.libvirt.org/page/Virtio virtio] drivers, which provide better performance and lower overhead.<br />
<br />
* A virtio block device requires the option {{Ic|-drive}} for passing a disk image, with parameter {{Ic|1=if=virtio}}:<br />
$ qemu-system-x86_64 -boot order=c -drive file=''disk_image'',if=virtio<br />
<br />
* Almost the same goes for the network:<br />
$ qemu-system-x86_64 -net nic,model=virtio<br />
<br />
{{Note|This will only work if the guest machine has drivers for virtio devices. Linux does, and the required drivers are included in Arch Linux, but there is no guarantee that virtio devices will work with other operating systems.}}<br />
<br />
=== Preparing an (Arch) Linux guest ===<br />
<br />
To use virtio devices after an Arch Linux guest has been installed, the following modules must be loaded in the guest: {{Ic|virtio}}, {{Ic|virtio_pci}}, {{Ic|virtio_blk}}, {{Ic|virtio_net}}, and {{Ic|virtio_ring}}. For 32-bit guests, the specific "virtio" module is not necessary.<br />
<br />
If you want to boot from a virtio disk, the initial ramdisk must contain the necessary modules. By default, this is handled by [[mkinitcpio]]'s {{ic|autodetect}} hook. Otherwise use the {{ic|MODULES}} array in {{ic|/etc/mkinitcpio.conf}} to include the necessary modules and rebuild the initial ramdisk.<br />
<br />
{{hc|/etc/mkinitcpio.conf|2=<br />
MODULES=(virtio virtio_blk virtio_pci virtio_net)}}<br />
<br />
Virtio disks are recognized with the prefix {{ic|'''v'''}} (e.g. {{ic|'''v'''da}}, {{ic|'''v'''db}}, etc.); therefore, changes must be made in at least {{ic|/etc/fstab}} and {{ic|/boot/grub/grub.cfg}} when booting from a virtio disk.<br />
<br />
{{Tip|When referencing disks by [[UUID]] in both {{ic|/etc/fstab}} and bootloader, nothing has to be done.}}<br />
<br />
Further information on paravirtualization with KVM can be found [http://www.linux-kvm.org/page/Boot_from_virtio_block_device here].<br />
<br />
You might also want to install {{Pkg|qemu-guest-agent}} to implement support for QMP commands that will enhance the hypervisor management capabilities. After installing the package you can enable and start the {{ic|qemu-ga.service}}.<br />
<br />
=== Preparing a Windows guest ===<br />
<br />
{{Note|1=The only (reliable) way to upgrade a Windows 8.1 guest to Windows 10 seems to be to temporarily choose cpu core2duo,nx for the install [http://ubuntuforums.org/showthread.php?t=2289210]. After the install, you may revert to other cpu settings (8/8/2015).}}<br />
<br />
==== Block device drivers ====<br />
<br />
===== New Install of Windows =====<br />
<br />
Windows does not come with the virtio drivers. Therefore, you will need to load them during installation. There are basically two ways to do this: via Floppy Disk or via ISO files. Both images can be downloaded from the [https://fedoraproject.org/wiki/Windows_Virtio_Drivers Fedora repository].<br />
<br />
The floppy disk option is difficult because you will need to press F6 (Shift-F6 on newer Windows) at the very beginning of power-on, which is hard since you need time to connect your VNC console window. You can attempt to add a delay to the boot sequence. See {{man|1|qemu}} for more details about applying a delay at boot.<br />
<br />
The ISO option to load drivers is the preferred way, but it is available only on Windows Vista and Windows Server 2008 and later. The procedure is to load the image with virtio drivers in an additional cdrom device along with the primary disk device and Windows installer:<br />
<br />
$ qemu-system-x86_64 ... \<br />
-drive file=''/path/to/primary/disk.img'',index=0,media=disk,if=virtio \<br />
-drive file=''/path/to/installer.iso'',index=2,media=cdrom \<br />
-drive file=''/path/to/virtio.iso'',index=3,media=cdrom \<br />
...<br />
<br />
During the installation, the Windows installer will ask you for your Product key and perform some additional checks. When it gets to the "Where do you want to install Windows?" screen, it will give a warning that no disks are found. Follow the example instructions below (based on Windows Server 2012 R2 with Update).<br />
<br />
* Select the option {{ic|Load Drivers}}.<br />
* Uncheck the box for "Hide drivers that aren't compatible with this computer's hardware".<br />
* Click the Browse button and open the CDROM for the virtio iso, usually named "virtio-win-XX".<br />
* Now browse to {{ic|E:\viostor\[your-os]\amd64}}, select it, and press OK.<br />
* Click Next.<br />
<br />
You should now see your virtio disk(s) listed here, ready to be selected, formatted and installed to.<br />
<br />
===== Change Existing Windows VM to use virtio =====<br />
Modifying an existing Windows guest for booting from virtio disk is a bit tricky.<br />
<br />
You can download the virtio disk driver from the [https://fedoraproject.org/wiki/Windows_Virtio_Drivers Fedora repository].<br />
<br />
Now you need to create a new disk image, which will force Windows to search for the driver. For example:<br />
<br />
$ qemu-img create -f qcow2 ''fake.qcow2'' 1G<br />
<br />
Run the original Windows guest (with the boot disk still in IDE mode) with the fake disk (in virtio mode) and a CD-ROM with the driver.<br />
<br />
$ qemu-system-x86_64 -m 512 -drive file=''windows_disk_image'',if=ide -drive file=''fake.qcow2'',if=virtio -cdrom virtio-win-0.1-81.iso<br />
<br />
Windows will detect the fake disk and try to find a driver for it. If it fails, go to the ''Device Manager'', locate the SCSI drive with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not forget to select the checkbox which says to search for directories recursively.<br />
<br />
When the installation is successful, you can turn off the virtual machine and launch it again, now with the boot disk attached in virtio mode:<br />
<br />
$ qemu-system-x86_64 -m 512 -drive file=''windows_disk_image'',if=virtio<br />
<br />
{{Note|If you encounter the Blue Screen of Death, make sure you did not forget the {{ic|-m}} parameter, and that you do not boot with virtio instead of ide for the system drive before drivers are installed.}}<br />
<br />
==== Network drivers ====<br />
<br />
Installing virtio network drivers is a bit easier, simply add the {{ic|-net}} argument as explained above.<br />
<br />
$ qemu-system-x86_64 -m 512 -drive file=''windows_disk_image'',if=virtio -net nic,model=virtio -cdrom virtio-win-0.1-74.iso<br />
<br />
Windows will detect the network adapter and try to find a driver for it. If it fails, go to the ''Device Manager'', locate the network adapter with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not forget to select the checkbox which says to search for directories recursively.<br />
<br />
==== Balloon driver ====<br />
<br />
If you want to track your guest's memory state (for example via the {{ic|virsh}} command {{ic|dommemstat}}) or change the guest's memory size at runtime (you will not be able to change the memory size itself, but you can limit memory usage by inflating the balloon driver), you will need to install the guest balloon driver.<br />
<br />
For this you will need to go to ''Device Manager'', locate ''PCI standard RAM Controller'' in ''System devices'' (or an unrecognized PCI controller under ''Other devices'') and choose ''Update driver''. In the window that opens, choose ''Browse my computer...'' and select the CD-ROM (do not forget the ''Include subdirectories'' checkbox). Reboot after installation. This installs the driver and you will be able to inflate the balloon (for example via the HMP command {{ic|balloon ''memory_size''}}, which will cause the balloon to take as much memory as possible in order to shrink the guest's available memory size to ''memory_size''). However, you still will not be able to track the guest memory state. In order to do this you will need to install the ''Balloon'' service properly. Open a command line as administrator, navigate to the CD-ROM's ''Balloon'' directory and then deeper, depending on your system and architecture. Once you are in the ''amd64'' (''x86'') directory, run {{ic|blnsrv.exe -i}}, which will perform the installation. After that the {{ic|virsh}} command {{ic|dommemstat}} should output all supported values.<br />
<br />
=== Preparing a FreeBSD guest ===<br />
<br />
If you are using FreeBSD 8.3 or later up until 10.0-CURRENT, where the drivers are included in the kernel, install the {{ic|emulators/virtio-kmod}} port. After installation, add the following to your {{ic|/boot/loader.conf}} file:<br />
<br />
{{bc|<nowiki><br />
virtio_load="YES"<br />
virtio_pci_load="YES"<br />
virtio_blk_load="YES"<br />
if_vtnet_load="YES"<br />
virtio_balloon_load="YES"<br />
</nowiki>}}<br />
<br />
Then modify your {{ic|/etc/fstab}} by doing the following:<br />
<br />
{{bc|<nowiki><br />
sed -i .bak "s/ada/vtbd/g" /etc/fstab<br />
</nowiki>}}<br />
<br />
And verify that {{ic|/etc/fstab}} is consistent. If anything goes wrong, just boot into a rescue CD and copy {{ic|/etc/fstab.bak}} back to {{ic|/etc/fstab}}.<br />
<br />
== QEMU Monitor ==<br />
<br />
While QEMU is running, a monitor console is provided in order to provide several ways to interact with the virtual machine running. The QEMU Monitor offers interesting capabilities such as obtaining information about the current virtual machine, hotplugging devices, creating snapshots of the current state of the virtual machine, etc. To see the list of all commands, run {{ic|help}} or {{ic|?}} in the QEMU monitor console or review the relevant section of the [https://qemu.weilnetz.de/doc/qemu-doc.html#pcsys_005fmonitor official QEMU documentation].<br />
<br />
=== Accessing the monitor console ===<br />
<br />
When using the {{ic|std}} default graphics option, one can access the QEMU Monitor by pressing {{ic|Ctrl+Alt+2}} or by clicking ''View > compatmonitor0'' in the QEMU window. To return to the virtual machine graphical view either press {{ic|Ctrl+Alt+1}} or click ''View > VGA''.<br />
<br />
However, the standard method of accessing the monitor is not always convenient and does not work with all graphic outputs QEMU supports. Alternative ways of accessing the monitor are described below:<br />
<br />
* [[telnet]]: Run QEMU with the {{ic|-monitor telnet:127.0.0.1:''port'',server,nowait}} parameter. When the virtual machine is started you will be able to access the monitor via telnet:<br />
$ telnet 127.0.0.1 ''port''<br />
{{Note|If {{ic|127.0.0.1}} is specified as the IP to listen on, it will only be possible to connect to the monitor from the same host QEMU is running on. If connecting from remote hosts is desired, QEMU must be told to listen on {{ic|0.0.0.0}} as follows: {{ic|-monitor telnet:0.0.0.0:''port'',server,nowait}}. Keep in mind that it is recommended to have a [[firewall]] configured in this case, or to make sure your local network is completely trustworthy, since this connection is completely unauthenticated and unencrypted.}}<br />
<br />
* UNIX socket: Run QEMU with the {{ic|-monitor unix:''socketfile'',server,nowait}} parameter. Then you can connect with either {{pkg|socat}} or {{pkg|openbsd-netcat}}.<br />
<br />
For example, if QEMU is run via:<br />
<br />
$ qemu-system-x86_64 ''[...]'' -monitor unix:/tmp/monitor.sock,server,nowait ''[...]''<br />
<br />
It is possible to connect to the monitor with:<br />
<br />
$ socat - UNIX-CONNECT:/tmp/monitor.sock<br />
<br />
Or with:<br />
<br />
$ nc -U /tmp/monitor.sock<br />
<br />
* TCP: You can expose the monitor over TCP with the argument {{ic|-monitor tcp:127.0.0.1:''port'',server,nowait}}. Then connect with netcat, either {{pkg|openbsd-netcat}} or {{pkg|gnu-netcat}} by running:<br />
<br />
$ nc 127.0.0.1 ''port''<br />
<br />
{{Note|In order to be able to connect to the TCP socket from devices other than the host QEMU is running on, you need to listen on {{ic|0.0.0.0}} as explained in the telnet case. The same security warnings apply here as well.}}<br />
<br />
* Standard I/O: It is possible to access the monitor automatically from the same terminal QEMU is run in by launching it with the argument {{ic|-monitor stdio}}.<br />
<br />
=== Sending keyboard presses to the virtual machine using the monitor console ===<br />
<br />
Some combinations of keys may be difficult to perform on virtual machines due to the host intercepting them instead in some configurations (a notable example is the {{ic|Ctrl+Alt+F*}} key combinations, which change the active tty). To avoid this problem, the problematic combination of keys may be sent via the monitor console instead. Switch to the monitor and use the {{ic|sendkey}} command to forward the necessary keypresses to the virtual machine. For example:<br />
<br />
(qemu) sendkey ctrl-alt-f2<br />
<br />
=== Creating and managing snapshots via the monitor console ===<br />
<br />
{{Note|This feature will '''only''' work when the virtual machine disk image is in ''qcow2'' format. It will not work with ''raw'' images.}}<br />
<br />
It is sometimes desirable to save the current state of a virtual machine and to be able to revert the state of the virtual machine to that of a previously saved snapshot at any time. The QEMU monitor console provides the user with the necessary utilities to create snapshots, manage them, and revert the machine state to a saved snapshot.<br />
<br />
* Use {{ic|savevm ''name''}} in order to create a snapshot with the tag ''name''.<br />
* Use {{ic|loadvm ''name''}} to revert the virtual machine to the state of the snapshot ''name''.<br />
* Use {{ic|delvm ''name''}} to delete the snapshot tagged as ''name''.<br />
* Use {{ic|info snapshots}} to see a list of saved snapshots. Snapshots are identified by both an auto-incremented ID number and a text tag (set by the user on snapshot creation).<br />
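When the virtual machine is powered off, the same internal qcow2 snapshots can also be managed with ''qemu-img'' (a sketch; {{ic|''disk_image''}} stands for your qcow2 file and {{ic|''name''}} for a snapshot tag):<br />

```shell
# Manage internal snapshots of a powered-off qcow2 image.
qemu-img snapshot -l disk_image        # list snapshots
qemu-img snapshot -c name disk_image   # create a snapshot tagged 'name'
qemu-img snapshot -a name disk_image   # apply (revert to) snapshot 'name'
qemu-img snapshot -d name disk_image   # delete snapshot 'name'
```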
<br />
=== Running the virtual machine in immutable mode ===<br />
<br />
It is possible to run a virtual machine in a frozen state so that all changes will be discarded when the virtual machine is powered off, simply by running QEMU with the {{ic|-snapshot}} parameter. When the disk image is written to by the guest, changes will be saved in a temporary file in {{ic|/tmp}} and will be discarded when QEMU halts.<br />
<br />
However, if a machine is running in frozen mode it is still possible to save the changes to the disk image if it is afterwards desired by using the monitor console and running the following command:<br />
<br />
(qemu) commit<br />
<br />
If snapshots are created when running in frozen mode, they too will be discarded as soon as QEMU exits unless the changes are explicitly committed to disk.<br />
<br />
=== Pause and power options via the monitor console ===<br />
<br />
Some operations of a physical machine can be emulated by QEMU using some monitor commands:<br />
<br />
* {{ic|system_powerdown}} will send an ACPI shutdown request to the virtual machine. This effect is similar to the power button in a physical machine.<br />
* {{ic|system_reset}} will reset the virtual machine similarly to a reset button in a physical machine. This operation can cause data loss and file system corruption since the virtual machine is not cleanly restarted.<br />
* {{ic|stop}} will pause the virtual machine.<br />
* {{ic|cont}} will resume a virtual machine previously paused.<br />
<br />
=== Taking screenshots of the virtual machine ===<br />
<br />
Screenshots of the virtual machine graphic display can be obtained in the PPM format by running the following command in the monitor console:<br />
<br />
(qemu) screendump ''file.ppm''<br />
<br />
== Tips and tricks ==<br />
=== Improve virtual machine performance ===<br />
<br />
There are a number of techniques that you can use to improve the performance of the virtual machine. For example:<br />
<br />
* Use the {{ic|-cpu host}} option to make QEMU emulate the host's exact CPU. If you do not do this, it will emulate a more generic CPU.<br />
* Especially for Windows guests, enable [http://blog.wikichoon.com/2014/07/enabling-hyper-v-enlightenments-with-kvm.html Hyper-V enlightenments]: {{ic|1=-cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time}}.<br />
* If the host machine has multiple CPUs, assign the guest more CPUs using the {{ic|-smp}} option.<br />
* Make sure you have assigned the virtual machine enough memory. By default, QEMU only assigns 128 MiB of memory to each virtual machine. Use the {{ic|-m}} option to assign more memory. For example, {{ic|-m 1024}} runs a virtual machine with 1024 MiB of memory.<br />
* Use KVM if possible: add {{ic|1=-machine type=pc,accel=kvm}} to the QEMU start command you use.<br />
* If supported by drivers in the guest operating system, use [http://wiki.libvirt.org/page/Virtio virtio] for network and/or block devices. For example:<br />
$ qemu-system-x86_64 -net nic,model=virtio -net tap,if=tap0,script=no -drive file=''disk_image'',media=disk,if=virtio<br />
* Use TAP devices instead of user-mode networking. See [[#Tap networking with QEMU]].<br />
* If the guest OS is doing heavy writing to its disk, you may benefit from certain mount options on the host's file system. For example, you can mount an [[Ext4|ext4 file system]] with the option {{ic|1=barrier=0}}. You should read the documentation for any options that you change because sometimes performance-enhancing options for file systems come at the cost of data integrity.<br />
* If you have a raw disk image, you may want to disable the cache:<br />
$ qemu-system-x86_64 -drive file=''disk_image'',if=virtio,'''cache=none'''<br />
* Use the native Linux AIO:<br />
$ qemu-system-x86_64 -drive file=''disk_image'',if=virtio''',aio=native,cache.direct=on'''<br />
* If you use a qcow2 disk image, I/O performance can be improved considerably by ensuring that the L2 cache is of sufficient size. The [https://blogs.igalia.com/berto/2015/12/17/improving-disk-io-performance-in-qemu-2-5-with-the-qcow2-l2-cache/ formula] to calculate L2 cache is: l2_cache_size = disk_size * 8 / cluster_size. Assuming the qcow2 image was created with the default cluster size of 64K, this means that for every 8 GB in size of the qcow2 image, 1 MB of L2 cache is best for performance. Only 1 MB is used by QEMU by default; specifying a larger cache is done on the QEMU command line. For instance, to specify 4 MB of cache (suitable for a 32 GB disk with a cluster size of 64K):<br />
$ qemu-system-x86_64 -drive file=''disk_image'',format=qcow2,l2-cache-size=4M<br />
* If you are running multiple virtual machines concurrently that all have the same operating system installed, you can save memory by enabling [[wikipedia:Kernel_SamePage_Merging_(KSM)|kernel same-page merging]]. See [[#Enabling KSM]].<br />
* In some cases, memory can be reclaimed from running virtual machines by running a memory ballooning driver in the guest operating system and launching QEMU using {{ic|-device virtio-balloon}}.<br />
* It is possible to use an emulation layer for an ICH-9 AHCI controller (although it may be unstable). The AHCI emulation supports [[Wikipedia:Native_Command_Queuing|NCQ]], so multiple read or write requests can be outstanding at the same time:<br />
$ qemu-system-x86_64 -drive id=disk,file=''disk_image'',if=none -device ich9-ahci,id=ahci -device ide-drive,drive=disk,bus=ahci.0<br />
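The qcow2 L2 cache formula above can be verified with shell arithmetic; for the hypothetical 32 GB image with the default 64K cluster size used in the example:<br />

```shell
# l2_cache_size = disk_size * 8 / cluster_size (all values in bytes)
disk_size=$((32 * 1024 * 1024 * 1024))   # 32 GiB image
cluster_size=$((64 * 1024))              # default 64 KiB cluster
l2_cache_size=$((disk_size * 8 / cluster_size))
echo "$((l2_cache_size / 1024 / 1024)) MiB"   # prints "4 MiB"
```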
<br />
See http://www.linux-kvm.org/page/Tuning_KVM for more information.<br />
<br />
=== Starting QEMU virtual machines on boot ===<br />
<br />
==== With libvirt ====<br />
<br />
If a virtual machine is set up with [[libvirt]], it can be configured with {{ic|virsh autostart}} or through the ''virt-manager'' GUI to start at host boot by going to the Boot Options for the virtual machine and selecting "Start virtual machine on host boot up".<br />
<br />
==== Custom script ====<br />
<br />
To run QEMU VMs on boot, you can use the following systemd unit and config.<br />
<br />
{{hc|/etc/systemd/system/qemu@.service|<nowiki><br />
[Unit]<br />
Description=QEMU virtual machine<br />
<br />
[Service]<br />
Environment="type=system-x86_64" "haltcmd=kill -INT $MAINPID"<br />
EnvironmentFile=/etc/conf.d/qemu.d/%i<br />
PIDFile=/tmp/%i.pid<br />
ExecStart=/usr/bin/env qemu-${type} -name %i -nographic -pidfile /tmp/%i.pid $args<br />
ExecStop=/bin/sh -c ${haltcmd}<br />
TimeoutStopSec=30<br />
KillMode=none<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
</nowiki>}}<br />
<br />
{{Note|<br />
* According to the {{man|5|systemd.service}} and {{man|5|systemd.kill}} man pages it is necessary to use the {{ic|1=KillMode=none}} option. Otherwise the main qemu process will be killed immediately after the {{ic|ExecStop}} command quits (it simply echoes one string) and your guest system will not be able to shut down correctly.<br />
* It is necessary to use the {{ic|PIDFile}} option. Otherwise systemd cannot tell whether the main qemu process has terminated, and your guest system will not be able to shut down correctly: on host shutdown, systemd will proceed without waiting for the VM to shut down.<br />
}}<br />
<br />
Then create per-VM configuration files, named {{ic|/etc/conf.d/qemu.d/''vm_name''}}, with the following variables set:<br />
<br />
; type<br />
: QEMU binary to call. If specified, it will be prepended with {{ic|/usr/bin/qemu-}} and that binary will be used to start the VM. For example, you can boot {{ic|qemu-system-arm}} images with {{ic|1=type="system-arm"}}.<br />
; args<br />
: QEMU command line to start with. Will always be prepended with {{ic|-name ${vm} -nographic}}.<br />
; haltcmd<br />
: Command to shut down a VM safely. In this example, the QEMU monitor is exposed via telnet using {{ic|-monitor telnet:..}} and the VMs are powered off via ACPI by sending {{ic|system_powerdown}} to the monitor with the {{ic|nc}} command. You can use SSH or other methods as well.<br />
<br />
Example configs:<br />
<br />
{{hc|/etc/conf.d/qemu.d/one|<nowiki><br />
type="system-x86_64"<br />
<br />
args="-enable-kvm -m 512 -hda /dev/vg0/vm1 -net nic,macaddr=DE:AD:BE:EF:E0:00 \<br />
-net tap,ifname=tap0 -serial telnet:localhost:7000,server,nowait,nodelay \<br />
-monitor telnet:localhost:7100,server,nowait,nodelay -vnc :0"<br />
<br />
haltcmd="echo 'system_powerdown' | nc localhost 7100" # or netcat/ncat<br />
<br />
# You can use other ways to shut down your VM correctly<br />
#haltcmd="ssh powermanager@vm1 sudo poweroff"<br />
</nowiki>}}<br />
<br />
{{hc|/etc/conf.d/qemu.d/two|<nowiki><br />
args="-enable-kvm -m 512 -hda /srv/kvm/vm2.img -net nic,macaddr=DE:AD:BE:EF:E0:01 \<br />
-net tap,ifname=tap1 -serial telnet:localhost:7001,server,nowait,nodelay \<br />
-monitor telnet:localhost:7101,server,nowait,nodelay -vnc :1"<br />
<br />
haltcmd="echo 'system_powerdown' | nc localhost 7101"<br />
</nowiki>}}<br />
<br />
To set which virtual machines will start on boot-up, [[enable]] the {{ic|qemu@''vm_name''.service}} systemd unit.<br />
<br />
=== Mouse integration ===<br />
<br />
To prevent the mouse from being grabbed when clicking on the guest operating system's window, add the options {{ic|-usb -device usb-tablet}}. This means QEMU is able to report the mouse position without having to grab the mouse. This also overrides PS/2 mouse emulation when activated. For example:<br />
<br />
$ qemu-system-x86_64 -hda ''disk_image'' -m 512 -usb -device usb-tablet<br />
<br />
If that does not work, try the tip at [[#Mouse cursor is jittery or erratic]].<br />
<br />
=== Pass-through host USB device ===<br />
<br />
To access a physical USB device connected to the host from the VM, you can use the option {{ic|-usbdevice host:''vendor_id'':''product_id''}}.<br />
<br />
You can find the {{ic|vendor_id}} and {{ic|product_id}} of your device with the {{ic|lsusb}} command.<br />
<br />
Since the default I440FX chipset emulated by QEMU features a single UHCI controller (USB 1), the {{ic|-usbdevice}} option will try to attach your physical device to it. In some cases this may cause issues with newer devices. A possible solution is to emulate the [http://wiki.qemu.org/Features/Q35 ICH9] chipset, which offers an EHCI controller supporting up to 12 devices, using the option {{ic|1=-machine type=q35}}.<br />
<br />
A less invasive solution is to emulate an EHCI (USB 2) or XHCI (USB 3) controller with the option {{ic|1=-device usb-ehci,id=ehci}} or {{ic|1=-device nec-usb-xhci,id=xhci}} respectively and then attach your physical device to it with the option {{ic|1=-device usb-host,..}} as follows:<br />
<br />
-device usb-host,bus='''controller_id'''.0,vendorid=0x'''vendor_id''',productid=0x'''product_id'''<br />
<br />
You can also add the {{ic|1=...,port=''<n>''}} setting to the previous option to specify in which physical port of the virtual controller you want to attach your device, which is useful if you want to add multiple USB devices to the VM.<br />
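Putting the controller and attachment options together, a minimal sketch (the vendor and product IDs here are placeholders; substitute the values reported by {{ic|lsusb}}, and {{ic|...}} stands for the rest of your usual QEMU command line):<br />

```shell
# Emulate an EHCI (USB 2) controller and attach a physical device to its first port.
# 0x046d:0xc52b are hypothetical IDs -- replace them with your device's lsusb values.
qemu-system-x86_64 ... \
    -device usb-ehci,id=ehci \
    -device usb-host,bus=ehci.0,port=1,vendorid=0x046d,productid=0xc52b
```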
<br />
{{Note|If you encounter permission errors when running QEMU, see [[udev#About udev rules]] for information on how to set permissions of the device.}}<br />
<br />
=== USB redirection with SPICE ===<br />
<br />
When using [[#SPICE]] it is possible to redirect USB devices from the client to the virtual machine without needing to specify them in the QEMU command. It is possible to configure the number of USB slots available for redirected devices (the number of slots will determine the maximum number of devices which can be redirected simultaneously). The main advantages of using SPICE for redirection compared to the previously-mentioned {{ic|-usbdevice}} method is the possibility of hot-swapping USB devices after the virtual machine has started, without needing to halt it in order to remove USB devices from the redirection or adding new ones. This method of USB redirection also allows us to redirect USB devices over the network, from the client to the server. In summary, it is the most flexible method of using USB devices in a QEMU virtual machine.<br />
<br />
We need to add one EHCI/UHCI controller per available USB redirection slot desired as well as one SPICE redirection channel per slot. For example, adding the following arguments to the QEMU command you use for starting the virtual machine in SPICE mode will start the virtual machine with three available USB slots for redirection:<br />
<br />
{{bc|<nowiki>-device ich9-usb-ehci1,id=usb \<br />
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,multifunction=on \<br />
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2 \<br />
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev1 \<br />
-device usb-redir,chardev=usbredirchardev1,id=usbredirdev1 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev2 \<br />
-device usb-redir,chardev=usbredirchardev2,id=usbredirdev2 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev3 \<br />
-device usb-redir,chardev=usbredirchardev3,id=usbredirdev3</nowiki>}}<br />
<br />
Both {{ic|spicy}} from {{Pkg|spice-gtk}} (''Input > Select USB Devices for redirection'') and {{ic|remote-viewer}} from {{pkg|virt-viewer}} (''File > USB device selection'') support this feature. Please make sure that you have installed the necessary SPICE Guest Tools on the virtual machine for this functionality to work as expected (see the [[#SPICE]] section for more information).<br />
<br />
{{Warning|Keep in mind that when a USB device is redirected from the client, it will not be usable from the client operating system itself until the redirection is stopped. It is especially important to never redirect input devices (namely mouse and keyboard), since it will then be difficult to access the SPICE client menus to revert the situation, because the client will not respond to the input devices after they are redirected to the virtual machine.}}<br />
<br />
=== Enabling KSM ===<br />
<br />
Kernel Samepage Merging (KSM) is a feature of the Linux kernel that allows an application to register with the kernel to have its pages merged with those of other processes that have also registered to have their pages merged. The KSM mechanism allows guest virtual machines to share pages with each other. In an environment where many of the guest operating systems are similar, this can result in significant memory savings.<br />
<br />
{{Note|Although KSM may reduce memory usage, it may increase CPU usage. Also note some security issues may occur, see [[Wikipedia:Kernel same-page merging]].}}<br />
<br />
To enable KSM:<br />
<br />
# echo 1 > /sys/kernel/mm/ksm/run<br />
<br />
To make it permanent, use [[systemd#Temporary files|systemd's temporary files]]:<br />
<br />
{{hc|/etc/tmpfiles.d/ksm.conf|<br />
w /sys/kernel/mm/ksm/run - - - - 1<br />
}}<br />
<br />
If KSM is running, and there are pages to be merged (i.e. at least two similar VMs are running), then {{ic|/sys/kernel/mm/ksm/pages_shared}} should be non-zero. See https://www.kernel.org/doc/Documentation/vm/ksm.txt for more information.<br />
<br />
{{Tip|An easy way to see how well KSM is performing is to simply print the contents of all the files in that directory: {{bc|$ grep . /sys/kernel/mm/ksm/*}}}}<br />
<br />
=== Multi-monitor support ===<br />
The Linux QXL driver supports four heads (virtual screens) by default. This can be changed via the {{ic|1=qxl.heads=N}} kernel parameter.<br />
<br />
The default VGA memory size for QXL devices is 16M (VRAM size is 64M). This is not sufficient if you would like to enable two 1920x1200 monitors since that requires 2 × 1920 × 4 (color depth) × 1200 = 17.6 MiB VGA memory. This can be changed by replacing {{ic|-vga qxl}} by {{ic|<nowiki>-vga none -device qxl-vga,vgamem_mb=32</nowiki>}}. If you ever increase vgamem_mb beyond 64M, then you also have to increase the {{ic|vram_size_mb}} option.<br />
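The memory requirement above can be checked with shell arithmetic; two 1920x1200 heads at 32-bit color need more than the 16M default:<br />

```shell
# Bytes of VGA memory needed for two 1920x1200 heads at 4 bytes per pixel:
needed=$((2 * 1920 * 1200 * 4))
default=$((16 * 1024 * 1024))   # default vgamem of 16M
echo "$needed"                  # prints 18432000, which exceeds the 16777216-byte default
```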
<br />
=== Copy and paste ===<br />
<br />
To enable copy and paste between the host and the guest, you need to enable the spice agent communication channel. This requires adding a virtio-serial device to the guest and opening a port for the spice vdagent. You also need to install the spice vdagent in the guest ({{Pkg|spice-vdagent}} for Arch guests, [http://www.spice-space.org/download.html Windows guest tools] for Windows guests). Make sure the agent is running (and will be started automatically on future boots). See [[#SPICE]] for the necessary procedure to use QEMU with the SPICE protocol.<br />
<br />
=== Windows-specific notes ===<br />
<br />
QEMU can run any version of Windows from Windows 95 through Windows 10.<br />
<br />
It is possible to run [[Windows PE]] in QEMU.<br />
<br />
==== Fast startup ====<br />
{{Note|An administrator account is required to change power settings.}}<br />
For Windows 8 (or later) guests it is better to disable "Turn on fast startup (recommended)" from the Power Options of the Control Panel as explained in the following [https://www.tenforums.com/tutorials/4189-turn-off-fast-startup-windows-10-a.html forum page], as it causes the guest to hang during every other boot.<br />
<br />
Fast Startup may also need to be disabled for changes to the {{ic|-smp}} option to be properly applied.<br />
<br />
==== Remote Desktop Protocol ====<br />
<br />
If you use a MS Windows guest, you might want to use RDP to connect to your guest VM. If you are using a VLAN or are not in the same network as the guest, use:<br />
<br />
$ qemu-system-x86_64 -nographic -net user,hostfwd=tcp::5555-:3389<br />
<br />
Then connect with either {{Pkg|rdesktop}} or {{Pkg|freerdp}} to the guest. For example:<br />
<br />
$ xfreerdp -g 2048x1152 localhost:5555 -z -x lan<br />
<br />
=== Clone Linux system installed on physical equipment ===<br />
<br />
A Linux system installed on physical hardware can be cloned to run in a QEMU virtual machine. See [https://coffeebirthday.wordpress.com/2018/09/14/clone-linux-system-for-qemu-virtual-machine/ Clone Linux system from hardware for QEMU virtual machine].<br />
<br />
== Troubleshooting ==<br />
<br />
=== Mouse cursor is jittery or erratic ===<br />
<br />
If the cursor jumps around the screen uncontrollably, entering this on the terminal before starting QEMU might help:<br />
<br />
$ export SDL_VIDEO_X11_DGAMOUSE=0<br />
<br />
If this helps, you can add this to your {{ic|~/.bashrc}} file.<br />
<br />
=== No visible Cursor ===<br />
<br />
Add {{ic|-show-cursor}} to QEMU's options to see a mouse cursor.<br />
<br />
If that still does not work, make sure you have set your display device appropriately, for example: {{ic|-vga qxl}}.<br />
<br />
=== Two different mouse cursors are visible ===<br />
<br />
Apply the tip [[#Mouse integration]].<br />
<br />
=== Keyboard issues when using VNC ===<br />
When using VNC, you might experience keyboard problems described (in gory details) [https://www.berrange.com/posts/2010/07/04/more-than-you-or-i-ever-wanted-to-know-about-virtual-keyboard-handling/ here]. The solution is ''not'' to use the {{ic|-k}} option on QEMU, and to use {{ic|gvncviewer}} from {{Pkg|gtk-vnc}}. See also [http://www.mail-archive.com/libvir-list@redhat.com/msg13340.html this] message posted on libvirt's mailing list.<br />
<br />
=== Keyboard seems broken or the arrow keys do not work ===<br />
<br />
Should you find that some of your keys do not work or "press" the wrong key (in particular, the arrow keys), you likely need to specify your keyboard layout as an option. The keyboard layouts can be found in {{ic|/usr/share/qemu/keymaps}}.<br />
<br />
$ qemu-system-x86_64 -k ''keymap'' ''disk_image''<br />
<br />
=== Guest display stretches on window resize ===<br />
<br />
To restore default window size, press {{ic|Ctrl+Alt+u}}.<br />
<br />
=== ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy ===<br />
<br />
If an error message like this is printed when starting QEMU with {{ic|-enable-kvm}} option:<br />
<br />
ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy<br />
failed to initialize KVM: Device or resource busy<br />
<br />
that means another [[hypervisor]] is currently running. It is not recommended or possible to run several hypervisors in parallel.<br />
<br />
=== libgfapi error message ===<br />
<br />
The error message displayed at startup:<br />
<br />
Failed to open module: libgfapi.so.0: cannot open shared object file: No such file or directory<br />
<br />
[[Install]] {{pkg|glusterfs}} or ignore the error message, as GlusterFS is an optional dependency.<br />
<br />
=== Kernel panic on LIVE-environments===<br />
<br />
If you start a live environment (or, more generally, boot a system) you may encounter this:<br />
<br />
[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown block(0,0)<br />
<br />
or some other boot-hindering error (e.g. "cannot unpack initramfs", "cannot start service foo").<br />
Try starting the VM with the {{ic|-m ''VALUE''}} switch and an appropriate amount of RAM; if the RAM is too low, you will probably encounter issues similar to the above.<br />
<br />
=== Windows 7 guest suffers low-quality sound ===<br />
<br />
Using the {{ic|hda}} audio driver for Windows 7 guest may result in low-quality sound. Changing the audio driver to {{ic|ac97}} by passing the {{ic|-soundhw ac97}} arguments to QEMU and installing the AC97 driver from [https://www.realtek.com/en/component/zoo/category/pc-audio-codecs-ac-97-audio-codecs-software Realtek AC'97 Audio Codecs] in the guest may solve the problem. See [https://bugzilla.redhat.com/show_bug.cgi?id=1176761#c16 Red Hat Bugzilla – Bug 1176761] for more information.<br />
<br />
=== Could not access KVM kernel module: Permission denied ===<br />
<br />
If you encounter the following error:<br />
<br />
libvirtError: internal error: process exited while connecting to monitor: Could not access KVM kernel module: Permission denied failed to initialize KVM: Permission denied<br />
<br />
Systemd 234 assigns a dynamic ID to the {{ic|kvm}} group (see [https://bugs.archlinux.org/task/54943 bug]). To avoid this error, edit the file {{ic|/etc/libvirt/qemu.conf}} and change the line:<br />
<br />
group = "78"<br />
<br />
to<br />
<br />
group = "kvm"<br />
<br />
=== "System Thread Exception Not Handled" when booting a Windows VM ===<br />
<br />
Windows 8 or Windows 10 guests may raise a generic compatibility exception at boot, namely "System Thread Exception Not Handled", which tends to be caused by legacy drivers acting strangely on real machines. On KVM machines this issue can generally be solved by setting the CPU model to {{ic|core2duo}}.<br />
<br />
=== Certain Windows games/applications crashing/causing a bluescreen ===<br />
<br />
Occasionally, applications running in the VM may crash unexpectedly, whereas they'd run normally on a physical machine. If, while running {{ic|dmesg -wH}}, you encounter an error mentioning {{ic|MSR}}, the reason for those crashes is that KVM injects a [[wikipedia:General protection fault|General protection fault]] (GPF) when the guest tries to access unsupported [[wikipedia:Model-specific register|Model-specific registers]] (MSRs) - this often results in guest applications/OS crashing. A number of those issues can be solved by passing the {{ic|1=ignore_msrs=1}} option to the KVM module, which will ignore unimplemented MSRs.<br />
<br />
{{hc|/etc/modprobe.d/kvm.conf|2=<br />
...<br />
options kvm ignore_msrs=1<br />
...<br />
}}<br />
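The modprobe.d entry takes effect only after the {{ic|kvm}} module is reloaded or the host is rebooted. Assuming the module exposes the parameter in sysfs (it is declared writable on current kernels), the setting can also be toggled at runtime, affecting guests started afterwards:<br />

```shell
# Toggle ignore_msrs at runtime (run as root); new guests pick up the change.
echo 1 > /sys/module/kvm/parameters/ignore_msrs
```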
<br />
Cases where adding this option might help:<br />
<br />
* GeForce Experience complaining about an unsupported CPU being present.<br />
* StarCraft 2 and L.A. Noire reliably blue-screening Windows 10 with {{ic|KMODE_EXCEPTION_NOT_HANDLED}}. The blue screen information does not identify a driver file in these cases.<br />
<br />
{{Warning|While this is normally safe and some applications might not work without this, silently ignoring unknown MSR accesses could potentially break other software within the VM or other VMs.}}<br />
<br />
=== Applications in the VM experience long delays or take a long time to start ===<br />
<br />
This may be caused by insufficient available entropy in the VM. Consider allowing the guest to access the host's entropy pool by adding a [https://wiki.qemu.org/Features/VirtIORNG VirtIO RNG device] to the VM, or by installing an entropy-generating daemon such as [[Haveged]].<br />
<br />
Anecdotally, OpenSSH takes a while to start accepting connections under insufficient entropy, without the logs revealing why.<br />
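To judge whether entropy starvation is plausible, inspect the pool directly; a sketch (the "low hundreds" threshold is a rough rule of thumb, not a documented limit):<br />

```shell
# Available entropy in bits. Values that stay in the low hundreds while
# guest applications block are a hint that a VirtIO RNG device or an
# entropy daemon would help.
cat /proc/sys/kernel/random/entropy_avail
```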
<br />
== See also ==<br />
<br />
* [http://qemu.org Official QEMU website]<br />
* [http://www.linux-kvm.org Official KVM website]<br />
* [http://qemu.weilnetz.de/qemu-doc.html QEMU Emulator User Documentation]<br />
* [https://en.wikibooks.org/wiki/QEMU QEMU Wikibook]<br />
* [http://alien.slackbook.org/dokuwiki/doku.php?id=slackware:qemu Hardware virtualization with QEMU] by AlienBOB (last updated in 2008)<br />
* [http://blog.falconindy.com/articles/build-a-virtual-army.html Building a Virtual Army] by Falconindy<br />
* [http://git.qemu.org/?p=qemu.git;a=tree;f=docs Latest docs]<br />
* [http://qemu.weilnetz.de/ QEMU on Windows]<br />
* [[wikipedia:Qemu|Wikipedia]]<br />
* [[debian:QEMU|Debian Wiki - QEMU]]<br />
* [https://people.gnome.org/~markmc/qemu-networking.html QEMU Networking on gnome.org]<br />
* [http://bsdwiki.reedmedia.net/wiki/networking_qemu_virtual_bsd_systems.html Networking QEMU Virtual BSD Systems]<br />
* [https://www.gnu.org/software/hurd/hurd/running/qemu.html QEMU on gnu.org]<br />
* [https://wiki.freebsd.org/qemu QEMU on FreeBSD as host]<br />
* [https://wiki.mikejung.biz/KVM_/_Xen KVM/QEMU Virtio Tuning and SSD VM Optimization Guide]<br />
* [https://doc.opensuse.org/documentation/leap/virtualization/html/book.virt/part.virt.qemu.html Managing Virtual Machines with QEMU - OpenSUSE documentation]<br />
* [https://www.ibm.com/support/knowledgecenter/en/linuxonibm/liaat/liaatkvm.htm KVM on IBM Knowledge Center]</div>Gimahttps://wiki.archlinux.org/index.php?title=Thunderbird&diff=555500Thunderbird2018-11-17T11:58:47Z<p>Gima: /* Securing */ Clarify and add example for overriding SMTP HELO IP address</p>
<hr />
<div>[[Category:Email clients]]<br />
[[Category:Mozilla]]<br />
[[de:Thunderbird]]<br />
[[fr:Thunderbird]]<br />
[[it:Thunderbird]]<br />
[[ja:Thunderbird]]<br />
{{Related articles start}}<br />
{{Related|Thunderbird/Enigmail}}<br />
{{Related|Firefox}}<br />
{{Related articles end}}<br />
<br />
[https://www.thunderbird.net/en-US/ Thunderbird] is an open source email, news, and chat client previously developed by the Mozilla Foundation.<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|thunderbird}} package, with a [https://www.archlinux.org/packages/?q=thunderbird-i18n language pack] if required.<br />
<br />
Other versions include:<br />
<br />
* {{App | Thunderbird Beta | Cutting edge features with relatively-good stability. | https://www.thunderbird.net/channel/ | {{AUR|thunderbird-beta-bin}}}}<br />
* {{App | Thunderbird Earlybird | Experience the newest innovations as they're developed (equivalent to an alpha and Firefox Aurora releases). | https://www.thunderbird.net/channel/ | {{AUR|thunderbird-earlybird}}}}<br />
* {{App | Thunderbird Nightly | Experience the newest innovations with nightly releases (for those that want to work with breakages). | https://ftp.mozilla.org/pub/mozilla.org/thunderbird/nightly/latest-comm-central/ | {{AUR|thunderbird-nightly}}}}<br />
<br />
A version overview, both past and future, can be read on [[MozillaWiki:Releases]].<br />
<br />
== Securing ==<br />
<br />
* Thunderbird sends your system's internal IP address to the configured SMTP server as an argument to the HELO/EHLO SMTP command. This value can be overridden by setting {{ic|mail.smtpserver.default.hello_argument}} to, for example, {{ic|localhost}}. Setting this value may increase the spam score of messages you send. See [http://kb.mozillazine.org/Replace_IP_address_with_name_in_headers] and [http://kb.mozillazine.org/Mail_and_news_settings].<br />
<br />
* To hide Thunderbird's [https://developer.mozilla.org/en-US/docs/Web/HTTP/Gecko_user_agent_string_reference#Linux User Agent], create a new empty {{ic|general.useragent.override}} string entry in the [[#Config Editor]].<br />
<br />
* Thunderbird disables email images by default but enables HTML rendering, which may expose your IP address and location. To disable HTML rendering, click ''View > Message Body As > Plain Text''.<br />
<br />
* JavaScript is disabled for message content but not for RSS news feeds. To disable it, set {{ic|javascript.enabled}} to {{ic|false}} in the [[#Config Editor]].<br />
<br />
== Extensions ==<br />
<br />
* {{App|[[Thunderbird/Enigmail|Enigmail]]|Extension for writing and receiving email signed and/or encrypted with the OpenPGP standard.|https://www.enigmail.net|{{Pkg|thunderbird-extension-enigmail}}, {{AUR|thunderbird-enigmail-git}}}}<br />
* {{App|TorBirdy|Extension that configures Thunderbird to make connections over the [[Tor]] anonymity network|[https://addons.mozilla.org/thunderbird/addon/torbirdy/ TorBirdy AMO]|}}<br />
* {{App|Birdtray|Birdtray is a system tray new mail notification for Thunderbird 60+ which does not require extensions. Run Thunderbird with a system tray icon.|https://github.com/gyunaev/birdtray|{{AUR|birdtray}}}}<br />
* {{App|FireTray|Adds a customizable system tray icon for Thunderbird|[https://addons.thunderbird.net/de/thunderbird/addon/firetray/ FireTray AMO]|}}<br />
* {{App|[[Wikipedia:Lightning_(software)|Lightning]]|A calendar extension that brings [[Wikipedia:Mozilla Sunbird|Sunbird]]'s functionality to Thunderbird, including CalDAV support. Lightning now ships with Thunderbird, but due to differing release schedules it may have issues in Thunderbird testing releases. See [https://support.mozilla.org/en-US/questions/1211583 Mozilla support forum post]. Also see [https://developer.mozilla.org/en-US/docs/Mozilla/Calendar/Calendar_Versions Lightning Release Schedule].|https://www.thunderbird.net/en-US/calendar/|}}<br />
* {{App|SOGo Connector| Lets you sync address books via CardDAV|https://sogo.nu/download.html#/frontends|{{AUR|thunderbird-sogo-connector-bin}}}}<br />
* {{App|Cardbook|A new addressbook for Thunderbird based on the CARDDav and VCARD standards.|[https://addons.mozilla.org/thunderbird/addon/cardbook/ Cardbook AMO]|}}<br />
<br />
== Tips and tricks ==<br />
<br />
=== Config Editor ===<br />
<br />
Thunderbird can be extensively configured by clicking ''Edit > Preferences > Advanced > General > Config Editor''.<br />
<br />
=== Set the default browser ===<br />
<br />
{{Note|Since version 24 the {{ic|network.protocol-handler.app.*}} keys have no effect and will not be able to set the default browser.}}<br />
<br />
Thunderbird uses the default browser as defined by the [[XDG MIME Applications]]. This is commonly modified by [[desktop environment]]s (for example [[GNOME]]'s Control Center: ''Details > Default Applications > Web'').<br />
<br />
This can be overridden with the {{ic|network.protocol-handler.warn-external}} settings in the [[#Config Editor]].<br />
<br />
If the following are all set to {{ic|false}} (the default), set them to {{ic|true}} and Thunderbird will ask which application to use when you click on a link (remember to also check ''"Remember my choice for .. links"'').<br />
<br />
network.protocol-handler.warn-external.ftp<br />
network.protocol-handler.warn-external.http<br />
network.protocol-handler.warn-external.https<br />
<br />
=== Plain Text mode and font uniformity ===<br />
<br />
Plain Text mode lets you view all your emails without HTML rendering and is available in ''View > Message Body As''. This defaults to the [[Wikipedia:Monospace_(Unicode)|Monospace]] font but the size is still inherited from original system fontconfig settings. The following example will overwrite this with Ubuntu Mono of 10 pixels (available in: {{Pkg|ttf-ubuntu-font-family}}).<br />
<br />
Remember to run {{ic|fc-cache -fv}} to update system font cache. See [[Font configuration]] for more information.<br />
<br />
{{hc|~/.config/fontconfig/fonts.conf|<nowiki><br />
<?xml version="1.0"?><br />
<!DOCTYPE fontconfig SYSTEM "fonts.dtd"><br />
<fontconfig><br />
<match target="pattern"><br />
<test qual="any" name="family"><string>monospace</string></test><br />
<edit name="family" mode="assign" binding="same"><string>Ubuntu Mono</string></edit><br />
<!-- For Thunderbird, lowering default font size to 10 for uniformity --><br />
<edit name="pixelsize" mode="assign"><int>10</int></edit><br />
</match><br />
</fontconfig><br />
</nowiki>}}<br />
<br />
=== Webmail with Thunderbird ===<br />
<br />
:''See upstream Wiki: [http://kb.mozillazine.org/Using_webmail_with_your_email_client Using webmail with your email client].''<br />
<br />
=== Migrate profile to another system ===<br />
<br />
{{Tip|The [https://addons.mozilla.org/thunderbird/addon/importexporttools ImportExportTools] addon offers an option to export and import a profile folder.}}<br />
<br />
Before you start with Importing or Exporting tasks, backup your complete {{ic|~/.thunderbird}} profile:<br />
<br />
$ cp -R ~/.thunderbird /to/backup/folder/<br />
<br />
To migrate, you simply copy your current Thunderbird profile to another PC or a new Thunderbird installation:<br />
<br />
1. Install Thunderbird on the target PC<br />
<br />
2. Start Thunderbird without doing anything and quit it.<br />
<br />
3. Go to your Backup folder of your old Thunderbird installation<br />
<br />
4. Enter the backup profile folder:<br />
<br />
$ cd /to/backup/folder/.thunderbird/<oldrandomnumber>.default/<br />
<br />
5. Copy its content into the target profile folder {{ic|~/.thunderbird/<newrandomnumber>.default/}}<br />
<br />
$ cp -R /to/backup/folder/.thunderbird/<oldrandomnumber>.default/* ~/.thunderbird/<newrandomnumber>.default/<br />
<br />
=== Export and Import ===<br />
<br />
Before you start with Importing or Exporting tasks, backup your complete {{ic|~/.thunderbird}} profile:<br />
<br />
$ cp -R ~/.thunderbird /to/backup/folder/<br />
<br />
If your accounts are broken or you want to merge two different Thunderbird installations, install an import/export add-on (e.g. the [https://addons.mozilla.org/thunderbird/addon/importexporttools ImportExportTools] add-on) in both Thunderbird installations, then export and import all your data into the new installation.<br />
<br />
=== Change the default sorting order ===<br />
Thunderbird (up to at least 31.4.0-1) sorts mail by date with the oldest on top without any threading. While this can be changed per folder, it is easier to set a sane default instead as described [https://superuser.com/questions/13518/change-the-default-sorting-order-in-thunderbird here].<br />
<br />
Set these preferences in the [[#Config Editor]]:<br />
<br />
mailnews.default_sort_order = 2 (descending)<br />
mailnews.default_view_flags = 1 (Threaded view)<br />
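The same preferences can also be set from the shell by appending them to the profile's {{ic|user.js}}, which Thunderbird re-applies at every start; a sketch with a hypothetical profile directory name:<br />

```shell
# The profile directory name is hypothetical; substitute your own
# ~/.thunderbird/<random>.default (Thunderbird must not be running).
profile=~/.thunderbird/abcd1234.default
mkdir -p "$profile"
cat >> "$profile/user.js" <<'EOF'
user_pref("mailnews.default_sort_order", 2);
user_pref("mailnews.default_view_flags", 1);
EOF
```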
<br />
=== Maildir support ===<br />
The default message store format is mbox. To enable the use of Maildir, see [[MozillaWiki:Thunderbird/Maildir]]. You basically have to set the following preference in the [[#Config Editor]]:<br />
<br />
mail.serverDefaultStoreContractID = @mozilla.org/msgstore/maildirstore;1<br />
<br />
Some limitations up to at least 31.4.0-1: only the "tmp" and "cur" directories are supported. The "new" directory is completely ignored. The read state of mails is stored in a separate ".msf" file, so initially all local mail using Maildir will be marked as unread even when located in the "cur" directory.<br />
<br />
=== Spell checking ===<br />
<br />
Install {{Pkg|hunspell}} and a [https://www.archlinux.org/packages/?q=hunspell+dict hunspell language dictionary] and restart Thunderbird.<br />
<br />
See the Firefox article for [[Firefox#Firefox does not remember default spell check language|how to set the default spell checking language]].<br />
<br />
=== Native notifications ===<br />
<br />
Enable {{ic|mail.biff.use_system_alert}} in the [[#Config Editor]]. With this option set, extensions (such as Gnome Integration) are not needed for desktop notifications in newer versions of Thunderbird.<br />
<br />
=== Theming tweaks ===<br />
<br />
Thunderbird should conform to [[GTK#Themes]] as defined on your system. However, two tweaks are desirable for full consistency. These are most beneficial for dark themes.<br />
<br />
# To view the body of emails with colors following your theme:<br />
## Go to ''Preferences''<br />
## Select the ''Display'' tab<br />
## Click the ''Colors'' button<br />
## Check ''Use system colors''<br />
## Set the option for ''Override the colors specified by the content with my selection above'' to ''Always'' or ''Only with High Contrast themes''<br />
# To view Lightning calendar with colors following your theme:<br />
## Go to ''Preferences''<br />
## Select the ''Calendar'' tab<br />
## Check ''Optimize colors for accessibility''<br />
<br />
Further customization can be attained by creating and editing a {{ic|userChrome.css}}. See [[Firefox/Tweaks#General user interface CSS settings]] and [http://kb.mozillazine.org/UserChrome.css Mozillazine's userChrome page].<br />
<br />
== Troubleshooting ==<br />
<br />
=== LDAP Segfault ===<br />
<br />
An [https://bugzilla.mozilla.org/show_bug.cgi?id=292127 LDAP clash (Bugzilla#292127)] arises on systems configured to use it to fetch user information. A possible [https://bugzilla.mozilla.org/show_bug.cgi?id=292127#c7 workaround] consists of renaming the conflicting bundled LDAP library.<br />
<br />
=== Error: Incoming server already exists ===<br />
<br />
Thunderbird (at least up to v24) has a bug where the error "Incoming server already exists" pops up if you try to re-add a previously deleted account with the same account data. Unfortunately, if you get this error, the only known fix is a clean reinstall of Thunderbird: <br />
<br />
1. Make a backup of your current profile:<br />
<br />
$ cp -R ~/.thunderbird /to/backup/folder/<br />
<br />
2. Export all your accounts, calendars and feeds via an add-on, as described in [[#Export and Import]].<br />
<br />
3. Uninstall your current Thunderbird installation:<br />
<br />
 # pacman -R thunderbird<br />
<br />
4. Remove all your data by deleting your current Thunderbird folder {{ic|rm -R ~/.thunderbird/}}.<br />
<br />
5. Install Thunderbird again:<br />
<br />
 # pacman -S thunderbird<br />
<br />
6. Create your mail accounts, feeds and calendars (empty).<br />
<br />
7. Install the [https://addons.mozilla.org/thunderbird/addon/importexporttools/ ImportExportTools] AddOn<br />
<br />
8. Import all your data.<br />
<br />
=== Thunderbird UI freezes when receiving a new message ===<br />
<br />
If Thunderbird is configured to show an alert when a new message arrives, or at launch, the lack of a notification daemon may freeze the interface (white screen) for many seconds. You can solve this issue by disabling alerts or installing a [[Desktop_notifications#Notification_servers|notification server]].<br />
<br />
=== LC_TIME environment variable not respected ===<br />
<br />
Thunderbird should use the {{ic|LC_TIME}} environment variable for localization, but it might not do so in all contexts. Some problems can be mitigated by setting ''Edit'' > ''Preferences'' > ''Advanced'' > ''Date and Time Formatting'' to ''Regional settings locale'', a setting which was introduced in Thunderbird 56. However, there is a [https://bugzilla.mozilla.org/show_bug.cgi?id=1426907 bug report] for this issue.</div>Gimahttps://wiki.archlinux.org/index.php?title=Cinnamon&diff=480749Cinnamon2017-06-30T11:58:28Z<p>Gima: /* Tips and tricks */ Add "Prevent Cinnamon from overriding xrandr and xinput configuration set in .xinitrc"</p>
<hr />
<div>[[Category:Desktop environments]]<br />
[[ja:Cinnamon]]<br />
[[ru:Cinnamon]]<br />
[[tr:Cinnamon Masaüstü Ortamı]]<br />
[[zh-hans:Cinnamon]]<br />
{{Related articles start}}<br />
{{Related|Nemo}}<br />
{{Related|GNOME}}<br />
{{Related|MATE}}<br />
{{Related|Desktop environment}}<br />
{{Related|Display manager}}<br />
{{Related articles end}}<br />
<br />
[https://github.com/linuxmint/Cinnamon Cinnamon] is a [[desktop environment]] which combines a traditional desktop layout with modern graphical effects. The underlying technology was forked from the [[GNOME]] desktop. As of version 2.0, Cinnamon is a complete desktop environment and not merely a frontend for GNOME like GNOME Shell and Unity.<br />
<br />
== Installation ==<br />
<br />
Cinnamon can be [[installed]] with the package {{Pkg|cinnamon}}.<br />
<br />
== Starting Cinnamon ==<br />
<br />
=== Graphical log-in ===<br />
<br />
Choose ''Cinnamon'' or ''Cinnamon (Software Rendering)'' from the menu in a [[display manager]] of choice. Cinnamon is the 3D accelerated version, which should normally be used. If you experience problems with your video driver (e.g. artifacts or crashing), try the ''Cinnamon (Software Rendering)'' session, which disables 3D acceleration.<br />
<br />
=== Starting Cinnamon manually ===<br />
<br />
If you prefer to start Cinnamon manually from the console, add the following line to [[Xinitrc]]:<br />
<br />
{{hc|~/.xinitrc|<br />
exec cinnamon-session<br />
}}<br />
<br />
If the ''Cinnamon (Software Rendering)'' session is required, use {{ic|cinnamon-session-cinnamon2d}} instead of {{ic|cinnamon-session}}.<br />
<br />
=== Restarting Cinnamon ===<br />
From a command line, execute the following line:<br />
<br />
$ nohup cinnamon --replace > /dev/null 2>&1 &<br />
<br />
== Configuration ==<br />
<br />
Cinnamon is quite easy to configure &mdash; most common settings can be configured graphically. Its usability can be expanded with [http://cinnamon-spices.linuxmint.com/applets applets] and [http://cinnamon-spices.linuxmint.com/extensions extensions], and also it supports [http://cinnamon-spices.linuxmint.com/themes theming]. <br />
<br />
=== Cinnamon settings ===<br />
<br />
''cinnamon-settings'' launches a settings module specified on the command line. Without (correct) arguments, it launches ''System Settings''. For example, to start the panel settings:<br />
<br />
$ cinnamon-settings panel<br />
<br />
To list all available modules:<br />
<br />
$ pacman -Ql cinnamon | awk -F'[_.]' '/cs_.+\.py/ {print $2}'<br />
<br />
; Printers<br />
: To configure printers, install the {{Pkg|system-config-printer}} and {{Pkg|gtk3-print-backends}} packages.<br />
; Networking<br />
: To add support for the networking module, enable [[NetworkManager#Configuration|Network Manager]]. In order for NetworkManager to store Wi-Fi passwords, you will need to also install [[GNOME Keyring]].<br />
; Bluetooth<br />
: For Bluetooth device support, install the {{Pkg|blueberry}} package.<br />
<br />
=== Applets and extensions ===<br />
<br />
While an '''applet''' is an addition to the Cinnamon panel, an '''extension''' can fully change the Cinnamon experience. They can be installed from the [[AUR]] ([https://aur.archlinux.org/packages.php?O=0&K=cinnamon-&do_Search=Go package search]), or from inside Cinnamon (''Get more online''):<br />
<br />
$ cinnamon-settings applets<br />
$ cinnamon-settings extensions<br />
<br />
Alternatively, install manually from [http://cinnamon-spices.linuxmint.com/ Cinnamon spices].<br />
<br />
{{Note|If applets do not appear, restart Cinnamon with {{ic|r}} in the {{ic|Alt+F2}} dialog box.}}<br />
<br />
=== Pressing the power button suspends the system ===<br />
<br />
This is the default behaviour. To change the setting, open the {{ic|cinnamon-settings}} panel and click on the ''Power Management'' option. Change the ''When the power button is pressed'' option to your desired behaviour.<br />
<br />
=== Manage languages used in Cinnamon ===<br />
<br />
{{Note|The language module was removed from the Cinnamon Control Panel with the release of Cinnamon 2.2. [http://segfault.linuxmint.com/2014/04/cinnamon-2-2/]}}<br />
<br />
*To add/remove languages: see [[Locale]].<br />
*To change between enabled languages: install the {{AUR|mintlocale}} package.<br />
*To change the keyboard layout: navigate to '''System Settings > Hardware > Keyboard > Layouts'''.<br />
<br />
== Tips and tricks ==<br />
<br />
=== Creating custom applets ===<br />
<br />
The official tutorial on creating a Cinnamon ''applet'' can be found [http://developer.linuxmint.com/reference/2.6/cinnamon-tutorials/write-applet.html here].<br />
<br />
=== Default desktop background wallpaper path ===<br />
<br />
When you add a wallpaper from a custom path in Cinnamon Settings, Cinnamon copies it to {{ic|~/.cinnamon/backgrounds}}. Thus, with every change of your wallpaper you would have to add the updated wallpaper again from the settings menu, or copy/symlink it manually to {{ic|~/.cinnamon/backgrounds}}.<br />
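A symlink avoids re-adding the file after every change; a sketch, with a hypothetical wallpaper path:<br />

```shell
# The source path is hypothetical; Cinnamon then reads the current file
# through the link instead of a stale copy.
mkdir -p ~/.cinnamon/backgrounds
ln -sf ~/Pictures/wallpaper.jpg ~/.cinnamon/backgrounds/wallpaper.jpg
```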
<br />
=== Show home, filesystem desktop icons ===<br />
<br />
By default, Cinnamon starts with desktop icons enabled but with no desktop icons on screen. To show desktop icons for the home folder, the filesystem, the trash, mounted volumes and network servers, open Cinnamon settings and click on ''Desktop''. Enable the checkboxes of the icons you want to see on screen.<br />
<br />
=== Menu editor ===<br />
<br />
The Menu applet supports launching custom commands. Right click on the applet, click on ''Configure...'' and then ''Open the menu editor''. Select a sub-menu (or create a new one) and select ''New Item''. Set ''Name'', ''Command'' and ''Comment''. Check the launch in terminal checkbox if needed. Leave unchecked for graphical applications. Click ''OK'' and close the menu editor afterwards. The launcher is added to the menu.<br />
<br />
=== Workspaces ===<br />
<br />
A workspace pager can be added to the panel. Right click the panel and choose the option ''Add applets to the panel''. Add the ''Workspace switch'' applet to the panel. To change its position right click on the panel and change the ''Panel edit mode'' on/off switch to on. Click and drag the switcher to the desired position and turn the panel edit mode off when finished.<br />
<br />
By default there are 2 workspaces. To add more, hit {{ic|Control+Alt+Up}} to show all workspaces. Then click on the plus sign button on the right of the screen to add more workspaces.<br />
<br />
Alternatively, you can choose the number by command-line:<br />
<br />
$ gsettings set org.cinnamon.desktop.wm.preferences num-workspaces 4<br />
<br />
Replacing 4 with the number of workspaces you want.<br />
<br />
=== Hide desktop icons ===<br />
<br />
The desktop icons rendering feature is enabled in nemo by default. To disable this feature, change the setting with the following command: <br />
<br />
$ gsettings set org.nemo.desktop show-desktop-icons false<br />
<br />
=== Themes and icons ===<br />
<br />
Linux Mint styled themes and icons can be installed with the {{AUR|mint-x-theme}} and {{AUR|mint-x-icons}} packages. The themes can be edited in {{ic|Settings → Themes → Other settings}}.<br />
<br />
Official Linux Mint Cinnamon themes can be installed using the {{AUR|mint-cinnamon-themes}} package.<br />
<br />
=== Sound events ===<br />
<br />
Cinnamon does not come with sounds used for events like the startup of the desktop that are also used in Linux Mint by default. These sound effects can be installed with the {{AUR|cinnamon-sound-effects}} and {{AUR|mint-sounds}} packages. The sound events can be edited in {{ic|Settings → Sound → Sound Effects}}.<br />
<br />
=== Resize windows by mouse ===<br />
<br />
To resize windows with {{ic|Alt+Right click}}, use {{ic|gsettings}}:<br />
<br />
gsettings set org.cinnamon.desktop.wm.preferences resize-with-right-button true<br />
<br />
=== Portable keybindings ===<br />
<br />
To export your keyboard shortcut keys, you should do:<br />
<br />
dconf dump /org/cinnamon/desktop/keybindings/ >keybindings-backup.dconf<br />
<br />
To later import it (for example) on another computer, do:<br />
<br />
dconf load /org/cinnamon/desktop/keybindings/ <keybindings-backup.dconf<br />
<br />
=== Screenshot ===<br />
<br />
As explained in [[Taking_a_screenshot#Cinnamon|Taking a screenshot]], installing {{Pkg|gnome-screenshot}} will add this functionality. The default shortcut is the {{ic|Prt Sc}} key. This binding can be changed in the applet ''Menu > Preferences > Keyboard'' under ''Shortcuts > System > Screenshots and Recording''. The default save directory is {{ic|$HOME/Pictures}}, but it can be customized with e.g.<br />
<br />
$ gsettings set org.gnome.gnome-screenshot auto-save-directory file:///home/''USER''/''some_path''<br />
<br />
=== Prevent Cinnamon from overriding xrandr and xinput configuration set in .xinitrc ===<br />
<br />
Your {{ic|.xinitrc}} has {{ic|exec cinnamon-session}} at the bottom, which starts "plugins" provided by {{ic|cinnamon-settings-daemon}} ({{ic|.desktop}} files located in {{ic|/etc/xdg/autostart/}}). Some of these configure your display, keyboard and mouse, thereby overriding earlier commands in {{ic|.xinitrc}}.<br />
<br />
To disable some of these {{ic|.desktop}} files, override them by copying them to {{ic|$HOME/.config/autostart/}} and adding {{ic|1=Hidden=true}} to each. {{ic|cinnamon-session}} will then no longer start them. To see what is started, add the {{ic|--debug}} flag to {{ic|cinnamon-session}} in your {{ic|.xinitrc}}.<br />
<br />
For example, disabling the following preserves custom display, keyboard and mouse settings:<br />
<br />
cinnamon-settings-daemon-a11y-keyboard.desktop<br />
cinnamon-settings-daemon-a11y-settings.desktop<br />
cinnamon-settings-daemon-keyboard.desktop<br />
cinnamon-settings-daemon-mouse.desktop<br />
cinnamon-settings-daemon-xrandr.desktop<br />
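The overrides can be created in one go; a sketch (per the Desktop Entry specification, a same-named user file containing {{ic|1=Hidden=true}} is treated as if the entry did not exist, so copying the full originals is optional):<br />

```shell
# Create per-user overrides that hide the listed autostart entries.
mkdir -p ~/.config/autostart
for f in cinnamon-settings-daemon-keyboard.desktop \
         cinnamon-settings-daemon-mouse.desktop \
         cinnamon-settings-daemon-xrandr.desktop; do
    printf '[Desktop Entry]\nHidden=true\n' > ~/.config/autostart/"$f"
done
```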
<br />
== Troubleshooting ==<br />
<br />
=== cinnamon-settings: No module named Image ===<br />
<br />
If ''cinnamon-settings'' does not start with the message that it cannot find a certain module, e.g. the Image module, it is likely that it uses outdated compiled files which refer to no longer existing file locations. In this case remove all {{ic|*.pyc}} files in {{ic|/usr/lib/cinnamon-settings}} and its sub-folders. See the [https://github.com/linuxmint/Cinnamon/issues/2495 upstream bug report].<br />
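The cleanup can be done with {{ic|find}}; a sketch, demonstrated on a throwaway tree (run the same {{ic|find}} against {{ic|/usr/lib/cinnamon-settings}} as root):<br />

```shell
# Demo on a throwaway tree; the real target is /usr/lib/cinnamon-settings.
mkdir -p /tmp/cs-demo/modules
touch /tmp/cs-demo/cs_themes.py /tmp/cs-demo/cs_themes.pyc \
      /tmp/cs-demo/modules/cs_display.pyc
find /tmp/cs-demo -name '*.pyc' -delete
find /tmp/cs-demo -type f    # only the .py source remains
```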
<br />
=== Video tearing ===<br />
<br />
Because {{Pkg|muffin}} is based upon {{Pkg|mutter}}, video tearing fixes for [[GNOME]] should also work in Cinnamon. See [[GNOME/Troubleshooting#Tear-free video with Intel HD Graphics]] for more information.<br />
<br />
=== Disable the NetworkManager applet ===<br />
<br />
Even if you do not use [[NetworkManager]] and remove the ''Network Manager'' applet from the default panel, Cinnamon will still load ''nm-applet'' and display it in the system tray.<br />
You cannot uninstall the package, because it is required by {{Pkg|cinnamon}} and {{Pkg|cinnamon-control-center}}, but you can still easily disable it. To do so, copy the autostart file from {{ic|/etc/xdg/autostart/nm-applet.desktop}} to {{ic|~/.config/autostart/nm-applet.desktop}}. Open it with a text editor and add {{ic|1=X-GNOME-Autostart-enabled=false}} at the end.<br />
<br />
Alternatively, you can disable it by creating the following symlink:<br />
<br />
$ ln -s /bin/true /usr/local/bin/nm-applet<br />
<br />
The ability to blacklist particular icons from the system tray (such as the ''nm-applet'' icon) has been [https://github.com/linuxmint/Cinnamon/issues/3318 requested upstream].</div>Gima