ArchWiki - User contributions [en], 2024-03-28T09:16:15Z (MediaWiki 1.41.0)https://wiki.archlinux.org/index.php?title=GRUB&diff=803690 GRUB, 2024-03-17T05:09:17Z<p>Recolic: I realized we must run grub-install after upgrading GRUB and grub.cfg. Example: an old GRUB binary does not know "fwsetup --is-supported", causing a boot loop</p>
<hr />
<div>[[Category:Boot loaders]]<br />
[[Category:GNU]]<br />
[[de:GRUB]]<br />
[[es:GRUB]]<br />
[[ja:GRUB]]<br />
[[pt:GRUB]]<br />
[[ru:GRUB]]<br />
[[zh-hans:GRUB]]<br />
{{Related articles start}}<br />
{{Related|Arch boot process}}<br />
{{Related|Master Boot Record}}<br />
{{Related|GUID Partition Table}}<br />
{{Related|Unified Extensible Firmware Interface}}<br />
{{Related|GRUB Legacy}}<br />
{{Related|/EFI examples}}<br />
{{Related|/Tips and tricks}}<br />
{{Related|Multiboot USB drive}}<br />
{{Related articles end}}<br />
[https://www.gnu.org/software/grub/ GRUB] (GRand Unified Bootloader) is a [[boot loader]]. The current GRUB is also referred to as '''GRUB 2'''. The original GRUB, or [[GRUB Legacy]], corresponds to versions 0.9x. This page exclusively describes GRUB 2.<br />
<br />
{{Note|In the entire article {{ic|''esp''}} denotes the mountpoint of the [[EFI system partition]] aka ESP.}}<br />
<br />
== Supported file systems ==<br />
<br />
GRUB bundles its own support for [https://www.gnu.org/software/grub/manual/grub/html_node/Features.html#Features multiple file systems], notably [[FAT32]], [[ext4]], [[Btrfs]] and [[XFS]]. See [[#Unsupported file systems]] for some caveats.<br />
<br />
{{Warning|File systems can gain new features that are not yet supported by GRUB, making them unsuitable for {{ic|/boot}} unless the incompatible features are disabled. This can typically be avoided by using a separate [[Partitioning#/boot|/boot partition]] with a universally supported file system such as [[FAT32]].}}<br />
<br />
== UEFI systems ==<br />
<br />
{{Note|<br />
* It is recommended to read and understand the [[Unified Extensible Firmware Interface]], [[Partitioning#GUID Partition Table]] and [[Arch boot process#Under UEFI]] pages.<br />
* When installing to use UEFI it is important to boot the installation media in UEFI mode, otherwise ''efibootmgr'' will not be able to add the GRUB UEFI boot entry. Installing to the [[#Default/fallback boot path|fallback boot path]] will still work even in BIOS mode since it does not touch the NVRAM.<br />
* To boot from a disk using UEFI, an EFI system partition is required. Follow [[EFI system partition#Check for an existing partition]] to find out if you have one already, otherwise you need to create it.<br />
* This whole article assumes that inserting additional GRUB2 modules via {{ic|insmod}} is possible. As discussed in [[#Shim-lock]], this is not the case on UEFI systems with Secure Boot enabled. If you want to use any additional GRUB module that is not included in the standard GRUB EFI file {{ic|grubx64.efi}} on a Secure Boot system, you have to re-generate the GRUB EFI {{ic|grubx64.efi}} with {{ic|grub-mkstandalone}} or reinstall GRUB using {{ic|grub-install}} with the additional GRUB modules included.<br />
}}<br />
<br />
=== Installation ===<br />
<br />
{{Note|<br />
* UEFI firmwares are not implemented consistently across manufacturers. The procedure described below is intended to work on a wide range of UEFI systems but those experiencing problems despite applying this method are encouraged to share detailed information, and if possible the workarounds found, for their hardware-specific case. A [[/EFI examples]] article has been provided for such cases.<br />
* The section assumes you are installing GRUB for x64 (64-bit) UEFI. For IA32 (32-bit) UEFI (not to be confused with 32-bit CPUs), replace {{ic|x86_64-efi}} with {{ic|i386-efi}} where appropriate. Follow the instructions in [[Unified Extensible Firmware Interface#Checking the firmware bitness]] to figure out your UEFI's bitness.<br />
}}<br />
<br />
{{Warning|Starting with {{Pkg|grub}} 2:2.06.r566.g857af0e17-1, booting on IA32 UEFI (target {{ic|i386-efi}}) is broken. See {{Bug|79098}}.}}<br />
<br />
First, [[install]] the packages {{Pkg|grub}} and {{Pkg|efibootmgr}}: ''GRUB'' is the boot loader while ''efibootmgr'' is used by the GRUB installation script to write boot entries to NVRAM. <br />
<br />
Then follow the below steps to install GRUB to your disk:<br />
<br />
# [[EFI system partition#Mount the partition|Mount the EFI system partition]] and in the remainder of this section, substitute {{ic|''esp''}} with its mount point.<br />
# Choose a boot loader identifier, here named {{ic|GRUB}}. A directory of that name will be created in {{ic|''esp''/EFI/}} to store the EFI binary and this is the name that will appear in the UEFI boot menu to identify the GRUB boot entry.<br />
# Execute the following command to install the GRUB EFI application {{ic|grubx64.efi}} to {{ic|''esp''/EFI/GRUB/}} and install its modules to {{ic|/boot/grub/x86_64-efi/}}. <br />
::{{Note|<br />
::* Make sure to install the packages and run the {{ic|grub-install}} command from the system in which GRUB will be installed as the boot loader. That means if you are booting from the live installation environment, you need to be inside the chroot when running {{ic|grub-install}}. If for some reason it is necessary to run {{ic|grub-install}} from outside of the installed system, append the {{ic|1=--boot-directory=}} option with the path to the mounted {{ic|/boot}} directory, e.g. {{ic|1=--boot-directory=/mnt/boot}}.<br />
::* Some motherboards cannot handle {{ic|bootloader-id}} with spaces in it.}}<br />
::{{bc|1=# grub-install --target=x86_64-efi --efi-directory=''esp'' --bootloader-id=GRUB}}<br />
<br />
After the above installation completed, the main GRUB directory is located at {{ic|/boot/grub/}}. Read [[/Tips and tricks#Alternative install method]] for how to specify an alternative location. Note that {{ic|grub-install}} also tries to [[#Create a GRUB entry in the firmware boot manager|create an entry in the firmware boot manager]], named {{ic|GRUB}} in the above example – this will, however, fail if your boot entries are full; use [[efibootmgr]] to remove unnecessary entries.<br />
<br />
Remember to [[#Generate the main configuration file]] after finalizing the configuration.<br />
<br />
{{Tip|If you use the option {{ic|--removable}} then GRUB will be installed to {{ic|''esp''/EFI/BOOT/BOOTX64.EFI}} (or {{ic|''esp''/EFI/BOOT/BOOTIA32.EFI}} for the {{ic|i386-efi}} target) and you will additionally be able to boot from the drive in case EFI variables are reset or you move the drive to another computer. Usually you can do this by selecting the drive itself, similar to how you would in BIOS mode. If dual booting with Windows, be aware that Windows usually places an EFI executable there, but its only purpose is to recreate the UEFI boot entry for Windows. If you are installing GRUB on a [[Mac]], you will have to use this option.<br />
A UEFI firmware update might delete the existing UEFI boot entries, so having the "removable" boot entry available is a useful fallback strategy.}}<br />
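As a concrete sketch of the fallback installation described in the tip above (assuming, hypothetically, that your ESP is mounted at {{ic|/efi}}; substitute your own mount point):<br />
<br />
```shell
# Sketch: install GRUB to the fallback boot path esp/EFI/BOOT/BOOTX64.EFI.
# --removable also skips writing a boot entry to the NVRAM.
# /efi is an assumed ESP mount point; substitute your own.
grub-install --target=x86_64-efi --efi-directory=/efi --removable
```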
<br />
{{Note|<br />
* {{ic|--efi-directory}} and {{ic|--bootloader-id}} are specific to GRUB UEFI; {{ic|--efi-directory}} replaces the deprecated {{ic|--root-directory}}.<br />
* You might note the absence of a ''device_path'' option (e.g. {{ic|/dev/sda}}) in the {{ic|grub-install}} command. Any ''device_path'' provided will be ignored by the GRUB UEFI install script, since UEFI boot loaders do not use an MBR bootcode or partition boot sector at all.<br />
}}<br />
<br />
See [[#UEFI|UEFI troubleshooting]] in case of problems. Additionally see [[/Tips and tricks#UEFI further reading]].<br />
<br />
=== Secure Boot support ===<br />
<br />
GRUB fully supports Secure Boot using either CA keys or shim; the installation command, however, differs depending on which you intend to use.<br />
<br />
{{Warning|<br />
* Incorrectly configuring [[Secure Boot]] can render your system unbootable. If for any reason you cannot boot after enabling secure boot then you should disable it in firmware and reboot the system.<br />
* Loading unnecessary modules in your boot loader can present a security risk; only use these commands if you need them.<br />
}}<br />
<br />
==== CA Keys ====<br />
<br />
To make use of CA Keys the command is:<br />
<br />
# grub-install --target=x86_64-efi --efi-directory=''esp'' --bootloader-id=GRUB --modules="tpm" --disable-shim-lock<br />
<br />
==== Shim-lock ====<br />
<br />
{{Note|Before following this section you should make sure you have followed the instructions at [[Secure Boot#shim]] and have {{Pkg|sbsigntools}} set up and ready to receive keys.}}<br />
<br />
When using Shim-lock, GRUB can only be successfully booted in Secure Boot mode if its EFI binary includes all of the modules necessary to read the filesystem containing the [[vmlinuz]] and [[initramfs]] images.<br />
<br />
Since GRUB version {{ic|2.06.r261.g2f4430cc0}}, loading modules in Secure Boot Mode via {{ic|insmod}} is no longer allowed, as this would violate the expectation to not sideload arbitrary code. If the GRUB modules are not embedded in the EFI binary, and GRUB tries to sideload/{{ic|insmod}} them, GRUB will fail to boot with the message: <br />
<br />
error: prohibited by secure boot policy<br />
<br />
Ubuntu, according to [https://git.launchpad.net/~ubuntu-core-dev/grub/+git/ubuntu/tree/debian/build-efi-images?h=debian/2.06-2ubuntu12 its official build script], embeds the following GRUB modules in its signed GRUB EFI binary {{ic|grubx64.efi}}: <br />
<br />
* [https://git.launchpad.net/~ubuntu-core-dev/grub/+git/ubuntu/tree/debian/build-efi-images?h=debian/2.06-2ubuntu12#n87 the "basic" modules], necessary for booting from a CD or from a simple-partitioned disk: {{ic|all_video}}, {{ic|boot}}, {{ic|btrfs}}, {{ic|cat}}, {{ic|chain}}, {{ic|configfile}}, {{ic|echo}}, {{ic|efifwsetup}}, {{ic|efinet}}, {{ic|ext2}}, {{ic|fat}}, {{ic|font}}, {{ic|gettext}}, {{ic|gfxmenu}}, {{ic|gfxterm}}, {{ic|gfxterm_background}}, {{ic|gzio}}, {{ic|halt}}, {{ic|help}}, {{ic|hfsplus}}, {{ic|iso9660}}, {{ic|jpeg}}, {{ic|keystatus}}, {{ic|loadenv}}, {{ic|loopback}}, {{ic|linux}}, {{ic|ls}}, {{ic|lsefi}}, {{ic|lsefimmap}}, {{ic|lsefisystab}}, {{ic|lssal}}, {{ic|memdisk}}, {{ic|minicmd}}, {{ic|normal}}, {{ic|ntfs}}, {{ic|part_apple}}, {{ic|part_msdos}}, {{ic|part_gpt}}, {{ic|password_pbkdf2}}, {{ic|png}}, {{ic|probe}}, {{ic|reboot}}, {{ic|regexp}}, {{ic|search}}, {{ic|search_fs_uuid}}, {{ic|search_fs_file}}, {{ic|search_label}}, {{ic|sleep}}, {{ic|smbios}}, {{ic|squash4}}, {{ic|test}}, {{ic|true}}, {{ic|video}}, {{ic|xfs}}, {{ic|zfs}}, {{ic|zfscrypt}}, {{ic|zfsinfo}}<br />
* [https://git.launchpad.net/~ubuntu-core-dev/grub/+git/ubuntu/tree/debian/build-efi-images?h=debian/2.06-2ubuntu12#n147 the "platform-specific" modules] for the x86_64-efi architecture, for example:<br />
** {{ic|play}}: to play sounds during boot<br />
** {{ic|cpuid}}: to query the CPU at boot<br />
** {{ic|tpm}}: to support Measured Boot / [[TPM|Trusted Platform Modules]]<br />
* [https://git.launchpad.net/~ubuntu-core-dev/grub/+git/ubuntu/tree/debian/build-efi-images?h=debian/2.06-2ubuntu12#n159 the "advanced" modules], for encrypted disks and complex volume layouts:<br />
** {{ic|cryptodisk}}: to boot from [[dm-crypt|plain-mode encrypted]] disks<br />
** {{ic|gcry_''algorithm''}}: to support particular hashing and encryption algorithms<br />
** {{ic|luks}}: to boot from [[LUKS]]-encrypted disks<br />
** {{ic|lvm}}: to boot from [[LVM]] logical volume disks<br />
** {{ic|mdraid09}}, {{ic|mdraid1x}}, {{ic|raid5rec}}, {{ic|raid6rec}}: to boot from [[RAID]] virtual disks<br />
<br />
You must construct your list of GRUB modules in the form of a shell variable, here denoted {{ic|GRUB_MODULES}}. You can use the [https://git.launchpad.net/~ubuntu-core-dev/grub/+git/ubuntu/tree/debian/build-efi-images latest Ubuntu script] as a starting point and trim away modules that are not necessary on your system. Omitting unneeded modules makes the boot process slightly faster and saves some space on the EFI system partition.<br />
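Such a variable might be assembled like this (a sketch: this hypothetical list assumes an ext4 root on a GPT-partitioned disk and includes {{ic|tpm}} for Measured Boot; adjust it to your own layout):<br />
<br />
```shell
# Hypothetical minimal module set: ext4 root (handled by the ext2 module)
# on a GPT disk, plus tpm for Measured Boot. Trim or extend as needed.
GRUB_MODULES="all_video boot linux normal configfile ext2 fat part_gpt search search_fs_uuid gzio tpm"
echo "$GRUB_MODULES"
```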
<br />
You also need a [https://github.com/rhboot/shim/blob/main/SBAT.md Secure Boot Advanced Targeting (SBAT)] file/section included in the EFI binary to improve security when GRUB is launched from the UEFI shim loader. This SBAT section contains metadata about the GRUB binary (version, maintainer, developer, upstream URL) and makes it easier for shim to block GRUB versions with known security vulnerabilities[https://eclypsium.com/2020/07/29/theres-a-hole-in-the-boot/#additional][https://wiki.ubuntu.com/SecurityTeam/KnowledgeBase/GRUB2SecureBootBypass2021], as explained in the [https://github.com/rhboot/shim/blob/main/SBAT.md UEFI shim bootloader secure boot life-cycle improvements] document from shim.<br />
<br />
The first-stage UEFI bootloader shim will fail to launch {{ic|grubx64.efi}} if the SBAT section from {{ic|grubx64.efi}} is missing! <br />
<br />
If GRUB is installed, a sample SBAT ''.csv'' file is provided under {{ic|/usr/share/grub/sbat.csv}}. <br />
<br />
Reinstall GRUB using the provided {{ic|/usr/share/grub/sbat.csv}} file and all the needed {{ic|GRUB_MODULES}} and sign it:<br />
<br />
# grub-install --target=x86_64-efi --efi-directory=''esp'' --modules=${GRUB_MODULES} --sbat /usr/share/grub/sbat.csv<br />
# sbsign --key MOK.key --cert MOK.crt --output ''esp''/EFI/GRUB/grubx64.efi ''esp''/EFI/GRUB/grubx64.efi<br />
# cp ''esp''/EFI/GRUB/grubx64.efi ''esp''/EFI/BOOT/grubx64.efi<br />
<br />
Reboot, select the key in ''MokManager'', and Secure Boot should be working.<br />
<br />
==== Using Secure Boot ====<br />
<br />
After installation see [[Secure Boot#Implementing Secure Boot]] for instructions on enabling it. <br />
<br />
If you are using the CA Keys method then key management, enrolment and file signing can be automated by using {{pkg|sbctl}}, see [[Secure Boot#Assisted process with sbctl]] for details.<br />
<br />
== BIOS systems ==<br />
<br />
=== GUID Partition Table (GPT) specific instructions ===<br />
<br />
On a BIOS/[[GPT]] configuration, a [https://www.gnu.org/software/grub/manual/grub/html_node/BIOS-installation.html#BIOS-installation BIOS boot partition] is required. GRUB embeds its {{ic|core.img}} into this partition.<br />
<br />
{{Note|<br />
* Before attempting this method keep in mind that not all systems will be able to support this partitioning scheme. Read more on [[Partitioning#GUID Partition Table]].<br />
* The BIOS boot partition is only needed by GRUB on a BIOS/GPT setup. On a BIOS/MBR setup, GRUB uses the post-MBR gap for embedding the {{ic|core.img}}. On GPT, however, there is no guaranteed unused space before the first partition.<br />
* For [[UEFI]] systems this extra partition is not required, since no embedding of boot sectors takes place in that case. However, UEFI systems still require an [[EFI system partition]].<br />
}}<br />
<br />
Create a 1 MiB partition ({{ic|1=+1M}} with ''fdisk'' or ''gdisk'') on the disk with no file system and with partition type GUID {{ic|21686148-6449-6E6F-744E-656564454649}}.<br />
<br />
* Select partition type {{ic|BIOS boot}} for [[fdisk]].<br />
* Select partition type code {{ic|ef02}} for [[gdisk]].<br />
* For [[parted]] set/activate the flag {{ic|bios_grub}} on the partition.<br />
<br />
This partition can be in any position, but has to be within the first 2 TiB of the disk, and it needs to be created before GRUB installation. When the partition is ready, install the boot loader as per the instructions below.<br />
<br />
The space before the first partition can also be used as the BIOS boot partition, though it will be out of the GPT alignment specification. Since the partition will not be regularly accessed, performance issues can be disregarded, though some disk utilities will display a warning about it. In ''fdisk'' or ''gdisk'' create a new partition starting at sector 34 and spanning to sector 2047, and set the type. To keep the viewable partitions in order, consider adding this partition last.<br />
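With ''sgdisk'' (the scriptable counterpart to ''gdisk''), the two variants might look like this (a sketch; {{ic|/dev/sdX}} and the partition number {{ic|4}} are placeholders):<br />
<br />
```shell
# Variant 1: a 1 MiB BIOS boot partition at the default (aligned) location
sgdisk --new=4:0:+1M --typecode=4:ef02 /dev/sdX
# Variant 2: squeeze it into the gap before the first partition (sectors 34-2047)
sgdisk --new=4:34:2047 --typecode=4:ef02 /dev/sdX
```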
<br />
=== Master Boot Record (MBR) specific instructions ===<br />
<br />
Usually the post-MBR gap (after the 512-byte [[MBR]] region and before the start of the first partition) in MBR-partitioned systems is 31 KiB when DOS compatibility cylinder alignment issues are satisfied in the partition table. However, a post-MBR gap of about 1 to 2 MiB is recommended to provide sufficient room for embedding GRUB's {{ic|core.img}} ({{Bug|24103}}). It is advisable to use a partitioning tool that supports 1 MiB [[Partitioning#Partition alignment|partition alignment]] to obtain this space as well as to satisfy other non-512-byte-sector issues (which are unrelated to the embedding of {{ic|core.img}}).<br />
<br />
=== Installation ===<br />
<br />
[[Install]] the {{Pkg|grub}} package. (It will replace {{AUR|grub-legacy}} if that is already installed.) Then do:<br />
<br />
# grub-install --target=i386-pc ''/dev/sdX''<br />
<br />
where {{ic|i386-pc}} is deliberately used regardless of your actual architecture, and {{ic|''/dev/sdX''}} is the '''disk''' ('''not a partition''') where GRUB is to be installed, for example {{ic|/dev/sda}}, {{ic|/dev/nvme0n1}} or {{ic|/dev/mmcblk0}}. See [[Device file#Block device names]] for a description of the block device naming scheme.<br />
<br />
Now you must [[#Generate the main configuration file|generate the main configuration file]].<br />
<br />
If you use [[LVM]] for your {{ic|/boot}}, you can install GRUB on multiple physical disks.<br />
<br />
{{Tip|See [[/Tips and tricks#Alternative installation methods]] for other ways to install GRUB, such as to a USB stick.}}<br />
<br />
See {{man|8|grub-install}} and [https://www.gnu.org/software/grub/manual/grub/html_node/BIOS-installation.html#BIOS-installation GRUB Manual] for more details on the {{ic|grub-install}} command.<br />
<br />
== Configuration ==<br />
<br />
On an installed system, GRUB loads the {{ic|/boot/grub/grub.cfg}} configuration file at each boot. You can follow [[#Generated grub.cfg]] to generate it with a tool, or [[#Custom grub.cfg]] to create it manually.<br />
<br />
=== Generated grub.cfg ===<br />
<br />
This section only covers editing the {{ic|/etc/default/grub}} configuration file. See [[/Tips and tricks]] for more information.<br />
<br />
{{Note|Remember to always [[#Generate the main configuration file|generate the main configuration file]] after making changes to {{ic|/etc/default/grub}} and/or files in {{ic|/etc/grub.d/}}.}}<br />
<br />
{{Warning|Update/reinstall the boot loader (see [[#UEFI systems]] or [[#BIOS systems]]) if a new GRUB version changes the syntax of the configuration file: mismatching configuration can result in an unbootable system.}}<br />
<br />
==== Generate the main configuration file ====<br />
<br />
After the installation, the main configuration file {{ic|/boot/grub/grub.cfg}} needs to be generated. The generation process can be influenced by a variety of options in {{ic|/etc/default/grub}} and scripts in {{ic|/etc/grub.d/}}. For the list of options in {{ic|/etc/default/grub}} and a concise description of each refer to GNU's [https://www.gnu.org/software/grub/manual/grub/html_node/Simple-configuration.html documentation].<br />
<br />
If you have not done additional configuration, the automatic generation will determine the root file system of the system to boot for the configuration file. For that to succeed, it is important that the system is either booted or chrooted into.<br />
<br />
{{Note|1=<nowiki></nowiki><br />
* The default file path is {{ic|/boot/grub/grub.cfg}}, not {{ic|/boot/grub/i386-pc/grub.cfg}}.<br />
* If you are trying to run ''grub-mkconfig'' in a [[chroot]] or [[systemd-nspawn]] container, you might notice that it does not work: {{ic|grub-probe: error: failed to get canonical path of ''/dev/sdaX''}}. In this case, try using [[arch-chroot]] as described in the [https://bbs.archlinux.org/viewtopic.php?pid=1225067#p1225067 BBS post].<br />
}}<br />
<br />
Use the ''grub-mkconfig'' tool to generate {{ic|/boot/grub/grub.cfg}}:<br />
<br />
# grub-mkconfig -o /boot/grub/grub.cfg<br />
<br />
By default the generation scripts automatically add menu entries for all installed Arch Linux [[kernel]]s to the generated configuration.<br />
<br />
{{Tip|<br />
* Re-run ''grub-install'' whenever you regenerate an existing {{ic|grub.cfg}} after a GRUB package upgrade: the newly generated {{ic|grub.cfg}} may use GRUB functions that the previously installed boot binary does not know, causing unexpected behavior (for example, an old GRUB binary does not know {{ic|fwsetup --is-supported}}, causing a boot loop).<br />
* After installing or removing a [[kernel]], you just need to re-run the above ''grub-mkconfig'' command.<br />
* For tips on managing multiple GRUB entries, for example when using both {{Pkg|linux}} and {{Pkg|linux-lts}} kernels, see [[/Tips and tricks#Multiple entries]].<br />
}}<br />
<br />
To automatically add entries for other installed operating systems, see [[#Detecting other operating systems]].<br />
<br />
You can add additional custom menu entries by editing {{ic|/etc/grub.d/40_custom}} and re-generating {{ic|/boot/grub/grub.cfg}}. Or you can create {{ic|/boot/grub/custom.cfg}} and add them there. Changes to {{ic|/boot/grub/custom.cfg}} do not require re-running ''grub-mkconfig'', since {{ic|/etc/grub.d/41_custom}} adds the necessary {{ic|source}} statement to the generated configuration file.<br />
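For example, the following {{ic|/boot/grub/custom.cfg}} adds two simple entries (a minimal sketch; any valid GRUB commands can be used inside a ''menuentry''):<br />
<br />
```
menuentry "System shutdown" {
	echo "System shutting down..."
	halt
}

menuentry "System restart" {
	echo "System rebooting..."
	reboot
}
```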
<br />
{{Tip|{{ic|/etc/grub.d/40_custom}} can be used as a template to create {{ic|/etc/grub.d/''nn''_custom}}, where {{ic|''nn''}} defines the precedence, indicating the order in which the script is executed. The order in which scripts are executed determines the placement in the GRUB boot menu. {{ic|''nn''}} should be greater than {{ic|06}} to ensure the necessary scripts are executed first.}}<br />
<br />
See [[#Boot menu entry examples]] for custom menu entry examples.<br />
<br />
==== Detecting other operating systems ====<br />
<br />
To have ''grub-mkconfig'' search for other installed systems and automatically add them to the menu, [[install]] the {{Pkg|os-prober}} package and [[mount]] the partitions from which the other systems boot. Then re-run ''grub-mkconfig''. If you get the following output: {{ic|Warning: os-prober will not be executed to detect other bootable partitions}} then edit {{ic|/etc/default/grub}} and add/uncomment:<br />
<br />
GRUB_DISABLE_OS_PROBER=false<br />
<br />
Then try again.<br />
<br />
{{Note| <br />
* The exact mount point does not matter, ''os-prober'' reads the {{ic|mtab}} to identify places to search for bootable entries.<br />
* Remember to mount the partitions each time you run ''grub-mkconfig'' in order to include the other operating systems every time.<br />
* ''os-prober'' might not work properly when run in a chroot. Try again after rebooting into the system if you experience this. <br />
}}<br />
<br />
{{Tip|You might also want GRUB to remember the last chosen boot entry, see [[/Tips and tricks#Recall previous entry]].}}<br />
<br />
===== Windows =====<br />
<br />
For Windows installed in UEFI mode, make sure the [[EFI system partition]] containing the Windows Boot Manager ({{ic|bootmgfw.efi}}) is mounted. Run {{ic|os-prober}} as root to detect and generate an entry for it.<br />
<br />
For Windows installed in BIOS mode, mount the Windows ''system partition'' (its [[Persistent block device naming#by-label|file system label]] should be {{ic|System Reserved}} or {{ic|SYSTEM}}). Run {{ic|os-prober}} as root to detect and generate an entry for it.<br />
<br />
{{Note|For Windows installed in BIOS mode:<br />
<br />
* NTFS partitions may not always be detected when mounted with the default Linux drivers. If GRUB is not detecting it, try installing [[NTFS-3G]] and remounting.<br />
{{Out of date|Since Windows 7, {{ic|bootmgr}} is placed in the [[Wikipedia:System partition and boot partition#Microsoft definition|system partition]] which is not encrypted.}}<br />
* Encrypted Windows partitions may need to be decrypted before mounting. For BitLocker, this can be done with [[cryptsetup]] or {{AUR|dislocker}}. This should be sufficient for {{Pkg|os-prober}} to add the correct entry.<br />
}}<br />
<br />
==== Additional arguments ====<br />
<br />
To pass custom additional arguments to the Linux image, you can set the {{ic|GRUB_CMDLINE_LINUX}} and {{ic|GRUB_CMDLINE_LINUX_DEFAULT}} variables in {{ic|/etc/default/grub}}. The two are appended to each other and passed to the kernel when generating regular boot entries. For the ''recovery'' boot entry, only {{ic|GRUB_CMDLINE_LINUX}} is used in the generation.<br />
<br />
It is not necessary to use both, but doing so can be useful. For example, you could use {{ic|1=GRUB_CMDLINE_LINUX_DEFAULT="resume=UUID=''uuid-of-swap-partition'' quiet"}} where {{ic|''uuid-of-swap-partition''}} is the [[UUID]] of your swap partition to enable resume after [[hibernation]]. The recovery boot entry would then be generated without the {{ic|resume}} parameter and without {{ic|quiet}} suppressing kernel messages, while the regular menu entries would include both options.<br />
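As an illustration, a corresponding {{ic|/etc/default/grub}} fragment could look like this (the {{ic|loglevel}} value and the swap UUID placeholder are hypothetical):<br />
<br />
```
# Passed to every generated entry, including the recovery entry:
GRUB_CMDLINE_LINUX="loglevel=3"
# Appended only to the regular (non-recovery) entries:
GRUB_CMDLINE_LINUX_DEFAULT="resume=UUID=uuid-of-swap-partition quiet"
```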
<br />
By default ''grub-mkconfig'' determines the [[UUID]] of the root filesystem for the configuration. To disable this, uncomment {{ic|1=GRUB_DISABLE_LINUX_UUID=true}}.<br />
<br />
For generating the GRUB recovery entry you have to ensure that {{ic|GRUB_DISABLE_RECOVERY}} is not set to {{ic|true}} in {{ic|/etc/default/grub}}.<br />
<br />
See [[Kernel parameters]] for more info.<br />
<br />
==== Setting the top-level menu entry ====<br />
<br />
By default, ''grub-mkconfig'' orders the included kernels by version (using {{ic|sort -V}}, newest first) and uses the first kernel in that order as the top-level entry. Since {{ic|/boot/vmlinuz-linux-lts}} version-sorts after {{ic|/boot/vmlinuz-linux}}, if you have both {{Pkg|linux-lts}} and {{Pkg|linux}} installed, the LTS kernel will be the top-level menu entry, which may not be desirable. This can be overridden by specifying {{ic|1=GRUB_TOP_LEVEL="''path_to_kernel''"}} in {{ic|/etc/default/grub}}. For example, to make the regular kernel be the top-level menu entry, you can use {{ic|1=GRUB_TOP_LEVEL="/boot/vmlinuz-linux"}}.<br />
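The resulting order can be previewed in a shell (a sketch; ''grub-mkconfig'' effectively orders kernels newest-first, which {{ic|sort -V -r}} reproduces):<br />
<br />
```shell
# Reverse version sort: the "greatest" version string comes first.
# The -lts suffix sorts after the bare name, so the LTS kernel ends up on top.
printf '%s\n' /boot/vmlinuz-linux /boot/vmlinuz-linux-lts | sort -V -r
```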
<br />
==== LVM ====<br />
<br />
{{Merge|#Installation|grub-mkconfig is capable of detecting that it needs the {{ic|lvm}} module, specifying it in {{ic|GRUB_PRELOAD_MODULES}} is not required. Move warning to [[#Installation]] & [[#Installation_2]] or create a [[Help:Style#"Known issues" section|Known issues section]] and document it there.}}<br />
<br />
{{Warning|GRUB does not support thin-provisioned logical volumes.}}<br />
<br />
If you use [[LVM]] for your {{ic|/boot}} or {{ic|/}} root partition, make sure that the {{ic|lvm}} module is preloaded:<br />
<br />
{{hc|/etc/default/grub|2=<br />
GRUB_PRELOAD_MODULES="... lvm"<br />
}}<br />
<br />
==== RAID ====<br />
<br />
{{Merge|#Installation|grub-mkconfig is capable of detecting that it needs the {{ic|mdraid09}} and/or {{ic|mdraid1x}} modules, specifying them in {{ic|GRUB_PRELOAD_MODULES}} is not required. Summarize the double grub-install in a note and move it to [[#Installation]]; move {{ic|set root}} stuff to [[#Custom grub.cfg]].}}<br />
<br />
GRUB provides convenient handling of [[RAID]] volumes. You need to load the GRUB modules {{ic|mdraid09}} or {{ic|mdraid1x}}, which allow you to address the volume natively:<br />
<br />
{{hc|/etc/default/grub|2=<br />
GRUB_PRELOAD_MODULES="... mdraid09 mdraid1x"<br />
}}<br />
<br />
For example, {{ic|/dev/md0}} becomes:<br />
<br />
set root=(md/0)<br />
<br />
whereas a partitioned RAID volume (e.g. {{ic|/dev/md0p1}}) becomes:<br />
<br />
set root=(md/0,1)<br />
<br />
To install GRUB on BIOS systems when using RAID1 for the {{ic|/boot}} partition (or when {{ic|/boot}} is housed on a RAID1 root partition), simply run ''grub-install'' on both of the drives, such as:<br />
<br />
# grub-install --target=i386-pc --debug /dev/sda<br />
# grub-install --target=i386-pc --debug /dev/sdb<br />
<br />
where the RAID1 array housing {{ic|/boot}} spans {{ic|/dev/sda}} and {{ic|/dev/sdb}}.<br />
<br />
{{Note|GRUB supports booting from [[Btrfs]] RAID 0/1/10, but ''not'' RAID 5/6. You may use [[mdadm]] for RAID 5/6, which is supported by GRUB.}}<br />
<br />
==== Encrypted /boot ====<br />
<br />
GRUB also has special support for booting with an encrypted {{ic|/boot}}. This is done by unlocking a [[LUKS]] block device in order to read its configuration and load any [[initramfs]] and [[kernel]] from it. This option tries to solve the issue of having an [[dm-crypt/Specialties#Securing the unencrypted boot partition|unencrypted boot partition]].<br />
<br />
{{Tip|{{ic|/boot}} is '''not''' required to be kept in a separate partition; it may also stay under the system's root {{ic|/}} directory tree.}}<br />
<br />
{{Warning|GRUB 2.12rc1 has limited support for LUKS2. See the [[#LUKS2]] section below for details.}}<br />
<br />
To enable this feature encrypt the partition with {{ic|/boot}} residing on it using [[LUKS]] as normal. Then add the following option to {{ic|/etc/default/grub}}:<br />
<br />
{{hc|/etc/default/grub|output=<br />
GRUB_ENABLE_CRYPTODISK=y<br />
}}<br />
<br />
This option is used by ''grub-install'' to generate the GRUB {{ic|core.img}}.<br />
<br />
Make sure to [[#Installation|reinstall GRUB]] after modifying this option or encrypting the partition.<br />
<br />
Without further changes you will be prompted twice for a passphrase: the first for GRUB to unlock the {{ic|/boot}} mount point in early boot, the second to unlock the root filesystem itself as implemented by the initramfs. You can use a [[Dm-crypt/Device encryption#With a keyfile embedded in the initramfs|keyfile]] to avoid this.<br />
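The keyfile approach can be sketched as follows (assumptions: {{ic|/dev/sdX2}} is a placeholder for the LUKS partition and mkinitcpio with the {{ic|encrypt}} hook is used; see the linked page for the authoritative steps):<br />
<br />
```shell
# Create a random keyfile, restrict access, and enroll it as a LUKS key.
dd bs=512 count=4 if=/dev/random of=/crypto_keyfile.bin iflag=fullblock
chmod 600 /crypto_keyfile.bin
cryptsetup luksAddKey /dev/sdX2 /crypto_keyfile.bin
# Then embed it in the initramfs by adding to /etc/mkinitcpio.conf:
#   FILES=(/crypto_keyfile.bin)
# and regenerate the initramfs (e.g. mkinitcpio -P).
```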
<br />
{{Warning|<br />
* If you want to [[#Generate the main configuration file|generate the main configuration file]], make sure that {{ic|/boot}} is mounted.<br />
* In order to perform system updates involving the {{ic|/boot}} mount point, ensure that the encrypted {{ic|/boot}} is unlocked and mounted before performing an update. With a separate {{ic|/boot}} partition, this may be accomplished automatically on boot by using [[crypttab]] with a [[Dm-crypt/Device encryption#With a keyfile embedded in the initramfs|keyfile]].<br />
}}<br />
<br />
{{Note|<br />
* If you use a special keymap, a default GRUB installation will not know it. This is relevant for how to enter the passphrase to unlock the LUKS blockdevice. See [[/Tips and tricks#Manual configuration of core image for early boot]].<br />
* If you experience issues getting the prompt for a password to display (errors regarding cryptouuid, cryptodisk, or "device not found"), try reinstalling GRUB and appending {{ic|1=--modules="part_gpt part_msdos"}} to the end of your {{ic|grub-install}} command.<br />
}}<br />
<br />
{{Tip|1=You can use [https://bbs.archlinux.org/viewtopic.php?id=234607 pacman hooks] to automount your {{ic|/boot}} when upgrades need to access related files.}}<br />
<br />
===== LUKS2 =====<br />
<br />
Use {{ic|grub-install}} as described in the [[#Installation]] section to create a bootable GRUB image with LUKS support. Note the following caveats:<br />
<br />
* Initial LUKS2 support was added to GRUB 2.06, but with several limitations that are only partially addressed in GRUB 2.12rc1. See [https://savannah.gnu.org/bugs/?55093 GRUB bug #55093].<br />
* Since GRUB 2.12rc1, {{ic|grub-install}} can create a core image to unlock LUKS2. However, it only supports PBKDF2, not Argon2.<br />
* Argon2id (''cryptsetup'' default) and Argon2i PBKDFs are not supported ([https://savannah.gnu.org/bugs/?59409 GRUB bug #59409]), only PBKDF2 is.<br />
<br />
:{{Tip|You can use {{AUR|grub-improved-luks2-git}}, which has been patched for LUKS2 as well as Argon support. Note that the package's Argon support requires a UEFI system.[https://aur.archlinux.org/packages/grub-improved-luks2-git#comment-911119]}}<br />
<br />
{{Note|Before GRUB 2.12rc1, you had to manually create an EFI binary using {{ic|grub-mkimage}} with a custom GRUB config file, for example {{ic|/boot/grub/grub-pre.cfg}}, containing calls to {{ic|cryptomount}}, {{ic|insmod normal}} and {{ic|normal}}. This is no longer needed; {{ic|grub-install}} is sufficient. However, you may have to run {{ic|grub-mkconfig -o /boot/grub/grub.cfg}} at least once after upgrading from 2.06.}}<br />
<br />
If you enter an invalid passphrase during boot and end up at the GRUB rescue shell, try {{ic|cryptomount -a}} to mount all (hopefully only one) encrypted partitions or use {{ic|cryptomount -u $crypto_uuid}} to mount a specific one. Then proceed with {{ic|insmod normal}} and {{ic|normal}} as usual.<br />
<br />
If you enter a correct passphrase but an {{ic|Invalid passphrase}} error is immediately returned, make sure that the right cryptographic modules are specified. Use {{ic|cryptsetup luksDump ''/dev/nvme0n1p2''}} and check that the hash function (SHA-256, SHA-512) matches the installed modules ({{ic|gcry_sha256}}, {{ic|gcry_sha512}}) and that the PBKDF algorithm is pbkdf2. The hash and PBKDF algorithms can be changed for existing keys with {{ic|cryptsetup luksConvertKey --hash ''sha256'' --pbkdf pbkdf2 ''/dev/nvme0n1p2''}}. Under normal circumstances it should take a few seconds before the passphrase is processed.<br />
<br />
=== Custom grub.cfg ===<br />
<br />
{{Expansion|Add instructions on how to write a custom {{ic|/boot/grub/grub.cfg}}. See [[User:Eschwartz/Grub]] for a proposed draft.|section=Manually generate grub.cfg}}<br />
<br />
This section describes the manual creation of GRUB boot entries in {{ic|/boot/grub/grub.cfg}} instead of relying on ''grub-mkconfig''.<br />
<br />
A basic GRUB config file uses the following options:<br />
<br />
* {{ic|(hd''X'',''Y'')}} is partition ''Y'' on disk ''X''; partition numbers start at 1, disk numbers at 0<br />
* {{ic|1=set default=''N''}} sets the default boot entry ''N'' that is booted after the timeout for user action expires<br />
* {{ic|1=set timeout=''M''}} sets the time ''M'' in seconds to wait for a user selection before the default entry is booted<br />
* {{ic|<nowiki>menuentry "title" {entry options}</nowiki>}} defines a boot entry titled {{ic|title}}<br />
* {{ic|1=set root=(hd''X'',''Y'')}} sets the boot partition, where the kernel and GRUB modules are stored (boot need not be a separate partition; it may simply be a directory under the root partition {{ic|/}})<br />
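<br />
Put together, a minimal hand-written {{ic|grub.cfg}} using these options might look like the following sketch (the disk numbers, kernel and initramfs paths are examples that must be adapted to your system):<br />
<br />
{{bc|1=<br />
set default=0<br />
set timeout=5<br />
<br />
menuentry "Arch Linux" {<br />
    set root=(hd0,2)<br />
    linux /boot/vmlinuz-linux root=/dev/sda2 rw<br />
    initrd /boot/initramfs-linux.img<br />
}<br />
}}<br />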
<br />
==== LoaderDevicePartUUID ====<br />
<br />
For GRUB to set the {{ic|LoaderDevicePartUUID}} UEFI variable required by {{man|8|systemd-gpt-auto-generator}} for [[systemd#GPT partition automounting|GPT partition automounting]], load the {{ic|bli}} module in {{ic|grub.cfg}}:<br />
<br />
{{bc|1=<br />
if [ "$grub_platform" = "efi" ]; then<br />
insmod bli<br />
fi<br />
}}<br />
<br />
==== Boot menu entry examples ====<br />
<br />
{{Tip|These boot entries can also be used when using a {{ic|/boot/grub/grub.cfg}} generated by ''grub-mkconfig''. Add them to {{ic|/etc/grub.d/40_custom}} and [[#Generate the main configuration file|re-generate the main configuration file]] or add them to {{ic|/boot/grub/custom.cfg}}.}}<br />
<br />
For tips on managing multiple GRUB entries, for example when using both {{Pkg|linux}} and {{Pkg|linux-lts}} kernels, see [[/Tips and tricks#Multiple entries]].<br />
<br />
For [[Archiso]] and [https://archboot.com Archboot] boot menu entries see [[Multiboot USB drive#Boot entries]].<br />
<br />
===== GRUB commands =====<br />
<br />
====== "Shutdown" menu entry ======<br />
<br />
{{bc|<br />
menuentry "System shutdown" {<br />
echo "System shutting down..."<br />
halt<br />
}<br />
}}<br />
<br />
====== "Restart" menu entry ======<br />
<br />
{{bc|<br />
menuentry "System restart" {<br />
echo "System rebooting..."<br />
reboot<br />
}<br />
}}<br />
<br />
====== "UEFI Firmware Settings" menu entry ======<br />
<br />
{{bc|1=<br />
if [ ${grub_platform} == "efi" ]; then<br />
menuentry 'UEFI Firmware Settings' --id 'uefi-firmware' {<br />
fwsetup<br />
}<br />
fi<br />
}}<br />
<br />
===== EFI binaries =====<br />
<br />
When launched in UEFI mode, GRUB can chainload other EFI binaries.<br />
<br />
{{Tip|1=To show these menu entries only when GRUB is launched in UEFI mode, enclose them in the following {{ic|if}} statement:<br />
<br />
{{bc|1=<br />
if [ ${grub_platform} == "efi" ]; then<br />
''place UEFI-only menu entries here''<br />
fi<br />
}}<br />
<br />
}}<br />
<br />
====== UEFI Shell ======<br />
<br />
You can launch [[Unified Extensible Firmware Interface#UEFI Shell|UEFI Shell]] by placing it in the root of the [[EFI system partition]] and adding this menu entry:<br />
<br />
{{bc|1=<br />
menuentry "UEFI Shell" {<br />
insmod fat<br />
insmod chain<br />
search --no-floppy --set=root --file /shellx64.efi<br />
chainloader /shellx64.efi<br />
}<br />
}}<br />
<br />
====== gdisk ======<br />
<br />
Download the [[gdisk#gdisk EFI application|gdisk EFI application]] and copy {{ic|gdisk_x64.efi}} to {{ic|''esp''/EFI/tools/}}.<br />
<br />
{{bc|1=<br />
menuentry "gdisk" {<br />
insmod fat<br />
insmod chain<br />
search --no-floppy --set=root --file /EFI/tools/gdisk_x64.efi<br />
chainloader /EFI/tools/gdisk_x64.efi<br />
}<br />
}}<br />
<br />
====== Chainloading a unified kernel image ======<br />
<br />
If you have a [[unified kernel image]] generated by following [[Secure Boot]] or through other means, you can add it to the boot menu. For example:<br />
<br />
{{bc|1=<br />
menuentry "Arch Linux" {<br />
insmod fat<br />
insmod chain<br />
search --no-floppy --set=root --fs-uuid ''FILESYSTEM_UUID''<br />
chainloader /EFI/Linux/arch-linux.efi<br />
}<br />
}}<br />
<br />
===== Dual-booting =====<br />
<br />
====== GNU/Linux ======<br />
<br />
Assuming that the other distribution is on partition {{ic|sda2}}:<br />
<br />
{{bc|1=<br />
menuentry "Other Linux" {<br />
set root=(hd0,2)<br />
linux /boot/vmlinuz (add other options here as required)<br />
initrd /boot/initrd.img (if the other kernel uses/needs one)<br />
}<br />
}}<br />
<br />
Alternatively, let GRUB search for the right partition by UUID or file system label:<br />
<br />
{{bc|1=<br />
menuentry "Other Linux" {<br />
# assuming that UUID is 763A-9CB6<br />
search --no-floppy --set=root --fs-uuid 763A-9CB6<br />
<br />
# search by label OTHER_LINUX (make sure that partition label is unambiguous)<br />
#search --no-floppy --set=root --label OTHER_LINUX<br />
<br />
linux /boot/vmlinuz (add other options here as required, for example: root=UUID=763A-9CB6)<br />
initrd /boot/initrd.img (if the other kernel uses/needs one)<br />
}<br />
}}<br />
<br />
If the other distribution already has a valid {{ic|/boot}} folder with an installed GRUB, {{ic|grub.cfg}}, kernel and initramfs, GRUB can be instructed to load these other {{ic|grub.cfg}} files on-the-fly during boot. For example, for {{ic|hd0}} and the fourth GPT partition:<br />
<br />
{{bc|1=<br />
menuentry "configfile hd0,gpt4" {<br />
insmod part_gpt<br />
insmod btrfs<br />
insmod ext2<br />
set root='hd0,gpt4'<br />
configfile /boot/grub/grub.cfg<br />
}<br />
}}<br />
<br />
When choosing this entry, GRUB loads the {{ic|grub.cfg}} file from the other volume and displays that menu. Any environment variable changes made by the commands in the file will not be preserved after {{ic|configfile}} returns. Press {{ic|Esc}} to return to the first GRUB menu.<br />
<br />
====== Windows installed in UEFI/GPT mode ======<br />
<br />
This menu entry determines where the Windows boot loader resides and chainloads it when the entry is selected. The main task here is finding the EFI system partition and running the boot loader from it.<br />
<br />
{{Note|This menuentry will work only in UEFI boot mode and only if the Windows bitness matches the UEFI bitness. It will not work in BIOS installed GRUB. See [[Dual boot with Windows#Windows UEFI vs BIOS limitations]] and [[Dual boot with Windows#Bootloader UEFI vs BIOS limitations]] for more information.}}<br />
<br />
{{bc|1=<br />
if [ "${grub_platform}" == "efi" ]; then<br />
menuentry "Microsoft Windows Vista/7/8/8.1 UEFI/GPT" {<br />
insmod part_gpt<br />
insmod fat<br />
insmod chain<br />
search --no-floppy --fs-uuid --set=root $hints_string $fs_uuid<br />
chainloader /EFI/Microsoft/Boot/bootmgfw.efi<br />
}<br />
fi<br />
}}<br />
<br />
where {{ic|$hints_string}} and {{ic|$fs_uuid}} are obtained with the following two commands.<br />
<br />
The {{ic|$fs_uuid}} command determines the UUID of the EFI system partition:<br />
<br />
{{hc|1=# grub-probe --target=fs_uuid ''esp''/EFI/Microsoft/Boot/bootmgfw.efi|2=<br />
1ce5-7f28<br />
}}<br />
<br />
Alternatively one can run {{ic|lsblk --fs}} and read the UUID of the EFI system partition from there.<br />
<br />
The {{ic|$hints_string}} command will determine the location of the EFI system partition, in this case hard drive 0:<br />
<br />
{{hc|1=# grub-probe --target=hints_string ''esp''/EFI/Microsoft/Boot/bootmgfw.efi|2=<br />
--hint-bios=hd0,gpt1 --hint-efi=hd0,gpt1 --hint-baremetal=ahci0,gpt1<br />
}}<br />
<br />
These two commands assume the ESP used by Windows is mounted at {{ic|''esp''}}. There might be case differences in the path to the Windows EFI file.<br />
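<br />
With the example outputs above substituted in, the {{ic|search}} line of the menu entry would read:<br />
<br />
{{bc|1=<br />
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,gpt1 --hint-efi=hd0,gpt1 --hint-baremetal=ahci0,gpt1 1ce5-7f28<br />
}}<br />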
<br />
====== Windows installed in BIOS/MBR mode ======<br />
<br />
{{Note|GRUB supports booting {{ic|bootmgr}} directly and [https://www.gnu.org/software/grub/manual/grub.html#Chain_002dloading chainloading] of partition boot sector is no longer required to boot Windows in a BIOS/MBR setup.}}<br />
<br />
{{Warning|It is the '''system partition''' that has {{ic|/bootmgr}}, not your "real" Windows partition (usually {{ic|C:}}). The system partition's [[Persistent block device naming#by-label|filesystem label]] is {{ic|System Reserved}} or {{ic|SYSTEM}} and the partition is only about 100 to 549 MiB in size. See [[Wikipedia:System partition and boot partition]] for more information.}}<br />
<br />
Throughout this section, it is assumed your Windows partition is {{ic|/dev/sda1}}. If Windows is on a different partition, adjust every instance of {{ic|hd0,msdos1}} accordingly.<br />
<br />
{{Note|These menu entries will work only in BIOS boot mode. They will not work with a UEFI-installed GRUB. See [[Dual boot with Windows#Windows UEFI vs BIOS limitations]] and [[Dual boot with Windows#Bootloader UEFI vs BIOS limitations]].}}<br />
<br />
In both examples {{ic|''XXXX-XXXX''}} is the filesystem UUID, which can be found with the command {{ic|lsblk --fs}}.<br />
<br />
For Windows Vista/7/8/8.1/10:<br />
<br />
{{bc|1=<br />
if [ "${grub_platform}" == "pc" ]; then<br />
menuentry "Microsoft Windows Vista/7/8/8.1/10 BIOS/MBR" {<br />
insmod part_msdos<br />
insmod ntfs<br />
insmod ntldr<br />
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 ''XXXX-XXXX''<br />
ntldr /bootmgr<br />
}<br />
fi<br />
}}<br />
<br />
For Windows XP:<br />
<br />
{{bc|1=<br />
if [ "${grub_platform}" == "pc" ]; then<br />
menuentry "Microsoft Windows XP" {<br />
insmod part_msdos<br />
insmod ntfs<br />
insmod ntldr<br />
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1 ''XXXX-XXXX''<br />
ntldr /ntldr<br />
}<br />
fi<br />
}}<br />
<br />
{{Note|In some cases, GRUB may be installed without a clean Windows 8, in which case Windows fails to boot with a {{ic|\boot\bcd}} error (error code {{ic|0xc000000f}}). You can fix it by going to the Windows Recovery Console ({{ic|cmd.exe}} from the install disk) and executing:<br />
<br />
X:\> bootrec.exe /fixboot<br />
X:\> bootrec.exe /RebuildBcd<br />
<br />
Do '''not''' use {{ic|bootrec.exe /Fixmbr}} because it will wipe GRUB out.<br />
Alternatively, you can use the Boot Repair function in the Troubleshooting menu; it will not wipe out GRUB but will fix most errors.<br />
Also, keep '''only''' the target hard drive and your bootable device plugged in; Windows usually fails to repair boot information if any other devices are connected.<br />
}}<br />
<br />
===== Using labels =====<br />
<br />
It is possible to use file system labels, human-readable strings attached to file systems, by using the {{ic|--label}} option to {{ic|search}}. First of all, [[Persistent block device naming#by-label|make sure your file system has a label]].<br />
<br />
Then, add an entry using labels. An example of this:<br />
<br />
 menuentry "Arch Linux, text session" {<br />
search --label --set=root archroot<br />
linux /boot/vmlinuz-linux root=/dev/disk/by-label/archroot ro<br />
initrd /boot/initramfs-linux.img<br />
}<br />
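<br />
For example, on ext4 the label used above could be set with ''e2label'' (the device name here is an assumption to adapt):<br />
<br />
 # e2label /dev/sda2 archroot<br />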
<br />
== Using the command shell ==<br />
<br />
Since the MBR is too small to store all GRUB modules, only the menu and a few basic commands reside there. The majority of GRUB functionality remains in modules in {{ic|/boot/grub/}}, which are inserted as needed. In error conditions (e.g. if the partition layout changes) GRUB may fail to boot. When this happens, a command shell may appear.<br />
<br />
GRUB offers multiple shells/prompts. If there is a problem reading the menu but the bootloader is able to find the disk, you will likely be dropped to the "normal" shell:<br />
<br />
grub><br />
<br />
If there is a more serious problem (e.g. GRUB cannot find required files), you may instead be dropped to the "rescue" shell:<br />
<br />
grub rescue><br />
<br />
The rescue shell is a restricted subset of the normal shell, offering much less functionality. If dumped to the rescue shell, first try inserting the "normal" module, then starting the "normal" shell:<br />
<br />
grub rescue> set prefix=(hdX,Y)/boot/grub<br />
grub rescue> insmod (hdX,Y)/boot/grub/i386-pc/normal.mod<br />
 grub rescue> normal<br />
<br />
=== Pager support ===<br />
<br />
GRUB supports a pager for reading the output of commands that produce long output (like the {{ic|help}} command). This works only in the normal shell mode, not in rescue mode. To enable the pager, type in the GRUB command shell:<br />
<br />
sh:grub> set pager=1<br />
<br />
=== Using the command shell environment to boot operating systems ===<br />
<br />
The GRUB command shell environment can be used to boot operating systems.<br />
A common scenario is to boot Windows or Linux stored on another drive or partition via '''chainloading'''.<br />
<br />
''Chainloading'' means loading another boot loader from the current one.<br />
<br />
The other bootloader may be embedded at the start of a partitioned disk (MBR), at the start of a partition or a partitionless disk (VBR), or as an EFI binary in the case of UEFI.<br />
<br />
==== Chainloading a partition's VBR ====<br />
<br />
set root=(hdX,Y)<br />
chainloader +1<br />
boot<br />
<br />
where ''X'' is the disk number (counting from 0) and ''Y'' is the partition number (counting from 1).<br />
<br />
For example to chainload Windows stored in the first partition of the first hard disk,<br />
<br />
set root=(hd0,1)<br />
chainloader +1<br />
boot<br />
<br />
Similarly GRUB installed to a partition can be chainloaded.<br />
<br />
==== Chainloading a disk's MBR or a partitionless disk's VBR ====<br />
<br />
set root=hdX<br />
chainloader +1<br />
boot<br />
<br />
==== Chainloading Windows/Linux installed in UEFI mode ====<br />
<br />
insmod fat<br />
set root=(hd0,gpt4)<br />
chainloader (${root})/EFI/Microsoft/Boot/bootmgfw.efi<br />
boot<br />
<br />
{{ic|insmod fat}} is used for loading the FAT file system module for accessing the Windows bootloader on the EFI system partition.<br />
{{ic|(hd0,gpt4)}} or {{ic|/dev/sda4}} is the EFI system partition in this example.<br />
The entry in the {{ic|chainloader}} line specifies the path of the ''.efi'' file to be chain-loaded.<br />
<br />
==== Normal loading ====<br />
<br />
See the examples in [[#Using the rescue console]].<br />
<br />
=== Using the rescue console ===<br />
<br />
See [[#Using the command shell]] first. If unable to activate the standard shell, one possible solution is to boot using a live CD or some other rescue disk to correct configuration errors and reinstall GRUB. However, such a boot disk is not always available (nor necessary); the rescue console is surprisingly robust.<br />
<br />
The available commands in GRUB rescue include {{ic|insmod}}, {{ic|ls}}, {{ic|set}}, and {{ic|unset}}. This example uses {{ic|set}} and {{ic|insmod}}. {{ic|set}} modifies variables and {{ic|insmod}} inserts new modules to add functionality.<br />
<br />
Before starting, the user must know the location of their {{ic|/boot}} partition (be it a separate partition, or a subdirectory under their root):<br />
<br />
grub rescue> set prefix=(hd''X'',''Y'')/boot/grub<br />
<br />
where {{ic|''X''}} is the physical drive number and {{ic|''Y''}} is the partition number.<br />
<br />
{{Note|With a separate boot partition, omit {{ic|/boot}} from the path (i.e. type {{ic|1=set prefix=(hd''X'',''Y'')/grub}}).}}<br />
<br />
To expand console capabilities, insert the {{ic|linux}} module:<br />
<br />
grub rescue> insmod i386-pc/linux.mod<br />
<br />
or simply<br />
<br />
grub rescue> insmod linux<br />
<br />
This introduces the {{ic|linux}} and {{ic|initrd}} commands, which should be familiar.<br />
<br />
An example, booting Arch Linux:<br />
<br />
set root=(hd0,5)<br />
linux /boot/vmlinuz-linux root=/dev/sda5<br />
initrd /boot/initramfs-linux.img<br />
boot<br />
<br />
With a separate boot partition (e.g. when using UEFI), again change the lines accordingly:<br />
<br />
{{Note|Since boot is a separate partition and not part of your root partition, you must address the boot partition manually, in the same way as for the prefix variable.}}<br />
<br />
set root=(hd0,5)<br />
linux (hd''X'',''Y'')/vmlinuz-linux root=/dev/sda6<br />
initrd (hd''X'',''Y'')/initramfs-linux.img<br />
boot<br />
<br />
{{Note|If you experience {{ic|error: premature end of file /YOUR_KERNEL_NAME}} when executing the {{ic|linux}} command, try {{ic|linux16}} instead.}}<br />
<br />
After successfully booting the Arch Linux installation, users can correct {{ic|grub.cfg}} as needed and then reinstall GRUB.<br />
<br />
Then reinstall GRUB to fix the problem completely, changing {{ic|/dev/sda}} if needed. See [[#Installation]] for details.<br />
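<br />
For a BIOS/MBR setup, for instance, this amounts to something like the following (the target and device are examples that must match your system):<br />
<br />
{{bc|1=<br />
# grub-install --target=i386-pc /dev/sda<br />
# grub-mkconfig -o /boot/grub/grub.cfg<br />
}}<br />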
<br />
== GRUB removal ==<br />
<br />
=== UEFI systems ===<br />
<br />
Before removing ''grub'', make sure that some other boot loader is installed and configured to take over.<br />
<br />
{{hc|$ efibootmgr|<br />
BootOrder: 0003,0001,0000,0002<br />
Boot0000* Windows Boot Manager HD(2,GPT,4dabbedf-191b-4432-bc09-8bcbd1d7dabf,0x109000,0x32000)/File(\EFI\Microsoft\Boot\bootmgfw.efi)<br />
Boot0001* GRUB HD(2,GPT,4dabbedf-191b-4432-bc09-8bcbd1d7dabf,0x109000,0x32000)/File(\EFI\GRUB\grubx64.efi)<br />
Boot0002* Linux-Firmware-Updater HD(2,GPT,5dabbedf-191b-4432-bc09-8bcbd1d7dabf,0x109000,0x32000)/File(\EFI\arch\fwupdx64.efi)<br />
Boot0003* Linux Boot Manager HD(2,GPT,4dabbedf-191b-4432-bc09-8bcbd1d7dabf,0x109000,0x32000)/File(\EFI\systemd\systemd-bootx64.efi)<br />
}}<br />
<br />
If {{ic|BootOrder}} has ''grub'' as the first entry, install another boot loader and put it in front, such as [[systemd-boot]] in the example above. ''grub'' can then be removed using its ''bootnum''.<br />
<br />
# efibootmgr --delete-bootnum -b 1<br />
<br />
Also delete the {{ic|''esp''/EFI/grub}} and {{ic|/boot/grub}} directories.<br />
<br />
=== BIOS systems ===<br />
<br />
To replace ''grub'' with another BIOS boot loader, simply install it, which will overwrite the [[Partitioning#Master Boot Record (bootstrap code)|MBR boot code]].<br />
<br />
{{ic|grub-install}} creates the {{ic|/boot/grub}} directory, which needs to be removed manually. Some users may want to keep it, should they want to install ''grub'' again.<br />
<br />
After migrating to UEFI/GPT one may want to [[dd#Remove bootloader|remove the MBR boot code using dd]].<br />
<br />
== Troubleshooting ==<br />
<br />
=== Unsupported file systems ===<br />
<br />
In case that GRUB does not support the root file system, an alternative {{ic|/boot}} partition with a supported file system must be created. In some cases, the development version of GRUB {{aur|grub-git}} may have native support for the file system.<br />
<br />
If GRUB is used with an unsupported file system it is not able to extract the [[UUID]] of your drive so it uses classic non-persistent {{ic|/dev/''sdXx''}} names instead. In this case you might have to manually edit {{ic|/boot/grub/grub.cfg}} and replace {{ic|1=root=/dev/''sdXx''}} with {{ic|1=root=UUID=''XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX''}}. You can use the {{ic|blkid}} command to get the UUID of your device, see [[Persistent block device naming]].<br />
<br />
While GRUB has supported [[F2FS]] since version 2.04, it cannot correctly read its boot files from an F2FS partition that was created with the {{ic|extra_attr}} flag enabled.<br />
<br />
=== Enable debug messages ===<br />
<br />
{{Note|This change is overwritten when [[#Generate the main configuration file|re-generating the main configuration file]].}}<br />
<br />
Add:<br />
<br />
set pager=1<br />
set debug=all<br />
<br />
to {{ic|grub.cfg}}.<br />
<br />
=== msdos-style error message ===<br />
<br />
grub-setup: warn: This msdos-style partition label has no post-MBR gap; embedding will not be possible!<br />
grub-setup: warn: Embedding is not possible. GRUB can only be installed in this setup by using blocklists.<br />
However, blocklists are UNRELIABLE and its use is discouraged.<br />
grub-setup: error: If you really want blocklists, use --force.<br />
<br />
This error may occur when you try installing GRUB in a VMware container. Read more about it [https://bbs.archlinux.org/viewtopic.php?pid=581760#p581760 here]. It happens when the first partition starts just after the MBR (block 63), without the usual space of 1 MiB (2048 blocks) before the first partition. Read [[#Master Boot Record (MBR) specific instructions]].<br />
<br />
=== UEFI ===<br />
<br />
==== Common installation errors ====<br />
<br />
* An error that may occur on some UEFI devices is {{ic|Could not prepare Boot variable: Read-only file system}}. You have to remount {{ic|/sys/firmware/efi/efivars}} with read-write enabled. {{bc|# mount -o remount,rw,nosuid,nodev,noexec --types efivarfs efivarfs /sys/firmware/efi/efivars}} See the [[Gentoo:Handbook:AMD64/Installation/Bootloader#Install|Gentoo Wiki]] on installing the [[boot loader]].<br />
* If you have a problem running ''grub-install'' with ''sysfs'' or ''procfs'' and it says you must run {{ic|modprobe efivarfs}}, try [[Unified Extensible Firmware Interface#Mount efivarfs|mounting the efivarfs]] with the command above.<br />
* Without {{ic|--target}} or {{ic|--directory}} option, grub-install cannot determine for which firmware to install. In such cases {{ic|grub-install}} will print {{ic|source_dir does not exist. Please specify --target or --directory}}.<br />
* If after running grub-install you get {{ic|error: ''esp'' doesn't look like an EFI partition}}, then the partition is most likely not [[FAT32]] formatted.<br />
<br />
==== Create a GRUB entry in the firmware boot manager ====<br />
<br />
{{ic|grub-install}} automatically tries to create a menu entry in the boot manager. If it does not, then see [[UEFI#efibootmgr]] for instructions to use {{ic|efibootmgr}} to create a menu entry. However, the problem is likely to be that you have not booted your CD/USB in UEFI mode, as in [[UEFI#Create UEFI bootable USB from ISO]].<br />
<br />
As another example of creating a GRUB entry in the firmware boot manager, consider {{ic|efibootmgr -c}}. This assumes that {{ic|/dev/sda1}} is the EFI system partition and that it is mounted at {{ic|/boot/efi}}, which is the default behavior of {{ic|efibootmgr}}. It creates a new boot option called "Linux" and puts it at the top of the boot order list. Options may be passed to modify the default behavior. The default OS loader is {{ic|\EFI\arch\grub.efi}}.<br />
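<br />
A non-default invocation might look like this sketch (the disk, partition number, label and loader path are assumptions to adapt):<br />
<br />
{{bc|1=<br />
# efibootmgr --create --disk /dev/sda --part 1 --label "GRUB" --loader '\EFI\GRUB\grubx64.efi'<br />
}}<br />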
<br />
==== Drop to rescue shell ====<br />
<br />
If GRUB loads but drops into the rescue shell with no errors, it can be due to one of these two reasons:<br />
<br />
* It may be because of a missing or misplaced {{ic|grub.cfg}}. This will happen if GRUB UEFI was installed with {{ic|--boot-directory}} and {{ic|grub.cfg}} is missing.<br />
* It also happens if the boot partition, which is hardcoded into the {{ic|grubx64.efi}} file, has changed.<br />
<br />
==== GRUB UEFI not loaded ====<br />
<br />
An example of a working UEFI:<br />
<br />
{{hc|# efibootmgr -u|<br />
BootCurrent: 0000<br />
Timeout: 3 seconds<br />
BootOrder: 0000,0001,0002<br />
Boot0000* GRUB HD(1,800,32000,23532fbb-1bfa-4e46-851a-b494bfe9478c)File(\EFI\GRUB\grubx64.efi)<br />
Boot0001* Shell HD(1,800,32000,23532fbb-1bfa-4e46-851a-b494bfe9478c)File(\shellx64.efi)<br />
Boot0002* Festplatte BIOS(2,0,00)P0: SAMSUNG HD204UI<br />
}}<br />
<br />
If the screen only goes black for a second and the next boot option is tried afterwards, according to [https://bbs.archlinux.org/viewtopic.php?pid=981560#p981560 this post], moving GRUB to the partition root can help. The boot option has to be deleted and recreated afterwards. The entry for GRUB should look like this then:<br />
<br />
Boot0000* GRUB HD(1,800,32000,23532fbb-1bfa-4e46-851a-b494bfe9478c)File(\grubx64.efi)<br />
<br />
==== Default/fallback boot path ====<br />
<br />
Some UEFI firmware requires a bootable file at a known location before it will show UEFI NVRAM boot entries. If this is the case, {{ic|grub-install}} will claim {{ic|efibootmgr}} has added an entry to boot GRUB; however, the entry will not show up in the VisualBIOS boot order selector. The solution is to install GRUB at the default/fallback boot path:<br />
<br />
# grub-install --target=x86_64-efi --efi-directory=''esp'' '''--removable'''<br />
<br />
Alternatively you can move an already installed GRUB EFI executable to the default/fallback path:<br />
<br />
# mv ''esp''/EFI/grub ''esp''/EFI/BOOT<br />
# mv ''esp''/EFI/BOOT/grubx64.efi ''esp''/EFI/BOOT/BOOTX64.EFI<br />
<br />
=== Invalid signature ===<br />
<br />
If trying to boot Windows results in an "invalid signature" error, e.g. after reconfiguring partitions or adding additional hard drives, (re)move GRUB's device configuration and let it reconfigure:<br />
<br />
# mv /boot/grub/device.map /boot/grub/device.map-old<br />
# grub-mkconfig -o /boot/grub/grub.cfg<br />
<br />
{{ic|grub-mkconfig}} should now mention all found boot options, including Windows. If it works, remove {{ic|/boot/grub/device.map-old}}.<br />
<br />
=== Boot freezes ===<br />
<br />
If booting gets stuck without any error message after GRUB loading the kernel and the initial ramdisk, try removing the {{ic|add_efi_memmap}} kernel parameter.<br />
<br />
=== Arch not found from other OS ===<br />
<br />
Some have reported that other distributions may have trouble finding Arch Linux automatically with {{ic|os-prober}}. If this problem arises, it has been reported that detection can be improved with the presence of {{ic|/etc/lsb-release}}. This file and updating tool is available with the package {{Pkg|lsb-release}}.<br />
<br />
=== Warning when installing in chroot ===<br />
<br />
When installing GRUB on a LVM system in a chroot environment (e.g. during system installation), you may receive warnings like<br />
<br />
/run/lvm/lvmetad.socket: connect failed: No such file or directory<br />
<br />
or<br />
<br />
WARNING: failed to connect to lvmetad: No such file or directory. Falling back to internal scanning.<br />
<br />
This is because {{ic|/run}} is not available inside the chroot. These warnings will not prevent the system from booting, provided that everything has been done correctly, so you may continue with the installation.<br />
<br />
=== GRUB loads slowly ===<br />
<br />
GRUB can take a long time to load when disk space is low. Check if you have sufficient free disk space on your {{ic|/boot}} or {{ic|/}} partition when you are having problems.<br />
<br />
=== error: unknown filesystem ===<br />
<br />
GRUB may output {{ic|error: unknown filesystem}} and refuse to boot for a few reasons. If you are certain that all [[UUID]]s are correct and all filesystems are valid and supported, it may be because your [[#GUID Partition Table (GPT) specific instructions|BIOS Boot Partition]] is located outside the first 2 TiB of the drive [https://bbs.archlinux.org/viewtopic.php?id=195948]. Use a partitioning tool of your choice to ensure this partition is located fully within the first 2 TiB, then reinstall and reconfigure GRUB.<br />
<br />
This error might also be caused by an [[ext4]] filesystem having unsupported features set:<br />
* {{ic|large_dir}} - unsupported.<br />
* {{ic|metadata_csum_seed}} - will be supported in GRUB 2.11 ([https://git.savannah.gnu.org/cgit/grub.git/commit/?id=7fd5feff97c4b1f446f8fcf6d37aca0c64e7c763 commit]).<br />
<br />
{{Warning|Make sure to check GRUB support for new [[file system]] features before you enable them on your {{ic|/boot}} file system.}}<br />
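<br />
To check which features are enabled on an ext4 file system before pointing GRUB at it, you can inspect it with ''tune2fs'' (the device name is an example):<br />
<br />
 # tune2fs -l /dev/sda1 | grep 'Filesystem features'<br />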
<br />
=== grub-reboot not resetting ===<br />
<br />
GRUB seems to be unable to write to root Btrfs partitions [https://bbs.archlinux.org/viewtopic.php?id=166131]. If you use grub-reboot to boot into another entry it will therefore be unable to update its on-disk environment. Either run grub-reboot from the other entry (for example when switching between various distributions) or consider a different file system. You can reset a "sticky" entry by executing {{ic|grub-editenv create}} and setting {{ic|1=GRUB_DEFAULT=0}} in your {{ic|/etc/default/grub}} (do not forget {{ic|grub-mkconfig -o /boot/grub/grub.cfg}}).<br />
<br />
=== Old Btrfs prevents installation ===<br />
<br />
If a drive was formatted with Btrfs without creating a partition table (e.g. {{ic|/dev/sdx}}) and later has a partition table written to it, parts of the Btrfs format persist. Most utilities and operating systems do not see this, but GRUB will refuse to install, even with {{ic|--force}}:<br />
<br />
# grub-install: warning: Attempting to install GRUB to a disk with multiple partition labels. This is not supported yet..<br />
# grub-install: error: filesystem `btrfs' does not support blocklists.<br />
<br />
You can zero the drive, but the easier solution that leaves your data alone is to erase the Btrfs superblock with {{ic|wipefs -o 0x10040 /dev/sdx}}.<br />
<br />
=== Windows 8/10 not found ===<br />
<br />
A setting in Windows 8/10 called "Hiberboot", "Hybrid Boot" or "Fast Boot" can prevent the Windows partition from being mounted, so {{ic|grub-mkconfig}} will not find a Windows install. Disabling Hiberboot in Windows will allow it to be added to the GRUB menu.<br />
<br />
=== GRUB rescue and encrypted /boot ===<br />
<br />
When using an [[#Encrypted /boot|encrypted /boot]], entering an incorrect password will drop you into the GRUB rescue prompt.<br />
<br />
This grub-rescue prompt has limited capabilities. Use the following commands to complete the boot:<br />
{{bc|<br />
grub rescue> cryptomount <partition><br />
grub rescue> insmod normal<br />
grub rescue> normal<br />
}}<br />
<br />
See [https://blog.stigok.com/2017/12/29/decrypt-and-mount-luks-disk-from-grub-rescue-mode.html this blog post]{{Dead link|2023|04|23|status=404}} for a better description.<br />
<br />
=== GRUB is installed but the menu is not shown at boot ===<br />
<br />
Check {{ic|/etc/default/grub}} if {{ic|GRUB_TIMEOUT}} is set to {{ic|0}}, in which case set it to a positive number: it sets the number of seconds before the default GRUB entry is loaded. Also check if {{ic|GRUB_TIMEOUT_STYLE}} is set to {{ic|hidden}} and set it to {{ic|menu}}, so that the menu will be shown by default. Then [[#Generate the main configuration file|regenerate the main configuration file]] and reboot to check if it worked.<br />
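<br />
For example, the relevant lines in {{ic|/etc/default/grub}} could read:<br />
<br />
{{bc|1=<br />
GRUB_TIMEOUT=5<br />
GRUB_TIMEOUT_STYLE=menu<br />
}}<br />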
<br />
If it does not work, there may be incompatibility problems with the graphical terminal. Set {{ic|GRUB_TERMINAL_OUTPUT}} to {{ic|console}} in {{ic|/etc/default/grub}} to disable the GRUB graphical terminal.<br />
<br />
== See also ==<br />
<br />
* [[Wikipedia:GNU GRUB]]<br />
* [https://www.gnu.org/software/grub/manual/grub.html Official GRUB Manual]<br />
* [https://help.ubuntu.com/community/Grub2 Ubuntu wiki page for GRUB]<br />
* [https://help.ubuntu.com/community/UEFIBooting GRUB wiki page describing steps to compile for UEFI systems]<br />
* [[Wikipedia:BIOS Boot partition]]<br />
* [https://web.archive.org/web/20160424042444/http://members.iinet.net/~herman546/p20/GRUB2%20Configuration%20File%20Commands.html#Editing_etcgrub.d05_debian_theme How to configure GRUB]</div>Recolichttps://wiki.archlinux.org/index.php?title=QEMU&diff=802577QEMU2024-03-08T08:32:06Z<p>Recolic: /* Booting in UEFI mode */ secure boot instruction is not complete. It only works for q35 machine type, doesn't work for default machine.</p>
<hr />
<div>[[Category:Emulation]]<br />
[[Category:Hypervisors]]<br />
[[de:QEMU]]<br />
[[es:QEMU]]<br />
[[fr:QEMU]]<br />
[[ja:QEMU]]<br />
[[zh-hans:QEMU]]<br />
{{Related articles start}}<br />
{{Related|:Category:Hypervisors}}<br />
{{Related|Libvirt}}<br />
{{Related|QEMU/Guest graphics acceleration}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related articles end}}<br />
<br />
According to the [https://wiki.qemu.org/Main_Page QEMU about page], "QEMU is a generic and open source machine emulator and virtualizer."<br />
<br />
When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your x86 PC). By using dynamic translation, it achieves very good performance.<br />
<br />
QEMU can use other hypervisors like [[Xen]] or [[KVM]] to use CPU extensions ([[Wikipedia:Hardware-assisted virtualization|HVM]]) for virtualization. When used as a virtualizer, QEMU achieves near-native performance by executing the guest code directly on the host CPU.<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|qemu-full}} package (or {{Pkg|qemu-base}} for the version without GUI and {{Pkg|qemu-desktop}} for the version with only x86 emulation by default) and the optional packages below for your needs:<br />
<br />
* {{Pkg|qemu-block-gluster}} - [[Glusterfs]] block support<br />
* {{Pkg|qemu-block-iscsi}} - [[iSCSI]] block support<br />
* {{Pkg|samba}} - [[Samba|SMB/CIFS]] server support<br />
<br />
Alternatively, {{Pkg|qemu-user-static}} exists as a usermode and static variant.<br />
<br />
=== QEMU variants ===<br />
<br />
QEMU is offered in several variants suited for different use cases.<br />
<br />
As a first classification, QEMU is offered in full-system and usermode emulation modes:<br />
<br />
; Full-system emulation<br />
: In this mode, QEMU emulates a full system, including one or several processors and various peripherals. It is more accurate but slower, and does not require the emulated OS to be Linux.<br />
: QEMU commands for full-system emulation are named {{ic|qemu-system-''target_architecture''}}, e.g. {{ic|qemu-system-x86_64}} for emulating [[Wikipedia:x86_64|x86_64]] CPUs, {{ic|qemu-system-i386}} for Intel [[Wikipedia:i386|32-bit x86]] CPUs, {{ic|qemu-system-arm}} for [[Wikipedia:ARM architecture family#32-bit architecture|ARM (32 bits)]], {{ic|qemu-system-aarch64}} for [[Wikipedia:AArch64|ARM64]], etc.<br />
: If the target architecture matches the host CPU, this mode may still benefit from a significant speedup by using a hypervisor like [[#Enabling KVM|KVM]] or Xen.<br />
; [https://www.qemu.org/docs/master/user/main.html Usermode emulation]<br />
: In this mode, QEMU is able to invoke a Linux executable compiled for a (potentially) different architecture by leveraging the host system resources. There may be compatibility issues, e.g. some features may not be implemented, dynamically linked executables will not work out of the box (see [[#Chrooting into arm/arm64 environment from x86_64]] to address this) and only Linux is supported (although [https://wiki.winehq.org/Emulation Wine may be used] for running Windows executables).<br />
: QEMU commands for usermode emulation are named {{ic|qemu-''target_architecture''}}, e.g. {{ic|qemu-x86_64}} for emulating 64-bit CPUs.<br />
<br />
QEMU is offered in dynamically-linked and statically-linked variants:<br />
<br />
; Dynamically-linked (default): {{ic|qemu-*}} commands depend on the host OS libraries, so executables are smaller.<br />
; Statically-linked: {{ic|qemu-*}} commands can be copied to any Linux system with the same architecture.<br />
<br />
In the case of Arch Linux, full-system emulation is offered as:<br />
<br />
; Non-headless (default): This variant enables GUI features that require additional dependencies (like SDL or GTK).<br />
; Headless: This is a slimmer variant that does not require GUI (this is suitable e.g. for servers).<br />
<br />
Note that headless and non-headless versions install commands with the same name (e.g. {{ic|qemu-system-x86_64}}) and thus cannot be both installed at the same time.<br />
<br />
=== Details on packages available in Arch Linux ===<br />
<br />
* The {{Pkg|qemu-desktop}} package provides the {{ic|x86_64}} architecture emulators for full-system emulation ({{ic|qemu-system-x86_64}}). The {{Pkg|qemu-emulators-full}} package provides the {{ic|x86_64}} usermode variant ({{ic|qemu-x86_64}}) and also for the rest of supported architectures it includes both full-system and usermode variants (e.g. {{ic|qemu-system-arm}} and {{ic|qemu-arm}}).<br />
* The headless versions of these packages (only applicable to full-system emulation) are {{Pkg|qemu-base}} ({{ic|x86_64}}-only) and {{Pkg|qemu-emulators-full}} (rest of architectures).<br />
* Full-system emulation can be expanded with some QEMU modules present in separate packages: {{Pkg|qemu-block-gluster}}, {{Pkg|qemu-block-iscsi}} and {{Pkg|qemu-guest-agent}}.<br />
* {{Pkg|qemu-user-static}} provides a usermode and static variant for all target architectures supported by QEMU. The installed QEMU commands are named {{ic|qemu-''target_architecture''-static}}, for example, {{ic|qemu-x86_64-static}} for Intel 64-bit CPUs.<br />
<br />
{{Note|At present, Arch does not offer a full-system mode and statically linked variant (neither officially nor via AUR), as this is usually not needed.}}<br />
<br />
== Graphical front-ends for QEMU ==<br />
<br />
Unlike other virtualization programs such as [[VirtualBox]] and [[VMware]], QEMU does not provide a GUI to manage virtual machines (other than the window that appears when running a virtual machine), nor does it provide a way to create persistent virtual machines with saved settings. All parameters to run a virtual machine must be specified on the command line at every launch, unless you have created a custom script to start your virtual machine(s).<br />
<br />
[[Libvirt]] provides a convenient way to manage QEMU virtual machines. See [[Libvirt#Client|list of libvirt clients]] for available front-ends.<br />
<br />
== Creating new virtualized system ==<br />
<br />
=== Creating a hard disk image ===<br />
<br />
{{Accuracy|If I get the man page right the raw format only allocates the full size if the filesystem does not support "holes" or it is <br />
explicitly told to preallocate. See {{man|1|qemu-img|NOTES}}.}} <br />
<br />
{{Tip|See [[Wikibooks:QEMU/Images]] for more information on QEMU images.}}<br />
<br />
To run QEMU you will need a hard disk image, unless you are booting a live system from CD-ROM or the network (and not doing so to install an operating system to a hard disk image). A hard disk image is a file which stores the contents of the emulated hard disk.<br />
<br />
A hard disk image can be ''raw'', so that it is literally byte-by-byte the same as what the guest sees. Unless it is created as a sparse file on a file system that supports holes, it will use the full capacity of the guest hard drive on the host. This method provides the least I/O overhead, but can waste a lot of space, as unused space on the guest cannot be reclaimed on the host.<br />
<br />
Alternatively, the hard disk image can be in a format such as ''qcow2'' which only allocates space to the image file when the guest operating system actually writes to those sectors on its virtual hard disk. The image appears as the full size to the guest operating system, even though it may take up only a very small amount of space on the host system. This image format also supports QEMU snapshotting functionality (see [[#Creating and managing snapshots via the monitor console]] for details). However, using this format instead of ''raw'' will likely affect performance.<br />
<br />
QEMU provides the {{ic|qemu-img}} command to create hard disk images. For example to create a 4 GiB image in the ''raw'' format:<br />
<br />
$ qemu-img create -f raw ''image_file'' 4G<br />
<br />
You may use {{ic|-f qcow2}} to create a ''qcow2'' disk instead.<br />
<br />
{{Note|You can also simply create a ''raw'' image by creating a file of the needed size using {{ic|dd}} or {{ic|fallocate}}.}}<br />
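For instance, the sparse-file approach from the note can be sketched as follows (the filename is illustrative):

```shell
# Create a sparse 4 GiB raw image with dd: no data is written, and the
# file only consumes host space as the guest fills it.
dd if=/dev/zero of=image_file.raw bs=1 count=0 seek=4G
stat -c '%s' image_file.raw    # apparent size in bytes: 4294967296
du -k image_file.raw           # actual usage: (close to) 0
```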
<br />
{{Warning|If you store the hard disk images on a [[Btrfs]] file system, you should consider disabling [[Btrfs#Copy-on-Write (CoW)|Copy-on-Write]] for the directory before creating any images. Can be specified in option nocow for qcow2 format when creating image: {{bc|1=$ qemu-img create -f qcow2 ''image_file'' -o nocow=on 4G}}}}<br />
<br />
==== Overlay storage images ====<br />
<br />
You can create a storage image once (the 'backing' image) and have QEMU keep mutations to this image in an overlay image. This allows you to revert to a previous state of this storage image. You could revert by creating a new overlay image at the time you wish to revert, based on the original backing image.<br />
<br />
To create an overlay image, issue a command like:<br />
<br />
$ qemu-img create -o backing_file=''img1.raw'',backing_fmt=''raw'' -f ''qcow2'' ''img1.cow''<br />
<br />
After that you can run your QEMU virtual machine as usual (see [[#Running virtualized system]]):<br />
<br />
$ qemu-system-x86_64 ''img1.cow''<br />
<br />
The backing image will then be left intact and mutations to this storage will be recorded in the overlay image file.<br />
<br />
When the path to the backing image changes, repair is required.<br />
<br />
{{Warning|The backing image's absolute filesystem path is stored in the (binary) overlay image file. Changing the backing image's path requires some effort.}}<br />
<br />
Make sure that the original backing image's path still leads to this image. If necessary, make a symbolic link at the original path to the new path. Then issue a command like:<br />
<br />
$ qemu-img rebase -b ''/new/img1.raw'' ''/new/img1.cow''<br />
<br />
At your discretion, you may alternatively perform an 'unsafe' rebase where the old path to the backing image is not checked:<br />
<br />
$ qemu-img rebase -u -b ''/new/img1.raw'' ''/new/img1.cow''<br />
<br />
==== Resizing an image ====<br />
<br />
{{Warning|Resizing an image containing an NTFS boot file system could make the operating system installed on it unbootable. It is recommended to create a backup first.}}<br />
<br />
The {{ic|qemu-img}} executable has the {{ic|resize}} option, which enables easy resizing of a hard drive image. It works for both ''raw'' and ''qcow2''. For example, to increase image space by 10 GiB, run:<br />
<br />
$ qemu-img resize ''disk_image'' +10G<br />
<br />
After enlarging the disk image, you must use file system and partitioning tools inside the virtual machine to actually begin using the new space. <br />
<br />
===== Shrinking an image =====<br />
<br />
When shrinking a disk image, you must first reduce the allocated file systems and partition sizes using the file system and partitioning tools inside the virtual machine and then shrink the disk image accordingly. For a Windows guest, this can be performed from the "create and format hard disk partitions" control panel.<br />
<br />
{{Warning|Proceeding to shrink the disk image without reducing the guest partition sizes will result in data loss.}}<br />
<br />
Then, to decrease image space by 10 GiB, run:<br />
<br />
$ qemu-img resize --shrink ''disk_image'' -10G<br />
<br />
==== Converting an image ====<br />
<br />
You can convert an image to other formats using {{ic|qemu-img convert}}. This example shows how to convert a ''raw'' image to ''qcow2'':<br />
<br />
$ qemu-img convert -f raw -O qcow2 ''input''.img ''output''.qcow2<br />
<br />
This will not remove the original input file.<br />
<br />
=== Preparing the installation media ===<br />
<br />
To install an operating system into your disk image, you need the installation medium (e.g. optical disc, USB-drive, or ISO image) for the operating system. The installation medium should not be mounted because QEMU accesses the media directly.<br />
<br />
{{Tip|If using an optical disc, it is a good idea to first dump the media to a file because this both improves performance and does not require you to have direct access to the devices (that is, you can run QEMU as a regular user without having to change access permissions on the media's device file). For example, if the CD-ROM device node is named {{ic|/dev/cdrom}}, you can dump it to a file with the command: {{bc|1=$ dd if=/dev/cdrom of=''cd_image.iso'' bs=4k}}}}<br />
<br />
=== Installing the operating system ===<br />
<br />
This is the first time you will need to start the emulator. To install the operating system on the disk image, you must attach both the disk image and the installation media to the virtual machine, and have it boot from the installation media.<br />
<br />
For example, on x86_64 guests, to install from a bootable ISO file as CD-ROM and a raw disk image:<br />
<br />
$ qemu-system-x86_64 -cdrom ''iso_image'' -boot order=d -drive file=''disk_image'',format=raw<br />
<br />
See {{man|1|qemu}} for more information about loading other media types (such as floppy, disk images or physical drives) and [[#Running virtualized system]] for other useful options.<br />
<br />
After the operating system has finished installing, the QEMU image can be booted directly (see [[#Running virtualized system]]).<br />
<br />
{{Note|By default only 128 MiB of memory is assigned to the machine. The amount of memory can be adjusted with the {{ic|-m}} switch, for example {{ic|-m 512M}} or {{ic|-m 2G}}.}}<br />
<br />
{{Tip|<br />
* Instead of specifying {{ic|1=-boot order=x}}, some users may feel more comfortable using a boot menu: {{ic|1=-boot menu=on}}, at least during configuration and experimentation.<br />
* When running QEMU in headless mode, it starts a local VNC server on port 5900 by default. You can use [[TigerVNC]] to connect to the guest OS: {{ic|vncviewer :5900}}<br />
* If you need to replace floppies or CDs as part of the installation process, you can use the QEMU machine monitor (press {{ic|Ctrl+Alt+2}} in the virtual machine's window) to remove and attach storage devices to a virtual machine. Type {{ic|info block}} to see the block devices, and use the {{ic|change}} command to swap out a device. Press {{ic|Ctrl+Alt+1}} to go back to the virtual machine.<br />
}}<br />
<br />
== Running virtualized system ==<br />
<br />
{{ic|qemu-system-*}} binaries (for example {{ic|qemu-system-i386}} or {{ic|qemu-system-x86_64}}, depending on guest's architecture) are used to run the virtualized guest. The usage is:<br />
<br />
$ qemu-system-x86_64 ''options'' ''disk_image''<br />
<br />
Options are the same for all {{ic|qemu-system-*}} binaries, see {{man|1|qemu}} for documentation of all options.<br />
<br />
Usually, if an option has many possible values, you can use<br />
<br />
$ qemu-system-x86_64 ''option'' ''help''<br />
<br />
to list all possible values. If it supports properties, you can use<br />
<br />
$ qemu-system-x86_64 ''option'' ''value,help''<br />
<br />
to list all available properties.<br />
<br />
For example:<br />
$ qemu-system-x86_64 -machine help<br />
$ qemu-system-x86_64 -machine q35,help<br />
$ qemu-system-x86_64 -device help<br />
$ qemu-system-x86_64 -device qxl,help<br />
<br />
You can use these methods and the {{man|1|qemu}} documentation to understand the options used in the following sections.<br />
<br />
By default, QEMU will show the virtual machine's video output in a window. One thing to keep in mind: when you click inside the QEMU window, the mouse pointer is grabbed. To release it, press {{ic|Ctrl+Alt+g}}.<br />
<br />
{{Warning|QEMU should never be run as root. If you must launch it in a script as root, you should use the {{ic|-runas}} option to make QEMU drop root privileges.}}<br />
<br />
=== Enabling KVM ===<br />
<br />
KVM (''Kernel-based Virtual Machine'') full virtualization must be supported by your Linux kernel and your hardware, and necessary [[kernel modules]] must be loaded. See [[KVM]] for more information.<br />
<br />
To start QEMU in KVM mode, append {{ic|-accel kvm}} to the additional start options. To check if KVM is enabled for a running virtual machine, enter the [[#QEMU monitor]] and type {{ic|info kvm}}.<br />
<br />
{{Note|<br />
* The argument {{ic|1=accel=kvm}} of the {{ic|-machine}} option is equivalent to the {{ic|-enable-kvm}} or the {{ic|-accel kvm}} option.<br />
* CPU model {{ic|host}} requires KVM.<br />
* If you start your virtual machine with a GUI tool and experience very bad performance, you should check for proper KVM support, as QEMU may be falling back to software emulation.<br />
* KVM needs to be enabled in order to start Windows 7 or Windows 8 properly without a ''blue screen''.<br />
}}<br />
<br />
=== Enabling IOMMU (Intel VT-d/AMD-Vi) support ===<br />
<br />
First enable IOMMU, see [[PCI passthrough via OVMF#Setting up IOMMU]].<br />
<br />
Add {{ic|-device intel-iommu}} to create the IOMMU device:<br />
<br />
$ qemu-system-x86_64 '''-enable-kvm -machine q35 -device intel-iommu''' -cpu host ..<br />
<br />
{{Note|<br />
On Intel CPU based systems creating an IOMMU device in a QEMU guest with {{ic|-device intel-iommu}} will disable PCI passthrough with an error like: {{bc|Device at bus pcie.0 addr 09.0 requires iommu notifier which is currently not supported by intel-iommu emulation}} While adding the kernel parameter {{ic|1=intel_iommu=on}} is still needed for remapping IO (e.g. [[PCI passthrough via OVMF#Isolating the GPU|PCI passthrough with vfio-pci]]), {{ic|-device intel-iommu}} should not be set if PCI passthrough is required.<br />
}}<br />
<br />
=== Booting in UEFI mode ===<br />
<br />
The default firmware used by QEMU is [https://www.coreboot.org/SeaBIOS SeaBIOS], which is a Legacy BIOS implementation. QEMU uses {{ic|/usr/share/qemu/bios-256k.bin}} (provided by the {{Pkg|seabios}} package) as a default read-only (ROM) image. You can use the {{ic|-bios}} argument to select another firmware file. However, UEFI requires writable memory to work properly, so you need to emulate [https://wiki.qemu.org/Features/PC_System_Flash PC System Flash] instead.<br />
<br />
[https://github.com/tianocore/tianocore.github.io/wiki/OVMF OVMF] is a TianoCore project to enable UEFI support for Virtual Machines. It can be [[install]]ed with the {{Pkg|edk2-ovmf}} package.<br />
<br />
There are two ways to use OVMF as a firmware. The first is to copy {{ic|/usr/share/edk2/x64/OVMF.4m.fd}}, make it writable and use as a pflash drive:<br />
<br />
-drive if=pflash,format=raw,file=''/copy/of/OVMF.4m.fd''<br />
<br />
All changes to the UEFI settings will be saved directly to this file.<br />
<br />
Another, preferable way is to split OVMF into two files: the first is read-only and stores the firmware executable, while the second is used as a writable variable store. The advantage is that you can use the firmware file directly without copying it, so it will be updated automatically by [[pacman]].<br />
<br />
Use {{ic|/usr/share/edk2/x64/OVMF_CODE.4m.fd}} as a first read-only pflash drive. Copy {{ic|/usr/share/edk2/x64/OVMF_VARS.4m.fd}}, make it writable and use as a second writable pflash drive:<br />
<br />
-drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2/x64/OVMF_CODE.4m.fd \<br />
-drive if=pflash,format=raw,file=''/copy/of/OVMF_VARS.4m.fd''<br />
<br />
If secure boot is wanted, use q35 machine type and replace {{ic|/usr/share/edk2/x64/OVMF_CODE.4m.fd}} with {{ic|/usr/share/edk2/x64/OVMF_CODE.secboot.4m.fd}}.<br />
<br />
=== Trusted Platform Module emulation ===<br />
<br />
QEMU can emulate [[Trusted Platform Module]], which is required by some systems such as Windows 11 (which requires TPM 2.0).<br />
<br />
[[Install]] the {{Pkg|swtpm}} package, which provides a software TPM implementation. Create some directory for storing TPM data ({{ic|''/path/to/mytpm''}} will be used as an example). Run this command to start the emulator:<br />
<br />
$ swtpm socket --tpm2 --tpmstate dir=''/path/to/mytpm'' --ctrl type=unixio,path=''/path/to/mytpm/swtpm-sock''<br />
<br />
{{ic|''/path/to/mytpm/swtpm-sock''}} will be created by ''swtpm'': this is a UNIX socket to which QEMU will connect. You can put it in any directory.<br />
<br />
By default, ''swtpm'' starts a TPM version 1.2 emulator. The {{ic|--tpm2}} option enables TPM 2.0 emulation.<br />
<br />
Finally, add the following options to QEMU:<br />
<br />
-chardev socket,id=chrtpm,path=''/path/to/mytpm/swtpm-sock'' \<br />
-tpmdev emulator,id=tpm0,chardev=chrtpm \<br />
-device tpm-tis,tpmdev=tpm0<br />
<br />
and TPM will be available inside the virtual machine. After shutting down the virtual machine, ''swtpm'' will be automatically terminated.<br />
<br />
See [https://qemu-project.gitlab.io/qemu/specs/tpm.html the QEMU documentation] for more information. <br />
<br />
If the guest OS still does not recognize the TPM device, try adjusting the ''CPU Models and Topology'' options, as some settings can prevent TPM detection.<br />
<br />
== Sharing data between host and guest ==<br />
<br />
=== Network ===<br />
<br />
Data can be shared between the host and guest OS using any network protocol that can transfer files, such as [[NFS]], [[SMB]], [[Wikipedia:Network block device|NBD]], HTTP, [[Very Secure FTP Daemon|FTP]], or [[SSH]], provided that you have set up the network appropriately and enabled the appropriate services.<br />
<br />
The default user-mode networking allows the guest to access the host OS at the IP address 10.0.2.2. Any servers that you are running on your host OS, such as a SSH server or SMB server, will be accessible at this IP address. So on the guests, you can mount directories exported on the host via [[SMB]] or [[NFS]], or you can access the host's HTTP server, etc.<br />
It will not be possible for the host OS to access servers running on the guest OS, but this can be done with other network configurations (see [[#Tap networking with QEMU]]).<br />
<br />
=== QEMU's port forwarding ===<br />
<br />
{{Note|QEMU's port forwarding is IPv4-only. IPv6 port forwarding is not implemented and the last patches were proposed in 2018.[https://lore.kernel.org/qemu-devel/1540512223-21199-1-git-send-email-max7255@yandex-team.ru/T/#u]}}<br />
<br />
QEMU can forward ports from the host to the guest to enable e.g. connecting from the host to an SSH server running on the guest.<br />
<br />
For example, to bind port 60022 on the host with port 22 (SSH) on the guest, start QEMU with a command like:<br />
<br />
$ qemu-system-x86_64 ''disk_image'' -nic user,hostfwd=tcp::60022-:22<br />
<br />
Make sure sshd is running on the guest and connect with:<br />
<br />
$ ssh ''guest-user''@127.0.0.1 -p 60022<br />
<br />
You can use [[SSHFS]] to mount the guest's file system at the host for shared read and write access.<br />
<br />
To forward several ports, you just repeat the {{ic|hostfwd}} in the {{ic|-nic}} argument, e.g. for VNC's port:<br />
<br />
$ qemu-system-x86_64 ''disk_image'' -nic user,hostfwd=tcp::60022-:22,hostfwd=tcp::5900-:5900<br />
<br />
=== QEMU's built-in SMB server ===<br />
<br />
QEMU's documentation says it has a "built-in" SMB server, but actually it just starts up [[Samba]] on the host with an automatically generated {{ic|smb.conf}} file located in {{ic|/tmp/qemu-smb.''random_string''}} and makes it accessible to the guest at a different IP address (10.0.2.4 by default). This only works for user networking, and is useful when you do not want to start the normal [[Samba]] service on the host, which the guest can also access if you have set up shares on it.<br />
<br />
Only a single directory can be set as shared with the option {{ic|1=smb=}}, but adding more directories (even while the virtual machine is running) could be as easy as creating symbolic links in the shared directory if QEMU configured SMB to follow symbolic links. It does not do so, but the configuration of the running SMB server can be changed as described below.<br />
<br />
''Samba'' must be installed on the host. To enable this feature, start QEMU with a command like:<br />
<br />
$ qemu-system-x86_64 -nic user,id=nic0,smb=''shared_dir_path'' ''disk_image''<br />
<br />
where {{ic|''shared_dir_path''}} is a directory that you want to share between the guest and host.<br />
<br />
Then, in the guest, you will be able to access the shared directory on the host 10.0.2.4 with the share name "qemu". For example, in Windows Explorer you would go to {{ic|\\10.0.2.4\qemu}}.<br />
<br />
{{Note|<br />
* If you are using sharing options multiple times like {{ic|1=-net user,smb=''shared_dir_path1'' -net user,smb=''shared_dir_path2''}} or {{ic|1=-net user,smb=''shared_dir_path1'',smb=''shared_dir_path2''}} then it will share only the last defined one.<br />
* If you cannot access the shared folder and the guest system is Windows, check that the [http://ecross.mvps.org/howto/enable-netbios-over-tcp-ip-with-windows.htm NetBIOS protocol is enabled]{{Dead link|2023|05|06|status=domain name not resolved}} and that a firewall does not block [https://technet.microsoft.com/en-us/library/cc940063.aspx ports] used by the NetBIOS protocol.<br />
* If you cannot access the shared folder and the guest system is Windows 10 Enterprise or Education or Windows Server 2016, [https://support.microsoft.com/en-us/help/4046019 enable guest access].<br />
* If you use [[#Tap networking with QEMU]], use {{ic|1=-device virtio-net,netdev=vmnic -netdev user,id=vmnic,smb=''shared_dir_path''}} to get SMB.<br />
}}<br />
<br />
One way to share multiple directories and to add or remove them while the virtual machine is running, is to share an empty directory and create/remove symbolic links to the directories in the shared directory. For this to work, the configuration of the running SMB server can be changed with the following script, which also allows the execution of files on the guest that are not set executable on the host:<br />
<br />
#!/bin/sh<br />
eval $(ps h -C smbd -o pid,args | grep /tmp/qemu-smb | gawk '{print "pid="$1";conf="$6}')<br />
echo "[global]<br />
allow insecure wide links = yes<br />
[qemu]<br />
follow symlinks = yes<br />
wide links = yes<br />
acl allow execute always = yes" >> "$conf"<br />
# in case the change is not detected automatically:<br />
smbcontrol --configfile="$conf" "$pid" reload-config<br />
<br />
This can be applied to the running server started by qemu only after the guest has connected to the network drive the first time. An alternative to this method is to add additional shares to the configuration file like so:<br />
<br />
echo "[''myshare'']<br />
path=''another_path''<br />
read only=no<br />
guest ok=yes<br />
force user=''username''" >> $conf<br />
<br />
This share will be available on the guest as {{ic|\\10.0.2.4\''myshare''}}.<br />
<br />
=== Using filesystem passthrough and VirtFS ===<br />
<br />
See the [https://wiki.qemu.org/Documentation/9psetup QEMU documentation].<br />
<br />
=== Host file sharing with virtiofsd ===<br />
<br />
{{Style|See [[Help:Style/Formatting and punctuation]].}}<br />
<br />
virtiofsd is shipped with QEMU package. Documentation is available [https://qemu-stsquad.readthedocs.io/en/docs-next/tools/virtiofsd.html online]{{Dead link|2023|05|06|status=404}} or {{ic|/usr/share/doc/qemu/qemu/tools/virtiofsd.html}} on local file system with {{Pkg|qemu-docs}} installed.<br />
<br />
Add the user that runs QEMU to the {{ic|kvm}} [[user group]], because it needs to access the virtiofsd socket. You might have to log out and back in for the change to take effect.<br />
<br />
{{Accuracy|Running services as root is not secure. Also the process should be wrapped in a systemd service.}}<br />
<br />
Start virtiofsd as root:<br />
<br />
# /usr/lib/virtiofsd --socket-path=/var/run/qemu-vm-001.sock --shared-dir /tmp/vm-001 --cache always<br />
<br />
where<br />
<br />
* {{ic|/var/run/qemu-vm-001.sock}} is a socket file,<br />
* {{ic|/tmp/vm-001}} is a shared directory between the host and the guest virtual machine.<br />
<br />
The created socket file is accessible by root only. Give the {{ic|kvm}} group access to it with:<br />
<br />
# chgrp kvm qemu-vm-001.sock; chmod g+rxw qemu-vm-001.sock<br />
<br />
Add the following configuration options when starting the virtual machine:<br />
<br />
-object memory-backend-memfd,id=mem,size=4G,share=on \<br />
-numa node,memdev=mem \<br />
-chardev socket,id=char0,path=/var/run/qemu-vm-001.sock \<br />
-device vhost-user-fs-pci,chardev=char0,tag=myfs<br />
<br />
where<br />
<br />
{{Expansion|Explain the remaining options (or remove them if they are not necessary).}}<br />
<br />
* {{ic|1=size=4G}} shall match size specified with {{ic|-m 4G}} option,<br />
* {{ic|/var/run/qemu-vm-001.sock}} points to the socket file created earlier,<br />
<br />
{{Style|The section should not be specific to Windows.}}<br />
<br />
Remember that the guest must be configured to enable sharing. For Windows, there are [https://virtio-fs.gitlab.io/howto-windows.html instructions]. Once configured, Windows will automatically map the {{ic|Z:}} drive to the shared directory content.<br />
<br />
Your Windows 10 guest system is properly configured if it has:<br />
<br />
* VirtioFSSService windows service,<br />
* WinFsp.Launcher windows service,<br />
* VirtIO FS Device driver under "System devices" in Windows "Device Manager".<br />
<br />
If the above are installed and the {{ic|Z:}} drive is still not listed, try repairing "Virtio-win-guest-tools" in Windows ''Add/Remove programs''.<br />
<br />
=== Mounting a partition of the guest on the host ===<br />
<br />
It can be useful to mount a drive image on the host system, as a way to transfer files in and out of the guest. This should only be done while the virtual machine is not running.<br />
<br />
The procedure to mount the drive on the host depends on the type of qemu image, ''raw'' or ''qcow2''. We detail thereafter the steps to mount a drive in the two formats in [[#Mounting a partition from a raw image]] and [[#Mounting a partition from a qcow2 image]]. For the full documentation see [[Wikibooks:QEMU/Images#Mounting an image on the host]].<br />
<br />
{{Warning|You must unmount the partitions before running the virtual machine again. Otherwise, data corruption is very likely to occur.}}<br />
<br />
==== Mounting a partition from a raw image ====<br />
<br />
It is possible to mount partitions that are inside a raw disk image file by setting them up as loopback devices.<br />
<br />
===== With manually specifying byte offset =====<br />
<br />
One way to mount a disk image partition is to mount the disk image at a certain offset using a command like the following:<br />
<br />
# mount -o loop,offset=32256 ''disk_image'' ''mountpoint''<br />
<br />
The {{ic|1=offset=32256}} option is actually passed to the {{ic|losetup}} program to set up a loopback device that starts at byte offset 32256 of the file and continues to the end. This loopback device is then mounted. You may also use the {{ic|sizelimit}} option to specify the exact size of the partition, but this is usually unnecessary.<br />
<br />
Depending on your disk image, the needed partition may not start at offset 32256. Run {{ic|fdisk -l ''disk_image''}} to see the partitions in the image. fdisk gives the start and end offsets in 512-byte sectors, so multiply by 512 to get the correct offset to pass to {{ic|mount}}.<br />
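The arithmetic can be sketched as follows (the start sector 2048 is just an example; read the real value from {{ic|fdisk -l}}):

```shell
# fdisk reports the partition start in 512-byte sectors; convert it
# to a byte offset for mount:
start_sector=2048
offset=$((start_sector * 512))
echo "$offset"    # 1048576
# Then, as root (paths illustrative):
#   mount -o loop,offset=$offset disk_image mountpoint
```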
<br />
===== With loop module autodetecting partitions =====<br />
<br />
The Linux loop driver actually supports partitions in loopback devices, but it is disabled by default. To enable it, do the following:<br />
<br />
* Get rid of all your loopback devices (unmount all mounted images, etc.).<br />
* [[Kernel modules#Manual module handling|Unload]] the {{ic|loop}} kernel module, and load it with the {{ic|1=max_part=15}} parameter set. Additionally, the maximum number of loop devices can be controlled with the {{ic|max_loop}} parameter.<br />
<br />
{{Tip|You can put an entry in {{ic|/etc/modprobe.d}} to load the loop module with {{ic|1=max_part=15}} every time, or you can put {{ic|1=loop.max_part=15}} on the kernel command-line, depending on whether you have the {{ic|loop.ko}} module built into your kernel or not.}}<br />
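Such a modprobe entry would contain the following line (the configuration file name itself is arbitrary, e.g. {{ic|/etc/modprobe.d/loop.conf}}):<br />

```
options loop max_part=15
```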
<br />
Set up your image as a loopback device:<br />
<br />
# losetup -f -P ''disk_image''<br />
<br />
Then, if the device created was {{ic|/dev/loop0}}, additional devices {{ic|/dev/loop0p''X''}} will have been automatically created, where X is the number of the partition. These partition loopback devices can be mounted directly. For example:<br />
<br />
# mount /dev/loop0p1 ''mountpoint''<br />
<br />
To mount the disk image with ''udisksctl'', see [[Udisks#Mount loop devices]].<br />
<br />
===== With kpartx =====<br />
<br />
''kpartx'' from the {{Pkg|multipath-tools}} package can read a partition table on a device and create a new device for each partition. For example:<br />
<br />
# kpartx -a ''disk_image''<br />
<br />
This will set up the loopback device and create the necessary partition devices in {{ic|/dev/mapper/}}.<br />
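A typical session might look as follows (a sketch; the image, mapping and mount point names are illustrative, and the commands require root):<br />

```shell
# List the partitions kpartx detects in the image without creating mappings:
kpartx -l disk_image
# Create the /dev/mapper/loopXpN mappings:
kpartx -a disk_image
# Mount the first partition, work with it, then unmount:
mount /dev/mapper/loop0p1 mountpoint
umount mountpoint
# Remove the mappings and detach the loop device:
kpartx -d disk_image
```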
<br />
==== Mounting a partition from a qcow2 image ====<br />
<br />
We will use {{ic|qemu-nbd}}, which lets us use the NBD (''network block device'') protocol to share the disk image.<br />
<br />
First, we need the ''nbd'' module loaded:<br />
<br />
# modprobe nbd max_part=16<br />
<br />
Then, we can share the disk and create the device entries:<br />
<br />
# qemu-nbd -c /dev/nbd0 ''/path/to/image.qcow2''<br />
<br />
Discover the partitions:<br />
<br />
# partprobe /dev/nbd0<br />
<br />
''fdisk'' can be used to get information regarding the different partitions in {{ic|''nbd0''}}:<br />
<br />
{{hc|# fdisk -l /dev/nbd0|2=<br />
Disk /dev/nbd0: 25.2 GiB, 27074281472 bytes, 52879456 sectors<br />
Units: sectors of 1 * 512 = 512 bytes<br />
Sector size (logical/physical): 512 bytes / 512 bytes<br />
I/O size (minimum/optimal): 512 bytes / 512 bytes<br />
Disklabel type: dos<br />
Disk identifier: 0xa6a4d542<br />
<br />
Device Boot Start End Sectors Size Id Type<br />
/dev/nbd0p1 * 2048 1026047 1024000 500M 7 HPFS/NTFS/exFAT<br />
/dev/nbd0p2 1026048 52877311 51851264 24.7G 7 HPFS/NTFS/exFAT}}<br />
<br />
Then mount any partition of the drive image, for example partition 2:<br />
<br />
# mount /dev/nbd0'''p2''' ''mountpoint''<br />
<br />
Afterwards, it is important to unmount the image and reverse the previous steps, i.e. unmount the partition and disconnect the nbd device:<br />
<br />
# umount ''mountpoint''<br />
# qemu-nbd -d /dev/nbd0<br />
<br />
=== Using any real partition as the single primary partition of a hard disk image ===<br />
<br />
Sometimes, you may wish to use one of your system partitions from within QEMU. Using a raw partition for a virtual machine will improve performance, as the read and write operations do not go through the file system layer on the physical host. Such a partition also provides a way to share data between the host and guest.<br />
<br />
In Arch Linux, device files for raw partitions are, by default, owned by ''root'' and the ''disk'' group. If you would like to have a non-root user be able to read and write to a raw partition, you must either change the owner of the partition's device file to that user, add that user to the ''disk'' group, or use [[ACL]] for more fine-grained access control.<br />
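For example, assuming a partition {{ic|/dev/sdb1}} and a user {{ic|alice}} (both placeholders), either of the following would grant access:<br />

```shell
# Add the user to the 'disk' group (broad: grants access to all disks;
# takes effect the next time the user logs in):
gpasswd -a alice disk

# Or grant read/write on just this device node with an ACL
# (note: ACLs on /dev nodes do not persist across reboots on their own):
setfacl -m u:alice:rw /dev/sdb1
```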
<br />
{{Warning|<br />
* Although it is possible, it is not recommended to allow virtual machines to alter critical data on the host system, such as the root partition.<br />
* You must not mount a file system on a partition read-write on both the host and the guest at the same time. Otherwise, data corruption will result.<br />
}}<br />
<br />
After doing so, you can attach the partition to a QEMU virtual machine as a virtual disk.<br />
<br />
However, things are a little more complicated if you want to have the ''entire'' virtual machine contained in a partition. In that case, there would be no disk image file to actually boot the virtual machine since you cannot install a boot loader to a partition that is itself formatted as a file system and not as a partitioned device with an MBR. Such a virtual machine can be booted by one of the following methods: [[#Specifying kernel and initrd manually]], [[#Simulating a virtual disk with MBR]], [[#Using the device-mapper]], [[#Using a linear RAID]] or [[#Using a Network Block Device]].<br />
<br />
==== Specifying kernel and initrd manually ====<br />
<br />
QEMU supports loading [[Kernels|Linux kernels]] and [[initramfs|init ramdisks]] directly, thereby circumventing boot loaders such as [[GRUB]]. It then can be launched with the physical partition containing the root file system as the virtual disk, which will not appear to be partitioned. This is done by issuing a command similar to the following:<br />
<br />
{{Note|In this example, it is the '''host's''' images that are being used, not the guest's. If you wish to use the guest's images, either mount {{ic|/dev/sda3}} read-only (to protect the file system from the host) and specify the {{ic|/full/path/to/images}} or use some kexec hackery in the guest to reload the guest's kernel (extends boot time). }}<br />
<br />
$ qemu-system-x86_64 -kernel /boot/vmlinuz-linux -initrd /boot/initramfs-linux.img -append root=/dev/sda /dev/sda3<br />
<br />
In the above example, the physical partition being used for the guest's root file system is {{ic|/dev/sda3}} on the host, but it shows up as {{ic|/dev/sda}} on the guest.<br />
<br />
You may, of course, specify any kernel and initrd that you want, and not just the ones that come with Arch Linux.<br />
<br />
When there are multiple [[kernel parameters]] to be passed to the {{ic|-append}} option, they need to be quoted using single or double quotes. For example:<br />
<br />
... -append 'root=/dev/sda1 console=ttyS0'<br />
<br />
==== Simulating a virtual disk with MBR ====<br />
<br />
A more complicated way to have a virtual machine use a physical partition, while keeping that partition formatted as a file system rather than letting the guest repartition it as if it were a whole disk, is to simulate an MBR for it so that it can boot using a boot loader such as GRUB.<br />
<br />
For the following, suppose you have a plain, unmounted {{ic|/dev/hda''N''}} partition with some file system on it you wish to make part of a QEMU disk image. The trick is to dynamically prepend a master boot record (MBR) to the real partition you wish to embed in a QEMU raw disk image. More generally, the partition can be any part of a larger simulated disk, in particular a block device that simulates the original physical disk but only exposes {{ic|/dev/hda''N''}} to the virtual machine.<br />
<br />
A virtual disk of this type can be represented by a VMDK file that contains references to (a copy of) the MBR and the partition, but QEMU does not support this VMDK format. For instance, a virtual disk [https://www.virtualbox.org/manual/ch09.html#rawdisk created by]<br />
<br />
$ VBoxManage internalcommands createrawvmdk -filename ''/path/to/file.vmdk'' -rawdisk /dev/hda<br />
<br />
will be rejected by QEMU with the error message<br />
<br />
Unsupported image type 'partitionedDevice'<br />
<br />
Note that {{ic|VBoxManage}} creates two files, {{ic|''file.vmdk''}} and {{ic|''file-pt.vmdk''}}, the latter being a copy of the MBR, to which the text file {{ic|file.vmdk}} points. Read operations outside the target partition or the MBR would give zeros, while written data would be discarded.<br />
<br />
===== Using the device-mapper =====<br />
<br />
A method that is similar to the use of a VMDK descriptor file uses the [https://docs.kernel.org/admin-guide/device-mapper/index.html device-mapper] to prepend a loop device attached to the MBR file to the target partition. If we do not need our virtual disk to have the same size as the original, we first create a file to hold the MBR:<br />
<br />
$ dd if=/dev/zero of=''/path/to/mbr'' count=2048<br />
<br />
Here, a 1 MiB (2048 * 512 bytes) file is created in accordance with partition alignment policies used by modern disk partitioning tools. For compatibility with older partitioning software, 63 sectors instead of 2048 might be required. The MBR only needs a single 512-byte block; the additional free space can be used for a BIOS boot partition and, in the case of a hybrid partitioning scheme, for a GUID Partition Table. Then, we attach a loop device to the MBR file:<br />
<br />
{{hc|# losetup --show -f ''/path/to/mbr''|/dev/loop0}}<br />
<br />
In this example, the resulting device is {{ic|/dev/loop0}}. The device mapper is now used to join the MBR and the partition:<br />
<br />
# echo "0 2048 linear /dev/loop0 0<br />
2048 `blockdev --getsz /dev/hda''N''` linear /dev/hda''N'' 0" | dmsetup create qemu<br />
<br />
The resulting {{ic|/dev/mapper/qemu}} is what we will use as a QEMU raw disk image. Additional steps are required to create a partition table (see the section that describes the use of a linear RAID for an example) and boot loader code on the virtual disk (which will be stored in {{ic|''/path/to/mbr''}}).<br />
<br />
The following setup is an example where the position of {{ic|/dev/hda''N''}} on the virtual disk is to be the same as on the physical disk and the rest of the disk is hidden, except for the MBR, which is provided as a copy:<br />
<br />
# dd if=/dev/hda count=1 of=''/path/to/mbr''<br />
# loop=`losetup --show -f ''/path/to/mbr''`<br />
# start=`blockdev --report /dev/hda''N'' | tail -1 | awk '{print $5}'`<br />
# size=`blockdev --getsz /dev/hda''N''`<br />
# disksize=`blockdev --getsz /dev/hda`<br />
# echo "0 1 linear $loop 0<br />
1 $((start-1)) zero<br />
$start $size linear /dev/hda''N'' 0<br />
$((start+size)) $((disksize-start-size)) zero" | dmsetup create qemu<br />
<br />
The table provided as standard input to {{ic|dmsetup}} has a format similar to the table in a VMDK descriptor file produced by {{ic|VBoxManage}} and can alternatively be loaded from a file with {{ic|dmsetup create qemu --table ''table_file''}}. To the virtual machine, only {{ic|/dev/hda''N''}} is accessible, while the rest of the hard disk reads as zeros and discards written data, except for the first sector. We can print the table for {{ic|/dev/mapper/qemu}} with {{ic|dmsetup table qemu}} (use {{ic|udevadm info -rq name /sys/dev/block/''major'':''minor''}} to translate {{ic|''major'':''minor''}} to the corresponding {{ic|/dev/''blockdevice''}} name). Use {{ic|dmsetup remove qemu}} and {{ic|losetup -d $loop}} to delete the created devices.<br />
<br />
A situation where this example would be useful is an existing Windows XP installation in a multi-boot configuration and maybe a hybrid partitioning scheme (on the physical hardware, Windows XP could be the only operating system that uses the MBR partition table, while more modern operating systems installed on the same computer could use the GUID Partition Table). Windows XP supports hardware profiles, so that the same installation can be used with different hardware configurations alternately (in this case bare metal vs. virtual) with Windows needing to install drivers for newly detected hardware only once per profile. Note that in this example the boot loader code in the copied MBR needs to be updated to directly load Windows XP from {{ic|/dev/hda''N''}} instead of trying to start the multi-boot capable boot loader (like GRUB) present in the original system. Alternatively, a copy of the boot partition containing the boot loader installation can be included in the virtual disk the same way as the MBR.<br />
<br />
===== Using a linear RAID =====<br />
<br />
{{Out of date|[[Wikipedia:Cylinder-head-sector|CHS]] has been obsolete for decades.}}<br />
<br />
You can also do this using software [[RAID]] in linear mode (you need the {{ic|linear.ko}} kernel driver) and a loopback device: <br />
<br />
First, you create some small file to hold the MBR:<br />
<br />
$ dd if=/dev/zero of=''/path/to/mbr'' count=32<br />
<br />
Here, a 16 KiB (32 * 512 bytes) file is created. It is important not to make it too small (even if the MBR only needs a single 512-byte block), since the smaller it is, the smaller the chunk size of the software RAID device will have to be, which could have an impact on performance. Then, you set up a loopback device to the MBR file:<br />
<br />
# losetup -f ''/path/to/mbr''<br />
<br />
Let us assume the resulting device is {{ic|/dev/loop0}} (i.e. no other loopback devices were already in use). The next step is to create the "merged" MBR + {{ic|/dev/hda''N''}} disk image using software RAID:<br />
<br />
# modprobe linear<br />
# mdadm --build --verbose /dev/md0 --chunk=16 --level=linear --raid-devices=2 /dev/loop0 /dev/hda''N''<br />
<br />
The resulting {{ic|/dev/md0}} is what you will use as a QEMU raw disk image (do not forget to set the permissions so that the emulator can access it). The last (and somewhat tricky) step is to set the disk configuration (disk geometry and partition table) so that the primary partition start point in the MBR matches that of {{ic|/dev/hda''N''}} inside {{ic|/dev/md0}} (an offset of exactly 16 * 512 = 16384 bytes in this example). Do this using {{ic|fdisk}} on the host machine, not in the emulator: the default raw disk detection routine from QEMU often results in non-kibibyte-roundable offsets (such as 31.5 KiB, as in the previous section) that cannot be managed by the software RAID code. Hence, from the host:<br />
<br />
# fdisk /dev/md0<br />
<br />
Press {{ic|X}} to enter the expert menu. Set number of 's'ectors per track so that the size of one cylinder matches the size of your MBR file. For two heads and a sector size of 512, the number of sectors per track should be 16, so we get cylinders of size 2x16x512=16k.<br />
<br />
Now, press {{ic|R}} to return to the main menu.<br />
<br />
Press {{ic|P}} and check that the cylinder size is now 16k.<br />
<br />
Now, create a single primary partition corresponding to {{ic|/dev/hda''N''}}. It should start at cylinder 2 and end at the end of the disk (note that the number of cylinders now differs from what it was when you entered fdisk).<br />
<br />
Finally, 'w'rite the result to the file: you are done. You now have a partition you can mount directly from your host, as well as part of a QEMU disk image:<br />
<br />
$ qemu-system-x86_64 -hdc /dev/md0 ''[...]''<br />
<br />
You can, of course, safely set any boot loader on this disk image using QEMU, provided the original {{ic|/dev/hda''N''}} partition contains the necessary tools.<br />
<br />
===== Using a Network Block Device =====<br />
<br />
With [https://docs.kernel.org/admin-guide/blockdev/nbd.html Network Block Device], Linux can use a remote server as one of its block devices. You may use {{ic|nbd-server}} (from the {{Pkg|nbd}} package) to create an MBR wrapper for QEMU.<br />
<br />
Assuming you have already set up your MBR wrapper file like above, rename it to {{ic|wrapper.img.0}}. Then create a symbolic link named {{ic|wrapper.img.1}} in the same directory, pointing to your partition. Then put the following script in the same directory:<br />
<br />
{{bc|1=<br />
#!/bin/sh<br />
dir="$(realpath "$(dirname "$0")")"<br />
cat >wrapper.conf <<EOF<br />
[generic]<br />
allowlist = true<br />
listenaddr = 127.713705<br />
port = 10809<br />
<br />
[wrap]<br />
exportname = $dir/wrapper.img<br />
multifile = true<br />
EOF<br />
<br />
nbd-server \<br />
-C wrapper.conf \<br />
-p wrapper.pid \<br />
"$@"<br />
}}<br />
<br />
The {{ic|.0}} and {{ic|.1}} suffixes are essential; the rest can be changed. After running the above script (which you may need to do as root to make sure nbd-server is able to access the partition), you can launch QEMU with:<br />
<br />
qemu-system-x86_64 -drive file=nbd:127.713705:10809:exportname=wrap ''[...]''<br />
<br />
=== Using an entire physical disk device inside the virtual machine ===<br />
<br />
{{Style|Duplicates [[#Using any real partition as the single primary partition of a hard disk image]], libvirt instructions do not belong to this page.}}<br />
<br />
You may have a second disk with a different OS (like Windows) on it and may want to gain the ability to also boot it inside a virtual machine.<br />
Since the disk access is raw, the disk will perform quite well inside the virtual machine.<br />
<br />
==== Windows virtual machine boot prerequisites ====<br />
<br />
Be sure to install the [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/ virtio drivers] inside the OS on that disk before trying to boot it in the virtual machine.<br />
For Win 7 use version [https://askubuntu.com/questions/1310440/using-virtio-win-drivers-with-win7-sp1-x64 0.1.173-4].<br />
Some individual drivers from newer virtio builds may work on Win 7, but you will have to install them manually via Device Manager.<br />
For Win 10 you can use the latest virtio build.<br />
<br />
===== Set up the windows disk interface drivers =====<br />
<br />
You may get a {{ic|0x0000007B}} bluescreen when trying to boot the virtual machine. This means Windows cannot access the drive during the early boot stage because the disk interface driver it needs is not loaded / is set to start manually.<br />
<br />
The solution is to [https://superuser.com/a/1032769 enable these drivers to start at boot].<br />
<br />
In {{ic|HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services}}, find the keys {{ic|aliide, amdide, atapi, cmdide, iastor (may not exist), iastorV, intelide, LSI_SAS, msahci, pciide and viaide}}.<br />
Inside each of those, set all their "start" values to 0 in order to enable them at boot.<br />
If your drive is a PCIe NVMe drive, also enable that driver (should it exist).<br />
<br />
==== Find the unique path of your disk ====<br />
<br />
Run {{ic|ls /dev/disk/by-id/}}: there you pick out the ID of the drive you want to insert into the virtual machine, for example {{ic|ata-TS512GMTS930L_C199211383}}.<br />
Now append that ID to {{ic|/dev/disk/by-id/}} so you get {{ic|/dev/disk/by-id/ata-TS512GMTS930L_C199211383}}.<br />
That is the unique path to that disk.<br />
<br />
==== Add the disk in QEMU CLI ====<br />
<br />
In QEMU CLI that would probably be:<br />
<br />
{{ic|1=-drive file=/dev/disk/by-id/ata-TS512GMTS930L_C199211383,format=raw,media=disk}}<br />
<br />
Just modify {{ic|file{{=}}}} to be the unique path of your drive.<br />
<br />
==== Add the disk in libvirt ====<br />
<br />
In libvirt XML that translates to<br />
<br />
{{hc|$ virsh edit ''vmname''|<nowiki><br />
...<br />
<disk type="block" device="disk"><br />
<driver name="qemu" type="raw" cache="none" io="native"/><br />
<source dev="/dev/disk/by-id/ata-TS512GMTS930L_C199211383"/><br />
<target dev="sda" bus="sata"/><br />
<address type="drive" controller="0" bus="0" target="0" unit="0"/><br />
</disk><br />
...<br />
</nowiki>}}<br />
<br />
Just modify "source dev" to be the unique path of your drive.<br />
<br />
==== Add the disk in virt-manager ====<br />
<br />
When creating a virtual machine, select "import existing drive" and just paste that unique path.<br />
If you already have the virtual machine, add a device, storage, then select or create custom storage.<br />
Now paste the unique path.<br />
<br />
== Networking ==<br />
<br />
{{Style|Network topologies (sections [[#Host-only networking]], [[#Internal networking]] and info spread out across other sections) should not be described alongside the various virtual interfaces implementations, such as [[#User-mode networking]], [[#Tap networking with QEMU]], [[#Networking with VDE2]].}}<br />
<br />
The performance of virtual networking should be better with tap devices and bridges than with user-mode networking or vde because tap devices and bridges are implemented in-kernel.<br />
<br />
In addition, networking performance can be improved by assigning virtual machines a [https://wiki.libvirt.org/page/Virtio virtio] network device rather than the default emulation of an e1000 NIC. See [[#Installing virtio drivers]] for more information.<br />
<br />
=== Link-level address caveat ===<br />
<br />
If you give QEMU the {{ic|-net nic}} argument, it will, by default, assign the virtual machine a network interface with the link-level address {{ic|52:54:00:12:34:56}}. However, when using bridged networking with multiple virtual machines, it is essential that each virtual machine has a unique link-level (MAC) address on the virtual machine side of the tap device. Otherwise, the bridge will not work correctly, because it will receive packets from multiple sources that have the same link-level address. This problem occurs even if the tap devices themselves have unique link-level addresses because the source link-level address is not rewritten as packets pass through the tap device.<br />
<br />
Make sure that each virtual machine has a unique link-level address, but it should always start with {{ic|52:54:}}. Use the following option, replacing each ''X'' with an arbitrary hexadecimal digit:<br />
<br />
$ qemu-system-x86_64 -net nic,macaddr=52:54:''XX:XX:XX:XX'' -net vde ''disk_image''<br />
<br />
Generating unique link-level addresses can be done in several ways:<br />
<br />
* Manually specify a unique link-level address for each NIC. The benefit is that the DHCP server will assign the same IP address each time the virtual machine is run, but this is unusable for a large number of virtual machines.<br />
* Generate a random link-level address each time the virtual machine is run. There is practically zero probability of collisions, but the downside is that the DHCP server will assign a different IP address each time. You can use the following command in a script to generate a random link-level address in a {{ic|macaddr}} variable:<br />
<br />
{{bc|1=<br />
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))<br />
qemu-system-x86_64 -net nic,macaddr="$macaddr" -net vde ''disk_image''<br />
}}<br />
<br />
* Use the following script {{ic|qemu-mac-hasher.py}} to generate the link-level address from the virtual machine name using a hashing function. Given that the names of virtual machines are unique, this method combines the benefits of the aforementioned methods: it generates the same link-level address each time the script is run, yet it preserves the practically zero probability of collisions.<br />
<br />
{{hc|qemu-mac-hasher.py|2=<br />
#!/usr/bin/env python<br />
# usage: qemu-mac-hasher.py <VMName><br />
<br />
import sys<br />
import zlib<br />
<br />
crc = str(hex(zlib.crc32(sys.argv[1].encode("utf-8")))).replace("x", "")[-8:]<br />
print("52:54:%s%s:%s%s:%s%s:%s%s" % tuple(crc))<br />
}}<br />
<br />
In a script, you can use for example:<br />
<br />
vm_name="''VM Name''"<br />
qemu-system-x86_64 -name "$vm_name" -net nic,macaddr=$(qemu-mac-hasher.py "$vm_name") -net vde ''disk_image''<br />
<br />
=== User-mode networking ===<br />
<br />
By default, without any {{ic|-netdev}} arguments, QEMU will use user-mode networking with a built-in DHCP server. Your virtual machines will be assigned an IP address when they run their DHCP client, and they will be able to access the physical host's network through IP masquerading done by QEMU.<br />
<br />
{{Note|ICMPv6 will not work, as support for it is not implemented: {{ic|Slirp: external icmpv6 not supported yet}}. [[Ping]]ing an IPv6 address will not work.}}<br />
<br />
This default configuration allows your virtual machines to easily access the Internet, provided that the host is connected to it, but the virtual machines will not be directly visible on the external network, nor will virtual machines be able to talk to each other if you start up more than one concurrently.<br />
<br />
QEMU's user-mode networking can offer more capabilities such as built-in TFTP or SMB servers, redirecting host ports to the guest (for example to allow SSH connections to the guest) or attaching guests to VLANs so that they can talk to each other. See the QEMU documentation on the {{ic|-net user}} flag for more details.<br />
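For example, to make the guest's SSH server reachable from the host with user-mode networking (the host port 2222 is an arbitrary choice):<br />

```shell
# Forward TCP port 2222 on the host to port 22 (SSH) in the guest:
qemu-system-x86_64 -nic user,hostfwd=tcp::2222-:22 disk_image
# The guest can then be reached from the host with: ssh -p 2222 user@localhost
```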
<br />
However, user-mode networking has limitations in both utility and performance. More advanced network configurations require the use of tap devices or other methods.<br />
<br />
{{Note|If the host system uses [[systemd-networkd]], make sure to symlink the {{ic|/etc/resolv.conf}} file as described in [[systemd-networkd#Required services and setup]], otherwise the DNS lookup in the guest system will not work.}}<br />
<br />
{{Tip|<br />
* To use the virtio driver with user-mode networking, the option is: {{ic|1=-nic user,model=virtio-net-pci}}.<br />
* You can isolate user-mode networking from the host and the outside world by adding {{ic|1=restrict=y}}, for example: {{ic|1=-net user,restrict=y}}<br />
}}<br />
<br />
=== Tap networking with QEMU ===<br />
<br />
[[wikipedia:TUN/TAP|Tap devices]] are a Linux kernel feature that allows you to create virtual network interfaces that appear as real network interfaces. Packets sent to a tap interface are delivered to a userspace program, such as QEMU, that has bound itself to the interface.<br />
<br />
QEMU can use tap networking for a virtual machine so that packets sent to the tap interface will be sent to the virtual machine and appear as coming from a network interface (usually an Ethernet interface) in the virtual machine. Conversely, everything that the virtual machine sends through its network interface will appear on the tap interface.<br />
<br />
Tap devices are supported by the Linux bridge drivers, so it is possible to bridge together tap devices with each other and possibly with other host interfaces such as {{ic|eth0}}. This is desirable if you want your virtual machines to be able to talk to each other, or if you want other machines on your LAN to be able to talk to the virtual machines.<br />
<br />
{{Warning|If you bridge together tap device and some host interface, such as {{ic|eth0}}, your virtual machines will appear directly on the external network, which will expose them to possible attack. Depending on what resources your virtual machines have access to, you may need to take all the [[Firewalls|precautions]] you normally would take in securing a computer to secure your virtual machines. If the risk is too great, virtual machines have little resources or you set up multiple virtual machines, a better solution might be to use [[#Host-only networking|host-only networking]] and set up NAT. In this case you only need one firewall on the host instead of multiple firewalls for each guest.}}<br />
<br />
As indicated in the user-mode networking section, tap devices offer higher networking performance than user-mode. If the guest OS supports the virtio network driver, the networking performance will be increased considerably as well. Supposing the use of the tap0 device, that the virtio driver is used on the guest, and that no scripts are used to help start/stop networking, the relevant part of the QEMU command looks like this:<br />
<br />
-device virtio-net,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no<br />
<br />
If you are already using a tap device with the virtio networking driver, you can boost networking performance further by enabling vhost:<br />
<br />
-device virtio-net,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no,vhost=on<br />
<br />
See [https://web.archive.org/web/20160222161955/http://www.linux-kvm.com:80/content/how-maximize-virtio-net-performance-vhost-net] for more information.<br />
<br />
==== Host-only networking ====<br />
<br />
If the bridge is given an IP address and traffic destined for it is allowed, but no real interface (e.g. {{ic|eth0}}) is connected to the bridge, then the virtual machines will be able to talk to each other and the host system. However, they will not be able to talk to anything on the external network, provided that you do not set up IP masquerading on the physical host. This configuration is called ''host-only networking'' by other virtualization software such as [[VirtualBox]].<br />
<br />
{{Tip|<br />
* If you want to set up IP masquerading, e.g. NAT for virtual machines, see the [[Internet sharing#Enable NAT]] page.<br />
* See [[Network bridge]] for information on creating a bridge.<br />
* You may want to have a DHCP server running on the bridge interface to service the virtual network. For example, to use the {{ic|172.20.0.1/16}} subnet with [[dnsmasq]] as the DHCP server:<br />
<br />
{{bc|1=<br />
# ip addr add 172.20.0.1/16 dev br0<br />
# ip link set br0 up<br />
# dnsmasq --interface=br0 --bind-interfaces --dhcp-range=172.20.0.2,172.20.255.254<br />
}}<br />
}}<br />
<br />
==== Internal networking ====<br />
<br />
If you do not give the bridge an IP address and add an [[iptables]] rule to drop all traffic to the bridge in the INPUT chain, then the virtual machines will be able to talk to each other, but not to the physical host or to the outside network. This configuration is called ''internal networking'' by other virtualization software such as [[VirtualBox]]. You will need to either assign static IP addresses to the virtual machines or run a DHCP server on one of them.<br />
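The drop rule mentioned above could look like the following, assuming the bridge is named {{ic|br0}} (an illustrative name):<br />

```shell
# Drop all traffic arriving on the bridge that is addressed to the host itself;
# guests can still talk to each other through the bridge:
iptables -I INPUT -i br0 -j DROP
```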
<br />
By default, iptables would drop packets in the bridge network. You may need to use an iptables rule like the following to allow packets in a bridged network:<br />
<br />
# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT<br />
<br />
==== Bridged networking using qemu-bridge-helper ====<br />
<br />
This method does not require a start-up script and readily accommodates multiple taps and multiple bridges. It uses the {{ic|/usr/lib/qemu/qemu-bridge-helper}} binary, which allows creating tap devices on an existing bridge.<br />
<br />
{{Tip|<br />
* See [[Network bridge]] for information on creating a bridge.<br />
* See https://wiki.qemu.org/Features/HelperNetworking for more information on QEMU's network helper.<br />
}}<br />
<br />
First, create a configuration file containing the names of all bridges to be used by QEMU:<br />
<br />
{{hc|/etc/qemu/bridge.conf|<br />
allow ''br0''<br />
allow ''br1''<br />
...}}<br />
<br />
Make sure {{ic|/etc/qemu/}} has {{ic|755}} [[permissions]]. [https://gitlab.com/qemu-project/qemu/-/issues/515 QEMU issues] and [https://www.gns3.com/community/discussions/gns3-cannot-work-with-qemu GNS3 issues] may arise if this is not the case.<br />
<br />
Now start the virtual machine; the most basic usage to run QEMU with the default network helper and default bridge {{ic|br0}}:<br />
<br />
$ qemu-system-x86_64 -nic bridge ''[...]''<br />
<br />
Using the bridge {{ic|br1}} and the virtio driver:<br />
<br />
$ qemu-system-x86_64 -nic bridge,br=''br1'',model=virtio-net-pci ''[...]''<br />
<br />
==== Creating bridge manually ====<br />
<br />
{{Style|This section needs serious cleanup and may contain out-of-date information.}}<br />
<br />
{{Tip|Since QEMU 1.1, the [https://wiki.qemu.org/Features/HelperNetworking network bridge helper] can set tun/tap up for you without the need for additional scripting. See [[#Bridged networking using qemu-bridge-helper]].}}<br />
<br />
The following describes how to bridge a virtual machine to a host interface such as {{ic|eth0}}, which is probably the most common configuration. This configuration makes it appear that the virtual machine is located directly on the external network, on the same Ethernet segment as the physical host machine.<br />
<br />
We will replace the normal Ethernet adapter with a bridge adapter and bind the normal Ethernet adapter to it.<br />
<br />
* Install {{Pkg|bridge-utils}}, which provides {{ic|brctl}} to manipulate bridges.<br />
<br />
* Enable IPv4 forwarding:<br />
<br />
# sysctl -w net.ipv4.ip_forward=1<br />
<br />
To make the change permanent, change {{ic|1=net.ipv4.ip_forward = 0}} to {{ic|1=net.ipv4.ip_forward = 1}} in {{ic|/etc/sysctl.d/99-sysctl.conf}}.<br />
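Alternatively, a dedicated drop-in file keeps the setting separate from other sysctl configuration; a minimal sketch (the file name is an example):<br />

```
# /etc/sysctl.d/99-qemu-bridge.conf
net.ipv4.ip_forward = 1
```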
<br />
* Load the {{ic|tun}} module and configure it to be loaded on boot. See [[Kernel modules]] for details.<br />
<br />
* Optionally create the bridge. See [[Bridge with netctl]] for details. Remember to name your bridge {{ic|br0}}, or change the scripts below to use your bridge's name. In the {{ic|run-qemu}} script below, {{ic|br0}} is set up if it does not exist, as it is assumed that by default the host does not access the network via the bridge.<br />
<br />
* Create the script that QEMU uses to bring up the tap adapter with {{ic|root:kvm}} 750 permissions:<br />
<br />
{{hc|/etc/qemu-ifup|<nowiki><br />
#!/bin/sh<br />
<br />
echo "Executing /etc/qemu-ifup"<br />
echo "Bringing up $1 for bridged mode..."<br />
sudo /usr/bin/ip link set $1 up promisc on<br />
echo "Adding $1 to br0..."<br />
sudo /usr/bin/brctl addif br0 $1<br />
sleep 2<br />
</nowiki>}}<br />
<br />
* Create the script that QEMU uses to bring down the tap adapter in {{ic|/etc/qemu-ifdown}} with {{ic|root:kvm}} 750 permissions:<br />
{{hc|/etc/qemu-ifdown|<nowiki><br />
#!/bin/sh<br />
<br />
echo "Executing /etc/qemu-ifdown"<br />
sudo /usr/bin/ip link set $1 down<br />
sudo /usr/bin/brctl delif br0 $1<br />
sudo /usr/bin/ip link delete dev $1<br />
</nowiki>}}<br />
<br />
* Use {{ic|visudo}} to add the following to your {{ic|sudoers}} file:<br />
<br />
{{bc|<nowiki><br />
Cmnd_Alias QEMU=/usr/bin/ip,/usr/bin/modprobe,/usr/bin/brctl<br />
%kvm ALL=NOPASSWD: QEMU<br />
</nowiki>}}<br />
<br />
* Launch QEMU using the following {{ic|run-qemu}} script:<br />
<br />
{{hc|run-qemu|<nowiki><br />
#!/bin/bash<br />
: '<br />
e.g. with img created via:<br />
qemu-img create -f qcow2 example.img 90G<br />
run-qemu -cdrom archlinux-x86_64.iso -boot order=d -drive file=example.img,format=qcow2 -m 4G -enable-kvm -cpu host -smp 4<br />
run-qemu -drive file=example.img,format=qcow2 -m 4G -enable-kvm -cpu host -smp 4<br />
'<br />
<br />
nicbr0() {<br />
sudo ip link set dev $1 promisc on up &> /dev/null<br />
sudo ip addr flush dev $1 scope host &>/dev/null<br />
sudo ip addr flush dev $1 scope site &>/dev/null<br />
sudo ip addr flush dev $1 scope global &>/dev/null<br />
sudo ip link set dev $1 master br0 &> /dev/null<br />
}<br />
_nicbr0() {<br />
sudo ip link set $1 promisc off down &> /dev/null<br />
sudo ip link set dev $1 nomaster &> /dev/null<br />
}<br />
<br />
HASBR0="$( ip link show | grep br0 )"<br />
if [ -z "$HASBR0" ] ; then<br />
ROUTER="192.168.1.1"<br />
SUBNET="192.168.1."<br />
NIC=$(ip link show | grep en | grep 'state UP' | head -n 1 | cut -d":" -f 2 | xargs)<br />
IPADDR=$(ip addr show | grep -o "inet $SUBNET\([0-9]*\)" | cut -d ' ' -f2)<br />
sudo ip link add name br0 type bridge &> /dev/null<br />
sudo ip link set dev br0 up<br />
sudo ip addr add $IPADDR/24 brd + dev br0<br />
sudo ip route del default &> /dev/null<br />
sudo ip route add default via $ROUTER dev br0 onlink<br />
nicbr0 $NIC<br />
sudo iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT<br />
fi<br />
<br />
USERID=$(whoami)<br />
precreation=$(ip tuntap list | cut -d: -f1 | sort)<br />
sudo ip tuntap add user $USERID mode tap<br />
postcreation=$(ip tuntap list | cut -d: -f1 | sort)<br />
TAP=$(comm -13 <(echo "$precreation") <(echo "$postcreation"))<br />
nicbr0 $TAP<br />
<br />
printf -v MACADDR "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))<br />
qemu-system-x86_64 -net nic,macaddr=$MACADDR,model=virtio \<br />
-net tap,ifname=$TAP,script=no,downscript=no,vhost=on \<br />
$@<br />
<br />
_nicbr0 $TAP<br />
sudo ip link set dev $TAP down &> /dev/null<br />
sudo ip tuntap del $TAP mode tap<br />
<br />
if [ -z "$HASBR0" ] ; then<br />
_nicbr0 $NIC<br />
sudo ip addr del dev br0 $IPADDR/24 &> /dev/null<br />
sudo ip link set dev br0 down<br />
sudo ip link delete br0 type bridge &> /dev/null<br />
sudo ip route del default &> /dev/null<br />
sudo ip link set dev $NIC up<br />
sudo ip route add default via $ROUTER dev $NIC onlink &> /dev/null<br />
fi<br />
</nowiki>}}<br />
<br />
Then, to launch a virtual machine, do something like this:<br />
<br />
$ run-qemu -hda ''myvm.img'' -m 512<br />
<br />
* It is recommended for performance and security reasons to disable the [http://ebtables.netfilter.org/documentation/bridge-nf.html firewall on the bridge]:<br />
<br />
{{hc|/etc/sysctl.d/10-disable-firewall-on-bridge.conf|<nowiki><br />
net.bridge.bridge-nf-call-ip6tables = 0<br />
net.bridge.bridge-nf-call-iptables = 0<br />
net.bridge.bridge-nf-call-arptables = 0<br />
</nowiki>}}<br />
<br />
In order to apply the parameters described above on boot, you will also need to load the {{ic|br_netfilter}} module on boot. Otherwise, the parameters will not exist when sysctl tries to modify them.<br />
<br />
{{hc|/etc/modules-load.d/br_netfilter.conf|<nowiki><br />
br_netfilter<br />
</nowiki>}}<br />
<br />
Run {{ic|sysctl -p /etc/sysctl.d/10-disable-firewall-on-bridge.conf}} to apply the changes immediately.<br />
<br />
See the [https://wiki.libvirt.org/page/Networking#Creating_network_initscripts libvirt wiki] and [https://bugzilla.redhat.com/show_bug.cgi?id=512206 Fedora bug 512206]. If sysctl reports errors about non-existing files during boot, make the {{ic|br_netfilter}} module load at boot. See [[Kernel module#systemd]].<br />
<br />
Alternatively, you can configure [[iptables]] to allow all traffic to be forwarded across the bridge by adding a rule like this:<br />
<br />
-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT<br />
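To make that rule persistent, it can go into the rules file loaded by {{ic|iptables.service}}; a sketch assuming the standard [[iptables]] setup, showing only the relevant lines:<br />

```
# /etc/iptables/iptables.rules (fragment)
*filter
-A FORWARD -m physdev --physdev-is-bridged -j ACCEPT
COMMIT
```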
<br />
==== Network sharing between physical device and a Tap device through iptables ====<br />
<br />
{{Merge|Internet_sharing|Duplication, not specific to QEMU.}}<br />
<br />
Bridged networking works fine with a wired interface (e.g. {{ic|eth0}}) and is easy to set up. However, if the host is connected to the network through a wireless device, bridging is not possible.<br />
<br />
See [[Network bridge#Wireless interface on a bridge]] as a reference.<br />
<br />
One way to overcome that is to set up a tap device with a static IP, making Linux automatically handle the routing for it, and then forward traffic between the tap interface and the device connected to the network through iptables rules.<br />
<br />
See [[Internet sharing]] as a reference.<br />
<br />
There you can find what is needed to share the network between devices, including tap and tun ones. The following further hints at some of the required host configuration. As indicated in the reference above, the client needs to be configured with a static IP, using the IP assigned to the tap interface as the gateway. The caveat is that the DNS servers on the client might need to be manually edited if they change when switching from one host device connected to the network to another.<br />
<br />
To allow IP forwarding on every boot, add the following lines to a sysctl configuration file inside {{ic|/etc/sysctl.d}}:<br />
<br />
net.ipv4.ip_forward = 1<br />
net.ipv6.conf.default.forwarding = 1<br />
net.ipv6.conf.all.forwarding = 1<br />
<br />
The iptables rules can look like:<br />
<br />
# Forwarding from/to outside<br />
iptables -A FORWARD -i ${INT} -o ${EXT_0} -j ACCEPT<br />
iptables -A FORWARD -i ${INT} -o ${EXT_1} -j ACCEPT<br />
iptables -A FORWARD -i ${INT} -o ${EXT_2} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_0} -o ${INT} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_1} -o ${INT} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_2} -o ${INT} -j ACCEPT<br />
# NAT/Masquerade (network address translation)<br />
iptables -t nat -A POSTROUTING -o ${EXT_0} -j MASQUERADE<br />
iptables -t nat -A POSTROUTING -o ${EXT_1} -j MASQUERADE<br />
iptables -t nat -A POSTROUTING -o ${EXT_2} -j MASQUERADE<br />
<br />
The above assumes there are 3 devices connected to the network sharing traffic with one internal device, where for example:<br />
<br />
INT=tap0<br />
EXT_0=eth0<br />
EXT_1=wlan0<br />
EXT_2=tun0<br />
<br />
This forwarding setup allows sharing wired and wireless connections with the tap device.<br />
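As a sanity check, the template can be expanded with a short loop that prints the resulting commands before they are run as root (interface names as in the example above):<br />

```shell
# Expand the forwarding/NAT template above for one internal and three
# external interfaces. Echoed rather than executed, so the rules can be
# reviewed before running them as root.
rules=$(
    INT=tap0
    for EXT in eth0 wlan0 tun0; do
        echo "iptables -A FORWARD -i $INT -o $EXT -j ACCEPT"
        echo "iptables -A FORWARD -i $EXT -o $INT -j ACCEPT"
        echo "iptables -t nat -A POSTROUTING -o $EXT -j MASQUERADE"
    done
)
printf '%s\n' "$rules"
```

Once the output looks right, the printed lines can be executed with root privileges.<br />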
<br />
The forwarding rules shown are stateless and do pure forwarding. One could restrict specific traffic, putting a firewall in place to protect the guest and others. However, that would decrease networking performance, while a simple bridge does not include any of it.<br />
<br />
Bonus: whether the connection is wired or wireless, if the host is connected through VPN to a remote site with a tun device (supposing the tun device opened for that connection is {{ic|tun0}}) and the prior iptables rules are applied, then the remote connection is also shared with the guest. This avoids the need for the guest to open its own VPN connection. Again, since the guest networking needs to be static, if connecting the host remotely this way, one will most probably need to edit the DNS servers on the guest.<br />
<br />
=== Networking with VDE2 ===<br />
<br />
{{Style|This section needs serious cleanup and may contain out-of-date information.}}<br />
<br />
==== What is VDE? ====<br />
<br />
VDE stands for Virtual Distributed Ethernet. It started as an enhancement of [[User-mode Linux|uml]]_switch. It is a toolbox to manage virtual networks.<br />
<br />
The idea is to create virtual switches, which are basically sockets, and to "plug" both physical and virtual machines into them. The configuration we show here is quite simple; however, VDE is much more powerful than this: it can plug virtual switches together, run them on different hosts and monitor the traffic in the switches. You are invited to read [https://wiki.virtualsquare.org/ the documentation of the project].<br />
<br />
The advantage of this method is that you do not have to add sudo privileges to your users. Regular users should not be allowed to run modprobe.<br />
<br />
==== Basics ====<br />
<br />
VDE support can be [[install]]ed via the {{Pkg|vde2}} package.<br />
<br />
In this configuration, we use tun/tap to create a virtual interface on the host. Load the {{ic|tun}} module (see [[Kernel modules]] for details):<br />
<br />
# modprobe tun<br />
<br />
Now create the virtual switch:<br />
<br />
# vde_switch -tap tap0 -daemon -mod 660 -group users<br />
<br />
This line creates the switch, creates {{ic|tap0}}, "plugs" it, and allows the users of the group {{ic|users}} to use it.<br />
<br />
The interface is plugged in but not configured yet. To configure it, run this command:<br />
<br />
# ip addr add 192.168.100.254/24 dev tap0<br />
<br />
Now, you just have to run KVM with these {{ic|-net}} options as a normal user:<br />
<br />
$ qemu-system-x86_64 -net nic -net vde -hda ''[...]''<br />
<br />
Configure networking for your guest as you would do in a physical network.<br />
<br />
{{Tip|You might want to set up NAT on tap device to access the internet from the virtual machine. See [[Internet sharing#Enable NAT]] for more information.}}<br />
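On a Linux guest using systemd-networkd, a minimal static configuration matching the {{ic|192.168.100.254}} tap address above could look like the following sketch; the file name, interface match glob and guest address are examples:<br />

```
# Guest-side /etc/systemd/network/20-wired.network (hypothetical example)
[Match]
Name=en*

[Network]
Address=192.168.100.1/24
Gateway=192.168.100.254
```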
<br />
==== Startup scripts ====<br />
<br />
Example of main script starting VDE:<br />
<br />
{{hc|/etc/systemd/scripts/qemu-network-env|<nowiki><br />
#!/bin/sh<br />
# QEMU/VDE network environment preparation script<br />
<br />
# The IP configuration for the tap device that will be used for<br />
# the virtual machine network:<br />
<br />
TAP_DEV=tap0<br />
TAP_IP=192.168.100.254<br />
TAP_MASK=24<br />
TAP_NETWORK=192.168.100.0<br />
<br />
# Host interface<br />
NIC=eth0<br />
<br />
case "$1" in<br />
start)<br />
echo -n "Starting VDE network for QEMU: "<br />
<br />
# If you want tun kernel module to be loaded by script uncomment here<br />
#modprobe tun 2>/dev/null<br />
## Wait for the module to be loaded<br />
#while ! lsmod | grep -q "^tun"; do echo "Waiting for tun device"; sleep 1; done<br />
<br />
# Start tap switch<br />
vde_switch -tap "$TAP_DEV" -daemon -mod 660 -group users<br />
<br />
# Bring tap interface up<br />
ip address add "$TAP_IP"/"$TAP_MASK" dev "$TAP_DEV"<br />
ip link set "$TAP_DEV" up<br />
<br />
# Start IP Forwarding<br />
echo "1" > /proc/sys/net/ipv4/ip_forward<br />
iptables -t nat -A POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE<br />
;;<br />
stop)<br />
echo -n "Stopping VDE network for QEMU: "<br />
# Delete the NAT rules<br />
iptables -t nat -D POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE<br />
<br />
# Bring tap interface down<br />
ip link set "$TAP_DEV" down<br />
<br />
# Kill VDE switch<br />
pgrep vde_switch | xargs kill -TERM<br />
;;<br />
restart|reload)<br />
$0 stop<br />
sleep 1<br />
$0 start<br />
;;<br />
*)<br />
echo "Usage: $0 {start|stop|restart|reload}"<br />
exit 1<br />
esac<br />
exit 0<br />
</nowiki>}}<br />
<br />
Example of systemd service using the above script:<br />
<br />
{{hc|/etc/systemd/system/qemu-network-env.service|<nowiki><br />
[Unit]<br />
Description=Manage VDE Switch<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/etc/systemd/scripts/qemu-network-env start<br />
ExecStop=/etc/systemd/scripts/qemu-network-env stop<br />
RemainAfterExit=yes<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
</nowiki>}}<br />
<br />
Change permissions for {{ic|qemu-network-env}} to be [[executable]]. <br />
<br />
You can [[start]] {{ic|qemu-network-env.service}} as usual.<br />
<br />
==== Alternative method ====<br />
<br />
If the above method does not work or you do not want to mess with kernel configs, TUN, dnsmasq, and iptables, you can do the following for the same result.<br />
<br />
# vde_switch -daemon -mod 660 -group users<br />
# slirpvde --dhcp --daemon<br />
<br />
Then, to start the virtual machine with a connection to the network of the host:<br />
<br />
$ qemu-system-x86_64 -net nic,macaddr=52:54:00:00:EE:03 -net vde ''disk_image''<br />
<br />
=== VDE2 Bridge ===<br />
<br />
Based on the [https://selamatpagicikgu.wordpress.com/2011/06/08/quickhowto-qemu-networking-using-vde-tuntap-and-bridge/ quickhowto: qemu networking using vde, tun/tap, and bridge] graphic. Any virtual machine connected to vde is externally exposed. For example, each virtual machine can receive DHCP configuration directly from your ADSL router.<br />
<br />
==== Basics ====<br />
<br />
Remember that you need the {{ic|tun}} module and the {{Pkg|bridge-utils}} package.<br />
<br />
Create the vde2/tap device:<br />
<br />
# vde_switch -tap tap0 -daemon -mod 660 -group users<br />
# ip link set tap0 up<br />
<br />
Create the bridge:<br />
<br />
# brctl addbr br0<br />
<br />
Add devices:<br />
<br />
# brctl addif br0 eth0<br />
# brctl addif br0 tap0<br />
<br />
And configure bridge interface:<br />
<br />
# dhcpcd br0<br />
<br />
==== Startup scripts ====<br />
<br />
All devices must be set up, but only the bridge needs an IP address. For physical devices on the bridge (e.g. {{ic|eth0}}), this can be done with [[netctl]] using a custom Ethernet profile with:<br />
<br />
{{hc|/etc/netctl/ethernet-noip|2=<br />
Description='A more versatile static Ethernet connection'<br />
Interface=eth0<br />
Connection=ethernet<br />
IP=no<br />
}}<br />
<br />
The following custom systemd service can be used to create and activate a VDE2 tap interface for users in the {{ic|users}} user group.<br />
<br />
{{hc|/etc/systemd/system/vde2@.service|2=<br />
[Unit]<br />
Description=Network Connectivity for %i<br />
Wants=network.target<br />
Before=network.target<br />
<br />
[Service]<br />
Type=oneshot<br />
RemainAfterExit=yes<br />
ExecStart=/usr/bin/vde_switch -tap %i -daemon -mod 660 -group users<br />
ExecStart=/usr/bin/ip link set dev %i up<br />
ExecStop=/usr/bin/ip addr flush dev %i<br />
ExecStop=/usr/bin/ip link set dev %i down<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
}}<br />
<br />
And finally, you can create the [[Bridge with netctl|bridge interface with netctl]].<br />
<br />
=== Shorthand configuration ===<br />
<br />
If you are using QEMU with various networking options a lot, you probably have created a lot of {{ic|-netdev}} and {{ic|-device}} argument pairs, which gets quite repetitive. You can instead use the {{ic|-nic}} argument to combine {{ic|-netdev}} and {{ic|-device}} together, so that, for example, these arguments:<br />
<br />
-netdev tap,id=network0,ifname=tap0,script=no,downscript=no,vhost=on -device virtio-net-pci,netdev=network0<br />
<br />
become:<br />
<br />
-nic tap,script=no,downscript=no,vhost=on,model=virtio-net-pci<br />
<br />
Notice the lack of network IDs, and that the device was created with {{ic|1=model=}}. The first half of the {{ic|-nic}} parameters are {{ic|-netdev}} parameters, whereas the second half (after {{ic|1=model=}}) relate to the device. The same parameters (for example, {{ic|1=smb=}}) are used. To completely disable networking, use {{ic|-nic none}}.<br />
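As another sketch of the same folding, here are a long form and its {{ic|-nic}} equivalent for user-mode networking with an SMB share; the share path is a placeholder:<br />

```shell
# Long form and -nic form describing the same virtio NIC with user-mode
# networking; printed for comparison rather than executed.
long="-netdev user,id=net0,smb=/tmp/share -device virtio-net-pci,netdev=net0"
short="-nic user,model=virtio-net-pci,smb=/tmp/share"
printf '%s\n%s\n' "$long" "$short"
```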
<br />
See [https://qemu.weilnetz.de/doc/6.0/system/net.html QEMU networking documentation] for more information on parameters you can use.<br />
<br />
== Graphic card ==<br />
<br />
QEMU can emulate a standard graphic card text mode using the {{ic|-display curses}} command line option. This allows typing text and seeing text output directly inside a text terminal. Alternatively, {{ic|-nographic}} serves a similar purpose.<br />
<br />
QEMU can emulate several types of VGA card. The card type is passed in the {{ic|-vga ''type''}} command line option and can be {{ic|std}}, {{ic|qxl}}, {{ic|vmware}}, {{ic|virtio}}, {{ic|cirrus}} or {{ic|none}}.<br />
<br />
=== std ===<br />
<br />
With {{ic|-vga std}} you can get a resolution of up to 2560x1600 pixels without requiring guest drivers. This has been the default since QEMU 2.2.<br />
<br />
=== qxl ===<br />
<br />
QXL is a paravirtual graphics driver with 2D support. To use it, pass the {{ic|-vga qxl}} option and install drivers in the guest. You may want to use [[#SPICE]] for improved graphical performance when using QXL.<br />
<br />
On Linux guests, the {{ic|qxl}} and {{ic|bochs_drm}} kernel modules must be loaded in order to get decent performance.<br />
<br />
The default VGA memory size for QXL devices is 16M, which is sufficient for resolutions approximately up to QHD (2560x1440). To enable higher resolutions, [[#Multi-monitor support|increase vga_memmb]].<br />
<br />
=== vmware ===<br />
<br />
Although it is a bit buggy, it performs better than std and cirrus. Install the VMware drivers {{Pkg|xf86-video-vmware}} and {{Pkg|xf86-input-vmmouse}} for Arch Linux guests.<br />
<br />
=== virtio ===<br />
<br />
{{ic|virtio-vga}} / {{ic|virtio-gpu}} is a paravirtual 3D graphics driver based on [https://virgil3d.github.io/ virgl]. It is mature, currently supporting only Linux guests with {{Pkg|mesa}} compiled with the option {{ic|1=gallium-drivers=virgl}}.<br />
<br />
To enable 3D acceleration on the guest system, select this vga with {{ic|-device virtio-vga-gl}} and enable the OpenGL context in the display device with {{ic|1=-display sdl,gl=on}} or {{ic|1=-display gtk,gl=on}} for the SDL and GTK display output respectively. Successful configuration can be confirmed looking at the kernel log in the guest:<br />
<br />
{{hc|# dmesg {{!}} grep drm |<br />
[drm] pci: virtio-vga detected<br />
[drm] virgl 3d acceleration enabled<br />
}}<br />
<br />
=== cirrus ===<br />
<br />
The cirrus graphical adapter was the default [https://wiki.qemu.org/ChangeLog/2.2#VGA before 2.2]. It [https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/ should not] be used on modern systems.<br />
<br />
=== none ===<br />
<br />
This is like a PC that has no VGA card at all. You would not even be able to access it with the {{ic|-vnc}} option. Also, this is different from the {{ic|-nographic}} option which lets QEMU emulate a VGA card, but disables the SDL display.<br />
<br />
== SPICE ==<br />
<br />
The [https://www.spice-space.org/ SPICE project] aims to provide a complete open source solution for remote access to virtual machines in a seamless way.<br />
<br />
=== Enabling SPICE support on the host ===<br />
<br />
The following is an example of booting with SPICE as the remote desktop protocol, including the support for copy and paste from host:<br />
<br />
$ qemu-system-x86_64 -vga qxl -device virtio-serial-pci -spice port=5930,disable-ticketing=on -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent<br />
<br />
The parameters have the following meaning:<br />
<br />
# {{ic|-device virtio-serial-pci}} adds a virtio-serial device<br />
# {{ic|1=-spice port=5930,disable-ticketing=on}} sets TCP port {{ic|5930}} for listening spice channels and allows the client to connect without authentication.{{Tip|Using [[wikipedia:Unix_socket|Unix sockets]] instead of TCP ports does not involve using the network stack on the host system, so packets are not encapsulated and decapsulated for the network and the related protocol. The sockets are identified solely by inodes on the hard drive, which is considered better for performance. Use instead {{ic|1=-spice unix=on,addr=/tmp/vm_spice.socket,disable-ticketing=on}}.}}<br />
# {{ic|1=-device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0}} opens a port for spice vdagent in the virtio-serial device,<br />
# {{ic|1=-chardev spicevmc,id=spicechannel0,name=vdagent}} adds a spicevmc chardev for that port. It is important that the {{ic|1=chardev=}} option of the {{ic|virtserialport}} device matches the {{ic|1=id=}} option given to the {{ic|chardev}} option ({{ic|spicechannel0}} in this example). It is also important that the port name is {{ic|com.redhat.spice.0}}, because that is the namespace where vdagent looks in the guest. And finally, specify {{ic|1=name=vdagent}} so that spice knows what this channel is for.<br />
<br />
=== Connecting to the guest with a SPICE client ===<br />
<br />
A SPICE client is necessary to connect to the guest. In Arch, the following clients are available:<br />
<br />
* {{App|virt-viewer|SPICE client recommended by the protocol developers, a subset of the virt-manager project.|https://virt-manager.org/|{{Pkg|virt-viewer}}}}<br />
* {{App|spice-gtk|SPICE GTK client, a subset of the SPICE project. Embedded into other applications as a widget.|https://www.spice-space.org/|{{Pkg|spice-gtk}}}}<br />
<br />
For clients that run on smartphones or on other platforms, refer to the ''Other clients'' section in [https://www.spice-space.org/download.html spice-space download].<br />
<br />
==== Manually running a SPICE client ====<br />
<br />
One way of connecting to a guest listening on Unix socket {{ic|/tmp/vm_spice.socket}} is to manually run the SPICE client using {{ic|$ remote-viewer spice+unix:///tmp/vm_spice.socket}} or {{ic|1=$ spicy --uri="spice+unix:///tmp/vm_spice.socket"}}, depending on the desired client. Since QEMU in SPICE mode acts similarly to a remote desktop server, it may be more convenient to run QEMU in daemon mode with the {{ic|-daemonize}} parameter.<br />
<br />
{{Tip|<br />
To connect to the guest through SSH tunneling, the following type of command can be used: {{bc|$ ssh -fL 5999:localhost:5930 ''my.domain.org'' sleep 10; spicy -h 127.0.0.1 -p 5999}}<br />
This example connects ''spicy'' to the local port {{ic|5999}} which is forwarded through SSH to the guest's SPICE server located at the address ''my.domain.org'', port {{ic|5930}}.<br />
Note the {{ic|-f}} option that requests ssh to execute the command {{ic|sleep 10}} in the background. This way, the ssh session runs while the client is active and auto-closes once the client ends.<br />
}}<br />
<br />
==== Running a SPICE client with QEMU ====<br />
<br />
QEMU can automatically start a SPICE client with an appropriate socket, if the display is set to SPICE with the {{ic|-display spice-app}} parameter. This will use the system's default SPICE client as the viewer, determined by your [[XDG MIME Applications#mimeapps.list|mimeapps.list]] files.<br />
<br />
=== Enabling SPICE support on the guest ===<br />
<br />
For '''Arch Linux guests''', for improved support for multiple monitors or clipboard sharing, the following packages should be installed:<br />
* {{Pkg|spice-vdagent}}: Spice agent xorg client that enables copy and paste between client and X-session and more. (Refer to this [https://github.com/systemd/systemd/issues/18791 issue], until fixed, for workarounds to get this to work on non-GNOME desktops.)<br />
* {{Pkg|xf86-video-qxl}}: Xorg X11 qxl video driver<br />
* {{AUR|x-resize}}: Desktop environments other than GNOME do not react automatically when the SPICE client window is resized. This package uses a [[udev]] rule and [[xrandr]] to implement auto-resizing for all X11-based desktop environments and window managers.<br />
For guests under '''other operating systems''', refer to the ''Guest'' section in spice-space [https://www.spice-space.org/download.html download].<br />
<br />
=== Password authentication with SPICE ===<br />
<br />
If you want to enable password authentication with SPICE you need to remove {{ic|disable-ticketing}} from the {{ic|-spice}} argument and instead add {{ic|1=password=''yourpassword''}}. For example:<br />
<br />
$ qemu-system-x86_64 -vga qxl -spice port=5900,password=''yourpassword'' -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent<br />
<br />
Your SPICE client should now ask for the password to be able to connect to the SPICE server.<br />
<br />
=== TLS encrypted communication with SPICE ===<br />
<br />
You can also configure TLS encryption for communicating with the SPICE server. First, you need to have a directory which contains the following files (the names must be exactly as indicated):<br />
<br />
* {{ic|ca-cert.pem}}: the CA master certificate.<br />
* {{ic|server-cert.pem}}: the server certificate signed with {{ic|ca-cert.pem}}.<br />
* {{ic|server-key.pem}}: the server private key.<br />
<br />
An example of generation of self-signed certificates with your own generated CA for your server is shown in the [https://www.spice-space.org/spice-user-manual.html#_generating_self_signed_certificates Spice User Manual].<br />
<br />
Afterwards, you can run QEMU with SPICE as explained above but using the following {{ic|-spice}} argument: {{ic|1=-spice tls-port=5901,password=''yourpassword'',x509-dir=''/path/to/pki_certs''}}, where {{ic|''/path/to/pki_certs''}} is the directory path that contains the three needed files shown earlier.<br />
<br />
It is now possible to connect to the server using {{Pkg|virt-viewer}}:<br />
<br />
$ remote-viewer spice://''hostname''?tls-port=5901 --spice-ca-file=''/path/to/ca-cert.pem'' --spice-host-subject="C=''XX'',L=''city'',O=''organization'',CN=''hostname''" --spice-secure-channels=all<br />
<br />
Keep in mind that the {{ic|--spice-host-subject}} parameter needs to be set according to your {{ic|server-cert.pem}} subject. You also need to copy {{ic|ca-cert.pem}} to every client to verify the server certificate.<br />
<br />
{{Tip|You can get the subject line of the server certificate in the correct format for {{ic|--spice-host-subject}} (with entries separated by commas) using the following command: {{bc|<nowiki>$ openssl x509 -noout -subject -in server-cert.pem | cut -d' ' -f2- | sed 's/\///' | sed 's/\//,/g'</nowiki>}}<br />
}}<br />
<br />
The equivalent {{Pkg|spice-gtk}} command is:<br />
<br />
$ spicy -h ''hostname'' -s 5901 --spice-ca-file=ca-cert.pem --spice-host-subject="C=''XX'',L=''city'',O=''organization'',CN=''hostname''" --spice-secure-channels=all<br />
<br />
== VNC ==<br />
<br />
One can add the {{ic|-vnc :''X''}} option to have QEMU redirect the VGA display to the VNC session. Substitute {{ic|''X''}} for the number of the display (0 will then listen on 5900, 1 on 5901...).<br />
<br />
$ qemu-system-x86_64 -vnc :0<br />
<br />
An example is also provided in the [[#Starting QEMU virtual machines on boot]] section.<br />
<br />
{{Warning|The default VNC server setup does not use any form of authentication. Any user can connect from any host.}}<br />
<br />
=== Basic password authentication ===<br />
<br />
An access password can be set up easily by using the {{ic|password}} option. The password must be indicated in the QEMU monitor, and connection is only possible once the password is provided.<br />
<br />
$ qemu-system-x86_64 -vnc :0,password -monitor stdio<br />
<br />
In the QEMU monitor, the password is set using the command {{ic|change vnc password}} and then indicating the password.<br />
<br />
The following command line directly runs VNC with a password:<br />
<br />
$ printf "change vnc password\n%s\n" MYPASSWORD | qemu-system-x86_64 -vnc :0,password -monitor stdio<br />
<br />
{{Note|The password is limited to 8 characters and can be guessed through a brute force attack. More elaborate protection is strongly recommended for public networks.}}<br />
<br />
== Audio ==<br />
<br />
=== Creating an audio backend ===<br />
<br />
The {{ic|-audiodev}} flag sets the audio backend driver on the host and its options.<br />
<br />
To list available audio backend drivers:<br />
<br />
$ qemu-system-x86_64 -audiodev help<br />
<br />
Their optional settings are detailed in the {{man|1|qemu}} man page.<br />
<br />
At the bare minimum, one needs to choose an audio backend and set an id; for [[PulseAudio]], for example:<br />
<br />
-audiodev pa,id=snd0<br />
<br />
=== Using the audio backend ===<br />
<br />
==== Intel HD Audio ====<br />
<br />
For Intel HD Audio emulation, add both controller and codec devices. To list the available Intel HDA Audio devices:<br />
<br />
$ qemu-system-x86_64 -device help | grep hda<br />
<br />
Add the audio controller:<br />
<br />
-device ich9-intel-hda<br />
<br />
Also, add the audio codec and map it to a host audio backend id:<br />
<br />
-device hda-output,audiodev=snd0<br />
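Putting the backend and the two devices together, a complete invocation could look like the following sketch; the disk image name is a placeholder:<br />

```shell
# Assemble the full command from the pieces above; echoed instead of run,
# since it needs a guest image and a running PulseAudio server.
cmd="qemu-system-x86_64 -audiodev pa,id=snd0 -device ich9-intel-hda -device hda-output,audiodev=snd0 disk.img"
echo "$cmd"
```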
<br />
==== Intel 82801AA AC97 ====<br />
<br />
For AC97 emulation just add the audio card device and map it to a host audio backend id:<br />
<br />
-device AC97,audiodev=snd0<br />
<br />
{{Note|<br />
* If the audiodev backend is not provided, QEMU looks it up and adds it automatically; this only works for a single audiodev. For example, {{ic|-device intel-hda -device hda-duplex}} will emulate {{ic|intel-hda}} on the guest using the default audiodev backend.<br />
* The emulated video card driver chosen for the guest machine may also cause sound quality problems. Test them one by one to find one that works. You can list possible options with {{ic|<nowiki>qemu-system-x86_64 -h | grep vga</nowiki>}}.<br />
}}<br />
<br />
==== VirtIO sound ====<br />
<br />
VirtIO sound has also been available since QEMU 8.2.0. The usage is:<br />
<br />
-device virtio-sound-pci,audiodev=my_audiodev -audiodev alsa,id=my_audiodev<br />
<br />
More information can be found in [https://qemu-project.gitlab.io/qemu/system/devices/virtio-snd.html QEMU documentation].<br />
<br />
== Installing virtio drivers ==<br />
<br />
QEMU offers guests the ability to use paravirtualized block and network devices using the [https://wiki.libvirt.org/page/Virtio virtio] drivers, which provide better performance and lower overhead.<br />
<br />
* A virtio block device requires the option {{ic|-drive}} for passing a disk image, with parameter {{ic|1=if=virtio}}:<br />
$ qemu-system-x86_64 -drive file=''disk_image'',if='''virtio'''<br />
<br />
* Almost the same goes for the network:<br />
$ qemu-system-x86_64 -nic user,model='''virtio-net-pci'''<br />
<br />
{{Note|This will only work if the guest machine has drivers for virtio devices. Linux does, and the required drivers are included in Arch Linux, but there is no guarantee that virtio devices will work with other operating systems.}}<br />
<br />
=== Preparing an Arch Linux guest ===<br />
<br />
To use virtio devices after an Arch Linux guest has been installed, the following modules must be loaded in the guest: {{ic|virtio}}, {{ic|virtio_pci}}, {{ic|virtio_blk}}, {{ic|virtio_net}}, and {{ic|virtio_ring}}. For 32-bit guests, the specific "virtio" module is not necessary.<br />
<br />
If you want to boot from a virtio disk, the initial ramdisk must contain the necessary modules. By default, this is handled by [[mkinitcpio]]'s {{ic|autodetect}} hook. Otherwise use the {{ic|MODULES}} array in {{ic|/etc/mkinitcpio.conf}} to include the necessary modules and rebuild the initial ramdisk.<br />
<br />
{{hc|/etc/mkinitcpio.conf|2=<br />
MODULES=(virtio virtio_blk virtio_pci virtio_net)}}<br />
<br />
Virtio disks are recognized with the prefix {{ic|'''v'''}} (e.g. {{ic|'''v'''da}}, {{ic|'''v'''db}}, etc.); therefore, changes must be made in at least {{ic|/etc/fstab}} and {{ic|/boot/grub/grub.cfg}} when booting from a virtio disk.<br />
<br />
{{Tip|When disks are referenced by [[UUID]] in both {{ic|/etc/fstab}} and the boot loader, nothing has to be done.}}<br />
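The renaming in {{ic|/etc/fstab}} can be sketched with a minimal Python helper — this is illustrative only (the function name and regex are not part of any tool; it operates on a string, and you should always verify the result before rebooting):<br />
<br />
```python
import re

def sd_to_vd(fstab_text: str) -> str:
    """Rewrite /dev/sdXN device fields to /dev/vdXN (naive sketch).

    Only device fields at the start of a line are touched, so
    UUID= or LABEL= entries are left unchanged.
    """
    return re.sub(r"^/dev/sd([a-z][0-9]*)", r"/dev/vd\1",
                  fstab_text, flags=re.MULTILINE)

# Example: a typical root file system entry
print(sd_to_vd("/dev/sda1  /  ext4  rw,relatime  0 1"))
```
<br />
Entries referenced by UUID pass through untouched, which matches the tip above: when everything uses UUIDs, no rewrite is needed.<br />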
<br />
Further information on paravirtualization with KVM can be found [https://www.linux-kvm.org/page/Boot_from_virtio_block_device here].<br />
<br />
You might also want to install {{Pkg|qemu-guest-agent}} to implement support for QMP commands that will enhance the hypervisor management capabilities.<br />
<br />
=== Preparing a Windows guest ===<br />
<br />
==== Virtio drivers for Windows ====<br />
<br />
Windows does not come with the virtio drivers. The latest and stable versions of the drivers are regularly built by Fedora, details on downloading the drivers are given on [https://github.com/virtio-win/virtio-win-pkg-scripts/blob/master/README.md virtio-win on GitHub]. In the following sections we will mostly use the stable ISO file provided here: [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso virtio-win.iso]. Alternatively, use {{AUR|virtio-win}}.<br />
<br />
==== Block device drivers ====<br />
<br />
===== New Install of Windows =====<br />
<br />
The drivers need to be loaded during installation. The procedure is to load the ISO image with the virtio drivers in a cdrom device, along with the primary disk device and the Windows ISO install media:<br />
<br />
$ qemu-system-x86_64 ... \<br />
-drive file=''disk_image'',index=0,media=disk,if=virtio \<br />
-drive file=''windows.iso'',index=2,media=cdrom \<br />
-drive file=''virtio-win.iso'',index=3,media=cdrom \<br />
...<br />
<br />
At some stage during the installation, the Windows installer will ask "Where do you want to install Windows?" and warn that no disks were found. Follow the example instructions below (based on Windows Server 2012 R2 with Update).<br />
<br />
* Select the option ''Load Drivers''.<br />
* Uncheck the box for ''Hide drivers that are not compatible with this computer's hardware''.<br />
* Click the browse button and open the CDROM for the virtio iso, usually named "virtio-win-XX".<br />
* Now browse to {{ic|E:\viostor\[your-os]\amd64}}, select it, and confirm.<br />
<br />
You should now see your virtio disk(s) listed here, ready to be selected, formatted and installed to.<br />
<br />
===== Change existing Windows virtual machine to use virtio =====<br />
<br />
Modifying an existing Windows guest for booting from virtio disk requires that the virtio driver is loaded by the guest at boot time.<br />
We will therefore need to teach Windows to load the virtio driver at boot time before being able to boot a disk image in virtio mode.<br />
<br />
To achieve that, first create a new disk image that will be attached in virtio mode and trigger the search for the driver:<br />
<br />
$ qemu-img create -f qcow2 ''dummy.qcow2'' 1G<br />
<br />
Run the original Windows guest with the boot disk still in IDE mode, the fake disk in virtio mode and the driver ISO image.<br />
<br />
$ qemu-system-x86_64 -m 4G -drive file=''disk_image'',if=ide -drive file=''dummy.qcow2'',if=virtio -cdrom virtio-win.iso<br />
<br />
Windows will detect the fake disk and look for a suitable driver. If it fails, go to ''Device Manager'', locate the SCSI drive with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not navigate to the driver folder within the CD-ROM, simply select the CD-ROM drive and Windows will find the appropriate driver automatically (tested for Windows 7 SP1).<br />
<br />
Request Windows to boot in safe mode the next time it starts up. This can be done using the ''msconfig.exe'' tool in Windows. In safe mode all the drivers will be loaded at boot time, including the new virtio driver. Once Windows knows that the virtio driver is required at boot, it will memorize it for future boots.<br />
<br />
Once instructed to boot in safe mode, you can turn off the virtual machine and launch it again, now with the boot disk attached in virtio mode:<br />
<br />
$ qemu-system-x86_64 -m 4G -drive file=''disk_image'',if=virtio<br />
<br />
Windows should now boot in safe mode with the virtio driver loaded; you can return to ''msconfig.exe'', disable safe mode boot and restart Windows.<br />
<br />
{{Note|If you encounter the blue screen of death using the {{ic|1=if=virtio}} parameter, it probably means the virtio disk driver is not installed or not loaded at boot time, reboot in safe mode and check your driver configuration.}}<br />
<br />
==== Network drivers ====<br />
<br />
Installing virtio network drivers is a bit easier; simply add the {{ic|-nic}} argument:<br />
<br />
$ qemu-system-x86_64 -m 4G -drive file=''windows_disk_image'',if=virtio -nic user,model=virtio-net-pci -cdrom virtio-win.iso<br />
<br />
Windows will detect the network adapter and try to find a driver for it. If it fails, go to the ''Device Manager'', locate the network adapter with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not forget to select the checkbox which says to search for directories recursively.<br />
<br />
==== Balloon driver ====<br />
<br />
If you want to track your guest's memory state (for example via the {{ic|virsh}} command {{ic|dommemstat}}) or change the guest's memory size at runtime (you will not be able to change the memory size itself, but you can limit memory usage by inflating the balloon driver), you will need to install the guest balloon driver.<br />
<br />
For this you will need to go to ''Device Manager'', locate ''PCI standard RAM Controller'' in ''System devices'' (or an unrecognized PCI controller under ''Other devices'') and choose ''Update driver''. In the window that opens, choose ''Browse my computer...'' and select the CD-ROM (do not forget the ''Include subdirectories'' checkbox). Reboot after installation. This will install the driver and you will be able to inflate the balloon (for example via the HMP command {{ic|balloon ''memory_size''}}, which will cause the balloon to take as much memory as possible in order to shrink the guest's available memory size to ''memory_size''). However, you still will not be able to track the guest memory state. In order to do this, you will need to install the ''Balloon'' service properly. For that, open a command line as administrator, go to the CD-ROM, then into the ''Balloon'' directory and deeper, depending on your system and architecture. Once you are in the ''amd64'' (''x86'') directory, run {{ic|blnsrv.exe -i}}, which will do the installation. After that, the {{ic|virsh}} command {{ic|dommemstat}} should output all supported values.<br />
<br />
=== Preparing a FreeBSD guest ===<br />
<br />
If you are using FreeBSD 8.3 or later, install the {{ic|emulators/virtio-kmod}} port; starting with 10.0-CURRENT, the drivers are included in the kernel. After installation, add the following to your {{ic|/boot/loader.conf}} file:<br />
<br />
{{bc|1=<br />
virtio_load="YES"<br />
virtio_pci_load="YES"<br />
virtio_blk_load="YES"<br />
if_vtnet_load="YES"<br />
virtio_balloon_load="YES"<br />
}}<br />
<br />
Then modify your {{ic|/etc/fstab}} by doing the following:<br />
<br />
# sed -ibak "s/ada/vtbd/g" /etc/fstab<br />
<br />
And verify that {{ic|/etc/fstab}} is consistent. If anything goes wrong, just boot into a rescue CD and copy {{ic|/etc/fstab.bak}} back to {{ic|/etc/fstab}}.<br />
<br />
== QEMU monitor ==<br />
<br />
While QEMU is running, a monitor console is provided that offers several ways to interact with the running virtual machine. The QEMU monitor offers capabilities such as obtaining information about the current virtual machine, hotplugging devices, creating snapshots of its current state, etc. To see the list of all commands, run {{ic|help}} or {{ic|?}} in the QEMU monitor console or review the relevant section of the [https://www.qemu.org/docs/master/system/monitor.html official QEMU documentation].<br />
<br />
=== Accessing the monitor console ===<br />
<br />
==== Graphical view ====<br />
<br />
When using the {{ic|std}} default graphics option, one can access the QEMU monitor by pressing {{ic|Ctrl+Alt+2}} or by clicking ''View > compatmonitor0'' in the QEMU window. To return to the virtual machine graphical view either press {{ic|Ctrl+Alt+1}} or click ''View > VGA''.<br />
<br />
However, the standard method of accessing the monitor is not always convenient and does not work in all graphic outputs QEMU supports.<br />
<br />
==== Telnet ====<br />
<br />
To enable [[telnet]], run QEMU with the {{ic|-monitor telnet:127.0.0.1:''port'',server,nowait}} parameter. When the virtual machine is started you will be able to access the monitor via telnet:<br />
<br />
$ telnet 127.0.0.1 ''port''<br />
<br />
{{Note|If {{ic|127.0.0.1}} is specified as the IP to listen on, it will only be possible to connect to the monitor from the same host QEMU is running on. If connecting from remote hosts is desired, QEMU must be told to listen on {{ic|0.0.0.0}} as follows: {{ic|-monitor telnet:0.0.0.0:''port'',server,nowait}}. Keep in mind that it is recommended to have a [[firewall]] configured in this case, or to make sure your local network is completely trustworthy, since this connection is completely unauthenticated and unencrypted.}}<br />
<br />
==== UNIX socket ====<br />
<br />
Run QEMU with the {{ic|-monitor unix:''socketfile'',server,nowait}} parameter. Then you can connect with either {{Pkg|socat}}, {{Pkg|nmap}} or {{Pkg|openbsd-netcat}}.<br />
<br />
For example, if QEMU is run via:<br />
<br />
$ qemu-system-x86_64 -monitor unix:/tmp/monitor.sock,server,nowait ''[...]''<br />
<br />
It is possible to connect to the monitor with:<br />
<br />
$ socat - UNIX-CONNECT:/tmp/monitor.sock<br />
<br />
Or with:<br />
<br />
$ nc -U /tmp/monitor.sock<br />
<br />
Alternatively with {{Pkg|nmap}}:<br />
<br />
$ ncat -U /tmp/monitor.sock<br />
<br />
==== TCP ====<br />
<br />
You can expose the monitor over TCP with the argument {{ic|-monitor tcp:127.0.0.1:''port'',server,nowait}}. Then connect with netcat, either {{Pkg|openbsd-netcat}} or {{Pkg|gnu-netcat}} by running:<br />
<br />
$ nc 127.0.0.1 ''port''<br />
<br />
{{Note|In order to be able to connect to the TCP socket from devices other than the host QEMU is being run on, you need to listen on {{ic|0.0.0.0}} as explained in the telnet case. The same security warnings apply here as well.}}<br />
<br />
==== Standard I/O ====<br />
<br />
It is possible to access the monitor automatically from the same terminal QEMU is run in by launching it with the argument {{ic|-monitor stdio}}.<br />
<br />
=== Sending keyboard presses to the virtual machine using the monitor console ===<br />
<br />
Some combinations of keys may be difficult to perform on virtual machines because in some configurations the host intercepts them instead (a notable example is the {{ic|Ctrl+Alt+F*}} key combinations, which change the active tty). To avoid this problem, the problematic combination of keys may be sent via the monitor console instead. Switch to the monitor and use the {{ic|sendkey}} command to forward the necessary keypresses to the virtual machine. For example:<br />
<br />
(qemu) sendkey ctrl-alt-f2<br />
<br />
=== Creating and managing snapshots via the monitor console ===<br />
<br />
{{Note|This feature will '''only''' work when the virtual machine disk image is in ''qcow2'' format. It will not work with ''raw'' images.}}<br />
<br />
It is sometimes desirable to save the current state of a virtual machine and to be able to revert it to a previously saved snapshot at any time. The QEMU monitor console provides the user with the necessary utilities to create snapshots, manage them, and revert the machine state to a saved snapshot.<br />
<br />
* Use {{ic|savevm ''name''}} in order to create a snapshot with the tag ''name''.<br />
* Use {{ic|loadvm ''name''}} to revert the virtual machine to the state of the snapshot ''name''.<br />
* Use {{ic|delvm ''name''}} to delete the snapshot tagged as ''name''.<br />
* Use {{ic|info snapshots}} to see a list of saved snapshots. Snapshots are identified by both an auto-incremented ID number and a text tag (set by the user on snapshot creation).<br />
<br />
=== Running the virtual machine in immutable mode ===<br />
<br />
It is possible to run a virtual machine in a frozen state so that all changes will be discarded when the virtual machine is powered off just by running QEMU with the {{ic|-snapshot}} parameter. When the disk image is written by the guest, changes will be saved in a temporary file in {{ic|/tmp}} and will be discarded when QEMU halts.<br />
<br />
However, if a machine is running in frozen mode it is still possible to save the changes to the disk image if it is afterwards desired by using the monitor console and running the following command:<br />
<br />
(qemu) commit all<br />
<br />
If snapshots are created when running in frozen mode, they will likewise be discarded as soon as QEMU exits unless the changes are explicitly committed to disk.<br />
<br />
=== Pause and power options via the monitor console ===<br />
<br />
Some operations of a physical machine can be emulated by QEMU using some monitor commands:<br />
<br />
* {{ic|system_powerdown}} will send an ACPI shutdown request to the virtual machine. This effect is similar to the power button in a physical machine.<br />
* {{ic|system_reset}} will reset the virtual machine similarly to a reset button in a physical machine. This operation can cause data loss and file system corruption since the virtual machine is not cleanly restarted.<br />
* {{ic|stop}} will pause the virtual machine.<br />
* {{ic|cont}} will resume a virtual machine previously paused.<br />
<br />
=== Taking screenshots of the virtual machine ===<br />
<br />
Screenshots of the virtual machine graphic display can be obtained in the PPM format by running the following command in the monitor console:<br />
<br />
(qemu) screendump ''file.ppm''<br />
<br />
== QEMU machine protocol ==<br />
<br />
The QEMU machine protocol (QMP) is a JSON-based protocol which allows applications to control a QEMU instance. Similarly to the [[#QEMU monitor]], it offers ways to interact with a running machine, and the JSON protocol makes it possible to do so programmatically. The description of all the QMP commands can be found in [https://raw.githubusercontent.com/coreos/qemu/master/qmp-commands.hx qmp-commands].<br />
<br />
=== Start QMP ===<br />
<br />
The usual way to control the guest using the QMP protocol is to open a TCP socket when launching the machine with the {{ic|-qmp}} option. Here, for example, TCP port 4444 is used:<br />
<br />
$ qemu-system-x86_64 ''[...]'' -qmp tcp:localhost:4444,server,nowait<br />
<br />
Then one way to communicate with the QMP agent is to use [[netcat]]:<br />
<br />
{{hc|nc localhost 4444|{"QMP": {"version": {"qemu": {"micro": 0, "minor": 1, "major": 3}, "package": ""}, "capabilities": []} } }}<br />
<br />
At this stage, the only command that will be recognized is {{ic|qmp_capabilities}}, which makes QMP enter command mode. Type:<br />
<br />
{"execute": "qmp_capabilities"}<br />
<br />
Now QMP is ready to receive commands. To retrieve the list of recognized commands, use:<br />
<br />
{"execute": "query-commands"}<br />
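The handshake above can be automated in a few lines of Python — a minimal sketch, assuming QEMU was started with {{ic|-qmp tcp:localhost:4444,server,nowait}} as shown earlier (the function names here are illustrative, not part of any QEMU library):<br />
<br />
```python
import json
import socket

def qmp_encode(execute, arguments=None):
    """Serialize one QMP command as a newline-terminated JSON line."""
    cmd = {"execute": execute}
    if arguments is not None:
        cmd["arguments"] = arguments
    return (json.dumps(cmd) + "\n").encode()

def qmp_run(host, port, *commands):
    """Connect, negotiate capabilities, run commands, return parsed replies.

    Note: a robust client must also filter out asynchronous events,
    which may interleave with command replies; this sketch does not.
    """
    with socket.create_connection((host, port)) as sock:
        f = sock.makefile("rb")
        f.readline()                               # greeting banner
        sock.sendall(qmp_encode("qmp_capabilities"))
        f.readline()                               # {"return": {}}
        replies = []
        for execute, arguments in commands:
            sock.sendall(qmp_encode(execute, arguments))
            replies.append(json.loads(f.readline()))
        return replies

# e.g. qmp_run("localhost", 4444, ("query-commands", None))
```
<br />
The same {{ic|qmp_encode}} helper produces the {{ic|block-commit}} and {{ic|blockdev-snapshot-sync}} commands shown in the following sections.<br />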
<br />
=== Live merging of child image into parent image ===<br />
<br />
It is possible to merge a running snapshot into its parent by issuing a {{ic|block-commit}} command. In its simplest form the following line will commit the child into its parent:<br />
<br />
{"execute": "block-commit", "arguments": {"device": "''devicename''"}}<br />
<br />
Upon reception of this command, the handler looks for the base image and converts it from read only to read write mode and then runs the commit job.<br />
<br />
Once the ''block-commit'' operation has completed, the event {{ic|BLOCK_JOB_READY}} will be emitted, signalling that the synchronization has finished. The job can then be gracefully completed by issuing the command {{ic|block-job-complete}}:<br />
<br />
{"execute": "block-job-complete", "arguments": {"device": "''devicename''"}}<br />
<br />
Until such a command is issued, the ''commit'' operation remains active.<br />
After successful completion, the base image remains in read write mode and becomes the new active layer. On the other hand, the child image becomes invalid and it is the responsibility of the user to clean it up.<br />
<br />
{{Tip|The list of devices and their names can be retrieved by executing the command {{ic|query-block}} and parsing the results. The device name is in the {{ic|device}} field, for example {{ic|ide0-hd0}} for the hard disk in this example: {{hc|{"execute": "query-block"}|{"return": [{"io-status": "ok", "device": "'''ide0-hd0'''", "locked": false, "removable": false, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"backing-image": {"virtual-size": 27074281472, "filename": "parent.qcow2", ... } }} }}<br />
<br />
=== Live creation of a new snapshot ===<br />
<br />
To create a new snapshot out of a running image, run the command:<br />
<br />
{"execute": "blockdev-snapshot-sync", "arguments": {"device": "''devicename''","snapshot-file": "''new_snapshot_name''.qcow2"}}<br />
<br />
This creates an overlay file named {{ic|''new_snapshot_name''.qcow2}} which then becomes the new active layer.<br />
<br />
== Tips and tricks ==<br />
<br />
=== Improve virtual machine performance ===<br />
<br />
There are a number of techniques that you can use to improve the performance of the virtual machine. For example:<br />
<br />
* Apply [[#Enabling KVM]] for full virtualization.<br />
* Use the {{ic|-cpu host}} option to make QEMU emulate the host's exact CPU rather than a more generic CPU.<br />
* Especially for Windows guests, enable [https://blog.wikichoon.com/2014/07/enabling-hyper-v-enlightenments-with-kvm.html Hyper-V enlightenments]: {{ic|1=-cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time}}. See the [https://www.qemu.org/docs/master/system/i386/hyperv.html QEMU documentation] for more information and flags.<br />
* Multiple cores can be assigned to the guest using the {{ic|-smp cores{{=}}x,threads{{=}}y,sockets{{=}}1,maxcpus{{=}}z}} option. The threads parameter is used to assign [https://www.tomshardware.com/reviews/simultaneous-multithreading-definition,5762.html SMT cores]. Leaving a physical core for QEMU, the hypervisor and the host system to operate unimpeded is highly beneficial.<br />
* Make sure you have assigned the virtual machine enough memory. By default, QEMU only assigns 128 MiB of memory to each virtual machine. Use the {{ic|-m}} option to assign more memory. For example, {{ic|-m 1024}} runs a virtual machine with 1024 MiB of memory.<br />
* If supported by drivers in the guest operating system, use virtio for network and/or block devices, see [[#Installing virtio drivers]].<br />
* Use TAP devices instead of user-mode networking, see [[#Tap networking with QEMU]].<br />
* If the guest OS is doing heavy writing to its disk, you may benefit from certain mount options on the host's file system. For example, you can mount an [[Ext4|ext4 file system]] with the option {{ic|1=barrier=0}}. You should read the documentation for any options that you change because sometimes performance-enhancing options for file systems come at the cost of data integrity.<br />
* If you have a raw disk or partition, you may want to disable the cache: {{bc|1=$ qemu-system-x86_64 -drive file=/dev/''disk'',if=virtio,'''cache=none'''}}<br />
* Use the native Linux AIO: {{bc|1=$ qemu-system-x86_64 -drive file=''disk_image'',if=virtio''',aio=native,cache.direct=on'''}}<br />
* If you are running multiple virtual machines concurrently that all have the same operating system installed, you can save memory by enabling [[wikipedia:Kernel_SamePage_Merging_(KSM)|kernel same-page merging]]. See [[#Enabling KSM]].<br />
* In some cases, memory can be reclaimed from running virtual machines by running a memory ballooning driver in the guest operating system and launching QEMU using {{ic|-device virtio-balloon}}.<br />
* It is possible to use an emulation layer for an ICH-9 AHCI controller (although it may be unstable). The AHCI emulation supports [[Wikipedia:Native_Command_Queuing|NCQ]], so multiple read or write requests can be outstanding at the same time: {{bc|1=$ qemu-system-x86_64 -drive id=disk,file=''disk_image'',if=none -device ich9-ahci,id=ahci -device ide-drive,drive=disk,bus=ahci.0}}<br />
<br />
See https://www.linux-kvm.org/page/Tuning_KVM for more information.<br />
<br />
=== Starting QEMU virtual machines on boot ===<br />
<br />
==== With libvirt ====<br />
<br />
If a virtual machine is set up with [[libvirt]], it can be configured with {{ic|virsh autostart}} or through the ''virt-manager'' GUI to start at host boot by going to the Boot Options for the virtual machine and selecting "Start virtual machine on host boot up".<br />
<br />
==== With systemd service ====<br />
<br />
To run QEMU virtual machines on boot, you can use the following systemd unit and config.<br />
<br />
{{hc|/etc/systemd/system/qemu@.service|2=<br />
[Unit]<br />
Description=QEMU virtual machine<br />
<br />
[Service]<br />
Environment="haltcmd=kill -INT $MAINPID"<br />
EnvironmentFile=/etc/conf.d/qemu.d/%i<br />
ExecStart=/usr/bin/qemu-system-x86_64 -name %i -enable-kvm -m 512 -nographic $args<br />
ExecStop=/usr/bin/bash -c ${haltcmd}<br />
ExecStop=/usr/bin/bash -c 'while nc localhost 7100; do sleep 1; done'<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
}}<br />
<br />
{{Note|This service will wait for the console port to be released (which means that the virtual machine has shut down) before ending gracefully.}}<br />
<br />
Then create per-VM configuration files, named {{ic|/etc/conf.d/qemu.d/''vm_name''}}, with the variables {{ic|args}} and {{ic|haltcmd}} set. Example configs:<br />
<br />
{{hc|/etc/conf.d/qemu.d/one|2=<br />
args="-hda /dev/vg0/vm1 -serial telnet:localhost:7000,server,nowait,nodelay \<br />
-monitor telnet:localhost:7100,server,nowait,nodelay -vnc :0"<br />
<br />
haltcmd="echo 'system_powerdown' {{!}} nc localhost 7100" # or netcat/ncat}}<br />
<br />
{{hc|/etc/conf.d/qemu.d/two|2=<br />
args="-hda /srv/kvm/vm2 -serial telnet:localhost:7001,server,nowait,nodelay -vnc :1"<br />
<br />
haltcmd="ssh powermanager@vm2 sudo poweroff"}}<br />
<br />
The description of the variables is the following:<br />
<br />
* {{ic|args}} - QEMU command line arguments to be used.<br />
* {{ic|haltcmd}} - Command to shut down a virtual machine safely. In the first example, the QEMU monitor is exposed via telnet using {{ic|-monitor telnet:..}} and the virtual machines are powered off via ACPI by sending {{ic|system_powerdown}} to monitor with the {{ic|nc}} command. In the other example, SSH is used.<br />
<br />
To set which virtual machines will start on boot-up, [[enable]] the {{ic|qemu@''vm_name''.service}} systemd unit.<br />
<br />
=== Mouse integration ===<br />
<br />
To prevent the mouse from being grabbed when clicking on the guest operating system's window, add the options {{ic|-usb -device usb-tablet}}. This means QEMU is able to report the mouse position without having to grab the mouse. This also overrides PS/2 mouse emulation when activated. For example:<br />
<br />
$ qemu-system-x86_64 -hda ''disk_image'' -m 512 -usb -device usb-tablet<br />
<br />
If that does not work, try using the {{ic|-vga qxl}} parameter, and also look at the instructions in [[#Mouse cursor is jittery or erratic]].<br />
<br />
=== Pass-through host USB device ===<br />
<br />
It is possible to access a physical device connected to a USB port of the host from the guest. The first step is to identify where the device is connected; this can be found by running the {{ic|lsusb}} command. For example:<br />
<br />
{{hc|$ lsusb|<br />
...<br />
Bus '''003''' Device '''007''': ID '''0781''':'''5406''' SanDisk Corp. Cruzer Micro U3<br />
}}<br />
<br />
The outputs in bold above will be useful to identify respectively the ''host_bus'' and ''host_addr'' or the ''vendor_id'' and ''product_id''.<br />
<br />
In QEMU, the idea is to emulate an EHCI (USB 2) or XHCI (USB 1.1, USB 2, USB 3) controller with the option {{ic|1=-device usb-ehci,id=ehci}} or {{ic|1=-device qemu-xhci,id=xhci}} respectively, and then attach the physical device to it with the option {{ic|1=-device usb-host,..}}. We will consider that ''controller_id'' is either {{ic|ehci}} or {{ic|xhci}} for the rest of this section.<br />
<br />
Then, there are two ways to connect to the USB device of the host with QEMU:<br />
<br />
# Identify the device and connect to it on any bus and address it is attached to on the host, the generic syntax is: {{bc|1=-device usb-host,bus=''controller_id''.0,vendorid=0x''vendor_id'',productid=0x''product_id''}}Applied to the device used in the example above, it becomes:{{bc|1=-device usb-ehci,id=ehci -device usb-host,bus=ehci.0,vendorid=0x'''0781''',productid=0x'''5406'''}}One can also add the {{ic|1=...,port=''port_number''}} setting to the previous option to specify in which physical port of the virtual controller the device should be attached, useful in the case one wants to add multiple USB devices to the virtual machine. Another option is to use the new {{ic|hostdevice}} property of {{ic|usb-host}} which is available since QEMU 5.1.0, the syntax is: {{bc|1=-device qemu-xhci,id=xhci -device usb-host,hostdevice=/dev/bus/usb/003/007}}<br />
# Attach whatever is connected to a given USB bus and address, the syntax is:{{bc|1=-device usb-host,bus=''controller_id''.0,hostbus=''host_bus'',host_addr=''host_addr''}}Applied to the bus and the address in the example above, it becomes:{{bc|1=-device usb-ehci,id=ehci -device usb-host,bus=ehci.0,hostbus='''3''',hostaddr='''7'''}}<br />
See [https://www.qemu.org/docs/master/system/devices/usb.html QEMU/USB emulation] for more information.<br />
{{Note|If you encounter permission errors when running QEMU, see [[udev#About udev rules]] for information on how to set permissions of the device.}}<br />
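The bus/address and vendor/product pairs can also be extracted from {{ic|lsusb}} output programmatically. A hedged Python sketch (the regex assumes the standard {{ic|lsusb}} line format shown above; the function name is illustrative):<br />
<br />
```python
import re

LSUSB_RE = re.compile(
    r"Bus (?P<bus>\d+) Device (?P<addr>\d+): "
    r"ID (?P<vendor>[0-9a-fA-F]{4}):(?P<product>[0-9a-fA-F]{4})")

def parse_lsusb_line(line):
    """Extract host_bus, host_addr, vendor_id and product_id from one line."""
    m = LSUSB_RE.search(line)
    if m is None:
        return None
    d = m.groupdict()
    # hostbus/hostaddr are decimal; vendorid/productid are hexadecimal
    return {"hostbus": int(d["bus"]), "hostaddr": int(d["addr"]),
            "vendorid": int(d["vendor"], 16), "productid": int(d["product"], 16)}

info = parse_lsusb_line(
    "Bus 003 Device 007: ID 0781:5406 SanDisk Corp. Cruzer Micro U3")
```
<br />
For the example device, this yields {{ic|1=hostbus=3}}, {{ic|1=hostaddr=7}}, {{ic|1=vendorid=0x0781}} and {{ic|1=productid=0x5406}}, the values used in the {{ic|usb-host}} options above.<br />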
<br />
=== USB redirection with SPICE ===<br />
<br />
When using [[#SPICE]] it is possible to redirect USB devices from the client to the virtual machine without needing to specify them in the QEMU command. It is possible to configure the number of USB slots available for redirected devices (the number of slots will determine the maximum number of devices which can be redirected simultaneously). The main advantages of using SPICE for redirection compared to the previously-mentioned {{ic|-usbdevice}} method is the possibility of hot-swapping USB devices after the virtual machine has started, without needing to halt it in order to remove USB devices from the redirection or adding new ones. This method of USB redirection also allows us to redirect USB devices over the network, from the client to the server. In summary, it is the most flexible method of using USB devices in a QEMU virtual machine.<br />
<br />
We need to add one EHCI/UHCI controller per available USB redirection slot desired as well as one SPICE redirection channel per slot. For example, adding the following arguments to the QEMU command you use for starting the virtual machine in SPICE mode will start the virtual machine with three available USB slots for redirection:<br />
<br />
{{bc|1=<br />
-device ich9-usb-ehci1,id=usb \<br />
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,multifunction=on \<br />
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2 \<br />
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev1 -device usb-redir,chardev=usbredirchardev1,id=usbredirdev1 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev2 -device usb-redir,chardev=usbredirchardev2,id=usbredirdev2 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev3 -device usb-redir,chardev=usbredirchardev3,id=usbredirdev3<br />
}}<br />
<br />
See [https://www.spice-space.org/usbredir.html SPICE/usbredir] for more information.<br />
<br />
Both {{ic|spicy}} from {{Pkg|spice-gtk}} (''Input > Select USB Devices for redirection'') and {{ic|remote-viewer}} from {{Pkg|virt-viewer}} (''File > USB device selection'') support this feature. Please make sure that you have installed the necessary SPICE Guest Tools on the virtual machine for this functionality to work as expected (see the [[#SPICE]] section for more information).<br />
<br />
{{Warning|Keep in mind that when a USB device is redirected from the client, it will not be usable from the client operating system itself until the redirection is stopped. It is especially important to never redirect input devices (namely mouse and keyboard), since it will then be difficult to access the SPICE client menus to revert the situation, because the client will not respond to the input devices after they are redirected to the virtual machine.}}<br />
<br />
==== Automatic USB forwarding with udev ====<br />
<br />
Normally, forwarded devices must be available at the virtual machine's boot time to be forwarded. If a forwarded device is disconnected, it will not be forwarded anymore.<br />
<br />
You can use [[udev rule]]s to automatically attach a device when it comes online. Create a {{ic|hostdev}} entry somewhere on disk. [[chown]] it to root to prevent other users modifying it.<br />
<br />
{{hc|/usr/local/hostdev-mydevice.xml|2=<br />
<hostdev mode='subsystem' type='usb'><br />
<source><br />
<vendor id='0x03f0'/><br />
<product id='0x4217'/><br />
</source><br />
</hostdev><br />
}}<br />
<br />
Then create a ''udev'' rule which will attach/detach the device:<br />
<br />
{{hc|/usr/lib/udev/rules.d/90-libvirt-mydevice|2=<br />
ACTION=="add", \<br />
SUBSYSTEM=="usb", \<br />
ENV{ID_VENDOR_ID}=="03f0", \<br />
ENV{ID_MODEL_ID}=="4217", \<br />
RUN+="/usr/bin/virsh attach-device GUESTNAME /usr/local/hostdev-mydevice.xml"<br />
ACTION=="remove", \<br />
SUBSYSTEM=="usb", \<br />
ENV{ID_VENDOR_ID}=="03f0", \<br />
ENV{ID_MODEL_ID}=="4217", \<br />
RUN+="/usr/bin/virsh detach-device GUESTNAME /usr/local/hostdev-mydevice.xml"<br />
}}<br />
<br />
[https://rolandtapken.de/blog/2011-04/how-auto-hotplug-usb-devices-libvirt-vms-update-1 Source and further reading].<br />
<br />
=== Enabling KSM ===<br />
<br />
Kernel Samepage Merging (KSM) is a feature of the Linux kernel that allows for an application to register with the kernel to have its pages merged with other processes that also register to have their pages merged. The KSM mechanism allows for guest virtual machines to share pages with each other. In an environment where many of the guest operating systems are similar, this can result in significant memory savings.<br />
<br />
{{Note|Although KSM may reduce memory usage, it may increase CPU usage. Also note some security issues may occur, see [[Wikipedia:Kernel same-page merging]].}}<br />
<br />
To enable KSM:<br />
<br />
# echo 1 > /sys/kernel/mm/ksm/run<br />
<br />
To make it permanent, use [[systemd#systemd-tmpfiles - temporary files|systemd's temporary files]]:<br />
<br />
{{hc|/etc/tmpfiles.d/ksm.conf|<br />
w /sys/kernel/mm/ksm/run - - - - 1<br />
}}<br />
<br />
If KSM is running, and there are pages to be merged (i.e. at least two similar virtual machines are running), then {{ic|/sys/kernel/mm/ksm/pages_shared}} should be non-zero. See https://docs.kernel.org/admin-guide/mm/ksm.html for more information.<br />
<br />
{{Tip|An easy way to see how well KSM is performing is to simply print the contents of all the files in that directory:<br />
<br />
$ grep -r . /sys/kernel/mm/ksm/<br />
<br />
}}<br />
<br />
=== Multi-monitor support ===<br />
<br />
The Linux QXL driver supports four heads (virtual screens) by default. This can be changed via the {{ic|1=qxl.heads=N}} kernel parameter.<br />
<br />
The default VGA memory size for QXL devices is 16M (VRAM size is 64M). This is not sufficient if you would like to enable two 1920x1200 monitors, since that requires 2 × 1920 × 4 (color depth) × 1200 = 17.6 MiB of VGA memory. This can be changed by replacing {{ic|-vga qxl}} with {{ic|<nowiki>-vga none -device qxl-vga,vgamem_mb=32</nowiki>}}. If you ever increase {{ic|vgamem_mb}} beyond 64M, then you also have to increase the {{ic|vram_size_mb}} option.<br />
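<br />
For example, a full invocation with increased QXL video memory might look like this (the memory size and disk image name are placeholders to adapt):<br />
<br />
 $ qemu-system-x86_64 -enable-kvm -m 4G -vga none -device qxl-vga,vgamem_mb=32 ''disk_image''<br />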
<br />
=== Custom display resolution ===<br />
<br />
A custom display resolution can be set with {{ic|1=-device VGA,edid=on,xres=1280,yres=720}} (see [[wikipedia:Extended_Display_Identification_Data|EDID]] and [[wikipedia:Display_resolution|display resolution]]).<br />
<br />
=== Copy and paste ===<br />
<br />
==== SPICE ====<br />
<br />
One way to share the clipboard between the host and the guest is to enable the SPICE remote desktop protocol and access the client with a SPICE client.<br />
One needs to follow the steps described in [[#SPICE]]. A guest run this way will support copy and paste with the host.<br />
<br />
==== qemu-vdagent ====<br />
<br />
QEMU provides its own implementation of the spice vdagent chardev called {{ic|qemu-vdagent}}. It interfaces with the spice-vdagent guest service and allows the guest and host to share a clipboard.<br />
<br />
To access this shared clipboard with QEMU's GTK display, you will need to [[#Custom build|compile QEMU from source]] with the {{ic|--enable-gtk-clipboard}} configure parameter. It is sufficient to replace the installed {{ic|qemu-ui-gtk}} package.<br />
<br />
{{Note|<br />
* Feature request {{Bug|79716}} has been submitted to enable the functionality in the official package.<br />
* The shared clipboard in qemu-ui-gtk has been pushed back to experimental as it can [https://gitlab.com/qemu-project/qemu/-/issues/1150 freeze guests under certain circumstances]. A fix has been proposed to solve the issue upstream.<br />
}}<br />
<br />
Add the following QEMU command line arguments:<br />
<br />
-device virtio-serial,packed=on,ioeventfd=on<br />
-device virtserialport,name=com.redhat.spice.0,chardev=vdagent0<br />
-chardev qemu-vdagent,id=vdagent0,name=vdagent,clipboard=on,mouse=off<br />
<br />
These arguments are also valid if converted to [[Libvirt#QEMU command line arguments|libvirt form]].<br />
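<br />
For instance, a sketch of the chardev argument passed through libvirt's QEMU command line passthrough (the two {{ic|-device}} arguments are passed the same way; the domain element must declare the qemu XML namespace):<br />
<br />
 <domain xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0' type='kvm'><br />
 ...<br />
 <qemu:commandline><br />
 <qemu:arg value='-chardev'/><br />
 <qemu:arg value='qemu-vdagent,id=vdagent0,name=vdagent,clipboard=on,mouse=off'/><br />
 </qemu:commandline><br />
 </domain><br />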
<br />
{{Note|While the spicevmc chardev will start the spice-vdagent service of the guest automatically, the qemu-vdagent chardev may not.}}<br />
<br />
On Linux guests, you may [[start]] the {{ic|spice-vdagent.service}} [[user unit]] manually. On Windows guests, set the ''spice-agent'' service startup type to automatic.<br />
<br />
=== Windows-specific notes ===<br />
<br />
QEMU can run any version of Windows from Windows 95 through Windows 11.<br />
<br />
It is possible to run [[Windows PE]] in QEMU.<br />
<br />
==== Fast startup ====<br />
<br />
{{Note|An administrator account is required to change power settings.}}<br />
<br />
For Windows 8 (or later) guests it is better to disable "Turn on fast startup (recommended)" from the Power Options of the Control Panel as explained in the following [https://www.tenforums.com/tutorials/4189-turn-off-fast-startup-windows-10-a.html forum page], as it causes the guest to hang during every other boot.<br />
<br />
Fast Startup may also need to be disabled for changes to the {{ic|-smp}} option to be properly applied.<br />
<br />
==== Remote Desktop Protocol ====<br />
<br />
If you use a Microsoft Windows guest, you might want to use RDP to connect to it. If you are using a VLAN or are not in the same network as the guest, use:<br />
<br />
$ qemu-system-x86_64 -nographic -nic user,hostfwd=tcp::5555-:3389<br />
<br />
Then connect with either {{Pkg|rdesktop}} or {{Pkg|freerdp}} to the guest. For example:<br />
<br />
$ xfreerdp -g 2048x1152 localhost:5555 -z -x lan<br />
<br />
=== Clone Linux system installed on physical equipment ===<br />
<br />
A Linux system installed on physical hardware can be cloned to run in a QEMU virtual machine. See [https://coffeebirthday.wordpress.com/2018/09/14/clone-linux-system-for-qemu-virtual-machine/ Clone Linux system from hardware for QEMU virtual machine].<br />
<br />
=== Chrooting into arm/arm64 environment from x86_64 ===<br />
<br />
Sometimes it is easier to work directly on a disk image instead of the real ARM based device. This can be achieved by mounting an SD card/storage containing the ''root'' partition and chrooting into it.<br />
<br />
Another use case for an ARM chroot is building ARM packages on an x86_64 machine. Here, the chroot environment can be created from an image tarball from [https://archlinuxarm.org Arch Linux ARM] - see [https://nerdstuff.org/posts/2020/2020-003_simplest_way_to_create_an_arm_chroot/] for a detailed description of this approach.<br />
<br />
Either way, from the chroot it should be possible to run ''pacman'' and install more packages, compile large libraries etc. Since the executables are for the ARM architecture, the translation to x86 needs to be performed by [[QEMU]].<br />
<br />
Install {{Pkg|qemu-user-static}} on the x86_64 machine/host, and {{Pkg|qemu-user-static-binfmt}} to register the qemu binaries to binfmt service.<br />
<br />
''qemu-user-static'' is used to allow the execution of compiled programs from other architectures. This is similar to what is provided by {{Pkg|qemu-emulators-full}}, but the "static" variant is required for chroot. Examples:<br />
<br />
qemu-arm-static path_to_sdcard/usr/bin/ls<br />
qemu-aarch64-static path_to_sdcard/usr/bin/ls<br />
<br />
These two lines execute the {{ic|ls}} command compiled for 32-bit ARM and 64-bit ARM respectively. Note that this will not work without chrooting, because it will look for libraries not present in the host system.<br />
<br />
{{Pkg|qemu-user-static}} allows automatically prefixing the ARM executable with {{ic|qemu-arm-static}} or {{ic|qemu-aarch64-static}}.<br />
<br />
Make sure that the ARM executable support is active:<br />
<br />
{{hc|$ ls /proc/sys/fs/binfmt_misc|<br />
qemu-aarch64 qemu-arm qemu-cris qemu-microblaze qemu-mipsel qemu-ppc64 qemu-riscv64 qemu-sh4 qemu-sparc qemu-sparc64 status<br />
qemu-alpha qemu-armeb qemu-m68k qemu-mips qemu-ppc qemu-ppc64abi32 qemu-s390x qemu-sh4eb qemu-sparc32plus register<br />
}}<br />
<br />
Each executable must be listed.<br />
<br />
If it is not active, [[restart]] {{ic|systemd-binfmt.service}}.<br />
<br />
Mount the SD card to {{ic|/mnt/sdcard}} (the device name may be different).<br />
<br />
# mount --mkdir /dev/mmcblk0p2 /mnt/sdcard<br />
<br />
Mount boot partition if needed (again, use the suitable device name):<br />
<br />
# mount /dev/mmcblk0p1 /mnt/sdcard/boot<br />
<br />
Finally ''chroot'' into the SD card root as described in [[Change root#Using chroot]]:<br />
<br />
# chroot /mnt/sdcard /bin/bash<br />
<br />
Alternatively, you can use ''arch-chroot'' from {{Pkg|arch-install-scripts}}, as it will provide an easier way to get network support:<br />
<br />
# arch-chroot /mnt/sdcard /bin/bash<br />
<br />
You can also use [[systemd-nspawn]] to chroot into the ARM environment:<br />
<br />
# systemd-nspawn -D /mnt/sdcard -M myARMMachine --bind-ro=/etc/resolv.conf<br />
<br />
{{ic|1=--bind-ro=/etc/resolv.conf}} is optional and gives a working network DNS inside the chroot<br />
<br />
==== sudo in chroot ====<br />
<br />
If you install [[sudo]] in the chroot and receive the following error when trying to use it:<br />
<br />
sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?<br />
<br />
then you may need to modify the binfmt flags, for example for {{ic|aarch64}}:<br />
<br />
# cp /usr/lib/binfmt.d/qemu-aarch64-static.conf /etc/binfmt.d/<br />
# vi /etc/binfmt.d/qemu-aarch64-static.conf<br />
<br />
and append a {{ic|C}} to the flags at the end of the line in this file:<br />
<br />
:qemu-aarch64:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7\x00:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff:/usr/bin/qemu-aarch64-static:FPC<br />
<br />
Then [[restart]] {{ic|systemd-binfmt.service}} and check that the changes have taken effect (note the {{ic|C}} on the {{ic|flags}} line):<br />
<br />
{{hc|# cat /proc/sys/fs/binfmt_misc/qemu-aarch64|<br />
enabled<br />
interpreter /usr/bin/qemu-aarch64-static<br />
flags: POCF<br />
offset 0<br />
magic 7f454c460201010000000000000000000200b700<br />
mask ffffffffffffff00fffffffffffffffffeffffff<br />
}}<br />
<br />
See the "flags" section of the [https://docs.kernel.org/admin-guide/binfmt-misc.html kernel binfmt documentation] for more information.<br />
<br />
=== Not grabbing mouse input ===<br />
<br />
Tablet mode, set with the following options, reports absolute pointer coordinates to the guest, which has the side effect that QEMU does not need to grab mouse input in its window:<br />
<br />
 -usb -device usb-tablet<br />
<br />
It works with several {{ic|-vga}} backends, one of which is virtio.<br />
<br />
== Troubleshooting ==<br />
<br />
{{Merge|QEMU/Troubleshooting|This section is long enough to be split into a dedicated subpage.}}<br />
<br />
=== Mouse cursor is jittery or erratic ===<br />
<br />
If the cursor jumps around the screen uncontrollably, entering this on the terminal before starting QEMU might help:<br />
<br />
$ export SDL_VIDEO_X11_DGAMOUSE=0<br />
<br />
If this helps, you can add this to your {{ic|~/.bashrc}} file.<br />
<br />
=== No visible Cursor ===<br />
<br />
Add {{ic|1=-display default,show-cursor=on}} to QEMU's options to see a mouse cursor.<br />
<br />
If that still does not work, make sure you have set your display device appropriately, for example: {{ic|-vga qxl}}.<br />
<br />
Another option to try is {{ic|-usb -device usb-tablet}} as mentioned in [[#Mouse integration]]. This overrides the default PS/2 mouse emulation and synchronizes pointer location between host and guest as an added bonus.<br />
<br />
=== Two different mouse cursors are visible ===<br />
<br />
Apply the tip [[#Mouse integration]].<br />
<br />
=== Keyboard issues when using VNC ===<br />
<br />
When using VNC, you might experience keyboard problems described (in gory details) [https://www.berrange.com/posts/2010/07/04/more-than-you-or-i-ever-wanted-to-know-about-virtual-keyboard-handling/ here]. The solution is ''not'' to use the {{ic|-k}} option on QEMU, and to use {{ic|gvncviewer}} from {{Pkg|gtk-vnc}}. See also [https://www.mail-archive.com/libvir-list@redhat.com/msg13340.html this] message posted on libvirt's mailing list.<br />
<br />
=== Keyboard seems broken or the arrow keys do not work ===<br />
<br />
Should you find that some of your keys do not work or "press" the wrong key (in particular, the arrow keys), you likely need to specify your keyboard layout as an option. The keyboard layouts can be found in {{ic|/usr/share/qemu/keymaps/}}.<br />
<br />
$ qemu-system-x86_64 -k ''keymap'' ''disk_image''<br />
<br />
=== Could not read keymap file ===<br />
<br />
qemu-system-x86_64: -display vnc=0.0.0.0:0: could not read keymap file: 'en'<br />
<br />
is caused by an invalid ''keymap'' passed to the {{ic|-k}} argument. For example, {{ic|en}} is invalid, but {{ic|en-us}} is valid - see {{ic|/usr/share/qemu/keymaps/}}.<br />
<br />
=== Guest display stretches on window resize ===<br />
<br />
To restore default window size, press {{ic|Ctrl+Alt+u}}.<br />
<br />
=== ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy ===<br />
<br />
If an error message like this is printed when starting QEMU with {{ic|-enable-kvm}} option:<br />
<br />
ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy<br />
failed to initialize KVM: Device or resource busy<br />
<br />
that means another [[hypervisor]] is currently running. Running several hypervisors in parallel is not supported, as only one of them can use the hardware virtualization extensions at a time.<br />
<br />
=== libgfapi error message ===<br />
<br />
The error message displayed at startup:<br />
<br />
Failed to open module: libgfapi.so.0: cannot open shared object file: No such file or directory<br />
<br />
[[Install]] {{Pkg|glusterfs}} or ignore the error message, as GlusterFS is an optional dependency.<br />
<br />
=== Kernel panic on LIVE-environments ===<br />
<br />
If you boot a live environment (or any system, for that matter), you may encounter this:<br />
<br />
[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown block(0,0)<br />
<br />
or some other boot-hindering error (e.g. failure to unpack the initramfs, or a service that cannot start).<br />
Try starting the virtual machine with the {{ic|-m ''VALUE''}} switch and an appropriate amount of RAM; if the amount of RAM is too low, you will likely encounter issues similar to the above.<br />
<br />
=== Windows 7 guest suffers low-quality sound ===<br />
<br />
Using the {{ic|hda}} audio driver for Windows 7 guest may result in low-quality sound. Changing the audio driver to {{ic|ac97}} by passing the {{ic|-soundhw ac97}} arguments to QEMU and installing the AC97 driver from [https://www.realtek.com/en/component/zoo/category/pc-audio-codecs-ac-97-audio-codecs-software Realtek AC'97 Audio Codecs] in the guest may solve the problem. See [https://bugzilla.redhat.com/show_bug.cgi?id=1176761#c16 Red Hat Bugzilla – Bug 1176761] for more information.<br />
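<br />
Note that the {{ic|-soundhw}} option has been removed in recent QEMU versions; there, a rough equivalent (assuming the PulseAudio backend) is:<br />
<br />
 -audiodev pa,id=snd0 -device AC97,audiodev=snd0<br />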
<br />
=== Could not access KVM kernel module: Permission denied ===<br />
<br />
If you encounter the following error:<br />
<br />
libvirtError: internal error: process exited while connecting to monitor: Could not access KVM kernel module: Permission denied failed to initialize KVM: Permission denied<br />
<br />
Systemd 234 assigns a dynamic ID to the {{ic|kvm}} group (see {{Bug|54943}}). To avoid this error, you need to edit the file {{ic|/etc/libvirt/qemu.conf}} and change the line with {{ic|1=group = "78"}} to {{ic|1=group = "kvm"}}.<br />
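<br />
The relevant part of the file should then read:<br />
<br />
{{hc|/etc/libvirt/qemu.conf|2=<br />
group = "kvm"<br />
}}<br />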
<br />
=== "System Thread Exception Not Handled" when booting a Windows virtual machine ===<br />
<br />
Windows 8 or Windows 10 guests may raise a generic compatibility exception at boot, namely "System Thread Exception Not Handled", which tends to be caused by legacy drivers acting strangely on real machines. On KVM machines this issue can generally be solved by setting the CPU model to {{ic|core2duo}}.<br />
<br />
=== Certain Windows games/applications crashing/causing a bluescreen ===<br />
<br />
Occasionally, applications running in the virtual machine may crash unexpectedly, whereas they would run normally on a physical machine. If, while running {{ic|dmesg -wH}} as root, you encounter an error mentioning {{ic|MSR}}, the reason for those crashes is that KVM injects a [[wikipedia:General protection fault|General protection fault]] (GPF) when the guest tries to access unsupported [[wikipedia:Model-specific register|Model-specific registers]] (MSRs) - this often results in guest applications/OS crashing. A number of those issues can be solved by passing the {{ic|1=ignore_msrs=1}} option to the KVM module, which will ignore unimplemented MSRs.<br />
<br />
{{hc|/etc/modprobe.d/kvm.conf|2=<br />
...<br />
options kvm ignore_msrs=1<br />
...<br />
}}<br />
<br />
Cases where adding this option might help:<br />
<br />
* GeForce Experience complaining about an unsupported CPU being present.<br />
* StarCraft 2 and L.A. Noire reliably blue-screening Windows 10 with {{ic|KMODE_EXCEPTION_NOT_HANDLED}}. The blue screen information does not identify a driver file in these cases.<br />
<br />
{{Warning|While this is normally safe and some applications might not work without it, silently ignoring unknown MSR accesses could potentially break other software within the virtual machine or in other virtual machines.}}<br />
<br />
=== Applications in the virtual machine experience long delays or take a long time to start ===<br />
<br />
{{Out of date|No longer true since kernel 5.6}}<br />
<br />
This may be caused by insufficient available entropy in the virtual machine. Consider allowing the guest to access the hosts's entropy pool by adding a [https://wiki.qemu.org/Features/VirtIORNG VirtIO RNG device] to the virtual machine, or by installing an entropy generating daemon such as [[Haveged]].<br />
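<br />
For example, a VirtIO RNG device backed by the host's {{ic|/dev/urandom}} can be added with the following arguments:<br />
<br />
 -object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng-pci,rng=rng0<br />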
<br />
Anecdotally, OpenSSH takes a while to start accepting connections under insufficient entropy, without the logs revealing why.<br />
<br />
=== High interrupt latency and microstuttering ===<br />
<br />
This problem manifests itself as small pauses (stutters) and is particularly noticeable in graphics-intensive applications, such as games.<br />
<br />
* One of the causes is CPU power saving features, which are controlled by [[CPU frequency scaling]]. Change this to {{ic|performance}} for all processor cores. <br />
* Another possible cause is PS/2 inputs. Switch from PS/2 to Virtio inputs, see [[PCI passthrough via OVMF#Passing keyboard/mouse via Evdev]].<br />
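<br />
For the first cause, the {{ic|performance}} governor can be set for all cores with ''cpupower'' (from the {{Pkg|cpupower}} package):<br />
<br />
 # cpupower frequency-set -g performance<br />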
<br />
=== QXL video causes low resolution ===<br />
<br />
QEMU 4.1.0 introduced a regression where QXL video can fall back to low resolutions, when being displayed through spice. [https://bugs.launchpad.net/qemu/+bug/1843151] For example, when KMS starts, text resolution may become as low as 4x10 characters. When trying to increase GUI resolution, it may go to the lowest supported resolution.<br />
<br />
As a workaround, create your device in this form:<br />
<br />
-device qxl-vga,max_outputs=1...<br />
<br />
=== Virtual machine not booting when using a Secure Boot enabled OVMF ===<br />
<br />
{{ic|OVMF_CODE.secboot.4m.fd}} and {{ic|OVMF_CODE.secboot.fd}} files from {{Pkg|edk2-ovmf}} are built with [[Wikipedia:System Management Mode|SMM]] support. If S3 support is not disabled in the virtual machine, then the virtual machine might not boot at all.<br />
<br />
Add the {{ic|1=-global ICH9-LPC.disable_s3=1}} option to the ''qemu'' command.<br />
<br />
See {{Bug|59465}} and https://github.com/tianocore/edk2/blob/master/OvmfPkg/README for more details and the required options to use Secure Boot in QEMU.<br />
<br />
=== Virtual machine not booting into Arch ISO ===<br />
<br />
When trying to boot the virtual machine for the first time from an Arch ISO image, the boot process hangs. After adding {{ic|1=console=ttyS0}} to the kernel boot options (press {{ic|e}} in the boot menu), you will get more boot messages and the following error:<br />
<br />
:: Mounting '/dev/disk/by-label/ARCH_202204' to '/run/archiso/bootmnt'<br />
Waiting 30 seconds for device /dev/disk/by-label/ARCH_202204 ...<br />
ERROR: '/dev/disk/by-label/ARCH_202204' device did not show up after 30 seconds...<br />
Falling back to interactive prompt<br />
You can try to fix the problem manually, log out when you are finished<br />
sh: can't access tty; job control turned off<br />
<br />
The error message does not give a good clue as to what the real issue is. The problem is with the default 128MB of RAM that QEMU allocates to the virtual machine. Increasing the limit to 1024MB with {{ic|-m 1024}} solves the issue and lets the system boot. You can continue installing Arch Linux as usual after that. Once the installation is complete, the memory allocation for the virtual machine can be decreased. The need for 1024MB is due to RAM disk requirements and size of the installation media. See [https://lists.archlinux.org/archives/list/arch-releng@lists.archlinux.org/message/D5HSGOFTPGYI6IZUEB3ZNAX4D3F3ID37/ this message on the arch-releng mailing list] and [https://bbs.archlinux.org/viewtopic.php?id=204023 this forum thread].<br />
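<br />
For example, to boot the ISO with 1 GiB of memory (the ISO file name is a placeholder):<br />
<br />
 $ qemu-system-x86_64 -enable-kvm -m 1024 -cdrom ''archlinux.iso''<br />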
<br />
=== Guest CPU interrupts are not firing ===<br />
<br />
If you are writing your own operating system by following the [https://wiki.osdev.org/ OSDev wiki], or are simply stepping through the guest architecture assembly code using QEMU's {{ic|gdb}} interface (the {{ic|-s}} flag), it is useful to know that many emulators, QEMU included, implement only some CPU interrupts, leaving many hardware interrupts unimplemented. One way to know if your code is firing an interrupt is by using:<br />
<br />
-d int<br />
<br />
to enable showing interrupts/exceptions on stdout.<br />
<br />
To see what other guest debugging features QEMU has to offer, see:<br />
<br />
qemu-system-x86_64 -d help<br />
<br />
or replace {{ic|x86_64}} with your chosen guest architecture.<br />
<br />
=== KDE with sddm does not start spice-vdagent at login automatically ===<br />
<br />
Remove or comment out {{ic|X-GNOME-Autostart-Phase{{=}}WindowManager}} from {{ic|/etc/xdg/autostart/spice-vdagent.desktop}}. [https://github.com/systemd/systemd/issues/18791]<br />
<br />
=== Error starting domain: Requested operation is not valid: network 'default' is not active ===<br />
<br />
If for any reason the default network is deactivated, you will not be able to start any guest virtual machines which are configured to use it. As a first attempt, simply try to start the network with ''virsh'':<br />
<br />
# virsh net-start default<br />
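<br />
If this succeeds, you may also want to mark the network for autostart, so that it is activated automatically whenever ''libvirtd'' starts:<br />
<br />
 # virsh net-autostart default<br />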
<br />
For additional troubleshooting steps, see [https://www.xmodulo.com/network-default-is-not-active.html].<br />
<br />
== See also ==<br />
<br />
* [https://qemu.org Official QEMU website]<br />
* [https://www.linux-kvm.org Official KVM website]<br />
* [https://qemu.weilnetz.de/doc/6.0/ QEMU Emulator User Documentation]<br />
* [[Wikibooks:QEMU|QEMU Wikibook]]<br />
* [http://alien.slackbook.org/dokuwiki/doku.php?id=slackware:qemu Hardware virtualization with QEMU] by AlienBOB (last updated in 2008)<br />
* [http://blog.falconindy.com/articles/build-a-virtual-army.html Building a Virtual Army] by Falconindy<br />
* [https://git.qemu.org/?p=qemu.git;a=tree;f=docs Latest docs]<br />
* [https://qemu.weilnetz.de/ QEMU on Windows]<br />
* [[wikipedia:Qemu|Wikipedia]]<br />
* [[debian:QEMU|Debian Wiki - QEMU]]<br />
* [https://people.gnome.org/~markmc/qemu-networking.html QEMU Networking on gnome.org]{{Dead link|2022|09|22|status=404}}<br />
* [http://bsdwiki.reedmedia.net/wiki/networking_qemu_virtual_bsd_systems.html Networking QEMU Virtual BSD Systems]<br />
* [https://www.gnu.org/software/hurd/hurd/running/qemu.html QEMU on gnu.org]<br />
* [https://wiki.freebsd.org/qemu QEMU on FreeBSD as host]<br />
* [https://wiki.mikejung.biz/KVM_/_Xen KVM/QEMU Virtio Tuning and SSD VM Optimization Guide]{{Dead link|2022|09|22|status=404}}<br />
* [https://doc.opensuse.org/documentation/leap/virtualization/html/book-virt/part-virt-qemu.html Managing Virtual Machines with QEMU - openSUSE documentation]<br />
* [https://www.ibm.com/support/knowledgecenter/en/linuxonibm/liaat/liaatkvm.htm KVM on IBM Knowledge Center]</div>Recolichttps://wiki.archlinux.org/index.php?title=QEMU&diff=775481QEMU2023-04-13T21:03:02Z<p>Recolic: /* Trusted Platform Module emulation */ Add a hint: `-cpu host` and hyper-v related options might cause TPM device problem.</p>
<hr />
<div>[[Category:Emulation]]<br />
[[Category:Hypervisors]]<br />
[[de:QEMU]]<br />
[[es:QEMU]]<br />
[[fr:QEMU]]<br />
[[ja:QEMU]]<br />
[[zh-hans:QEMU]]<br />
{{Related articles start}}<br />
{{Related|:Category:Hypervisors}}<br />
{{Related|Libvirt}}<br />
{{Related|QEMU/Guest graphics acceleration}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related articles end}}<br />
<br />
According to the [https://wiki.qemu.org/Main_Page QEMU about page], "QEMU is a generic and open source machine emulator and virtualizer."<br />
<br />
When used as a machine emulator, QEMU can run OSes and programs made for one machine (e.g. an ARM board) on a different machine (e.g. your x86 PC). By using dynamic translation, it achieves very good performance.<br />
<br />
QEMU can use other hypervisors like [[Xen]] or [[KVM]] to use CPU extensions ([[Wikipedia:Hardware-assisted virtualization|HVM]]) for virtualization. When used as a virtualizer, QEMU achieves near-native performance by executing the guest code directly on the host CPU.<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|qemu-full}} package (or {{Pkg|qemu-base}} for the version without GUI) and the following optional packages as needed:<br />
<br />
* {{Pkg|qemu-block-gluster}} - [[Glusterfs]] block support<br />
* {{Pkg|qemu-block-iscsi}} - [[iSCSI]] block support<br />
* {{Pkg|samba}} - [[Samba|SMB/CIFS]] server support<br />
<br />
Alternatively, {{Pkg|qemu-user-static}} exists as a usermode and static variant.<br />
<br />
=== QEMU variants ===<br />
<br />
QEMU is offered in several variants suited for different use cases.<br />
<br />
As a first classification, QEMU is offered in full-system and usermode emulation modes:<br />
<br />
; Full-system emulation<br />
: In this mode, QEMU emulates a full system, including one or several processors and various peripherals. It is more accurate but slower, and does not require the emulated OS to be Linux.<br />
: QEMU commands for full-system emulation are named {{ic|qemu-system-''target_architecture''}}, e.g. {{ic|qemu-system-x86_64}} for emulating [[Wikipedia:x86_64|x86_64]] CPUs, {{ic|qemu-system-i386}} for Intel [[Wikipedia:i386|32-bit x86]] CPUs, {{ic|qemu-system-arm}} for [[Wikipedia:ARM architecture family#32-bit architecture|ARM (32 bits)]], {{ic|qemu-system-aarch64}} for [[Wikipedia:AArch64|ARM64]], etc.<br />
: If the target architecture matches the host CPU, this mode may still benefit from a significant speedup by using a hypervisor like [[#Enabling KVM|KVM]] or Xen.<br />
; [https://www.qemu.org/docs/master/user/main.html Usermode emulation]: In this mode, QEMU is able to invoke a Linux executable compiled for a (potentially) different architecture by leveraging the host system resources. There may be compatibility issues, e.g. some features may not be implemented, dynamically linked executables will not work out of the box (see [[#Chrooting into arm/arm64 environment from x86_64]] to address this) and only Linux is supported (although [https://wiki.winehq.org/Emulation Wine may be used] for running Windows executables).<br />
: QEMU commands for usermode emulation are named {{ic|qemu-''target_architecture''}}, e.g. {{ic|qemu-x86_64}} for emulating 64-bit CPUs.<br />
<br />
QEMU is offered in dynamically-linked and statically-linked variants:<br />
<br />
; Dynamically-linked (default): {{ic|qemu-*}} commands depend on the host OS libraries, so executables are smaller.<br />
; Statically-linked: {{ic|qemu-*}} commands can be copied to any Linux system with the same architecture.<br />
<br />
In the case of Arch Linux, full-system emulation is offered as:<br />
<br />
; Non-headless (default): This variant enables GUI features that require additional dependencies (like SDL or GTK).<br />
; Headless: This is a slimmer variant that does not require GUI (this is suitable e.g. for servers).<br />
<br />
Note that headless and non-headless versions install commands with the same name (e.g. {{ic|qemu-system-x86_64}}) and thus cannot be both installed at the same time.<br />
<br />
=== Details on packages available in Arch Linux ===<br />
<br />
* The {{Pkg|qemu-desktop}} package provides the {{ic|x86_64}} architecture emulators for full-system emulation ({{ic|qemu-system-x86_64}}). The {{Pkg|qemu-emulators-full}} package provides the {{ic|x86_64}} usermode variant ({{ic|qemu-x86_64}}) and also for the rest of supported architectures it includes both full-system and usermode variants (e.g. {{ic|qemu-system-arm}} and {{ic|qemu-arm}}).<br />
* The headless versions of these packages (only applicable to full-system emulation) are {{Pkg|qemu-base}} ({{ic|x86_64}}-only) and {{Pkg|qemu-emulators-full}} (rest of architectures).<br />
* Full-system emulation can be expanded with some QEMU modules present in separate packages: {{Pkg|qemu-block-gluster}}, {{Pkg|qemu-block-iscsi}} and {{Pkg|qemu-guest-agent}}.<br />
* {{Pkg|qemu-user-static}} provides a usermode and static variant for all target architectures supported by QEMU. The installed QEMU commands are named {{ic|qemu-''target_architecture''-static}}, for example, {{ic|qemu-x86_64-static}} for Intel 64-bit CPUs.<br />
<br />
{{Note|At present, Arch does not offer a full-system mode and statically linked variant (neither officially nor via AUR), as this is usually not needed.}}<br />
<br />
== Graphical front-ends for QEMU ==<br />
<br />
Unlike other virtualization programs such as [[VirtualBox]] and [[VMware]], QEMU does not provide a GUI to manage virtual machines (other than the window that appears when running a virtual machine), nor does it provide a way to create persistent virtual machines with saved settings. All parameters to run a virtual machine must be specified on the command line at every launch, unless you have created a custom script to start your virtual machine(s).<br />
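<br />
For example, a minimal launch script might look like this (all option values and the disk image name are placeholders to adapt):<br />
<br />
 #!/bin/sh<br />
 exec qemu-system-x86_64 -enable-kvm -cpu host -m 4G ''disk_image'' "$@"<br />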
<br />
[[Libvirt]] provides a convenient way to manage QEMU virtual machines. See [[Libvirt#Client|list of libvirt clients]] for available front-ends.<br />
<br />
Other GUI front-ends for QEMU:<br />
<br />
* {{App|AQEMU|QEMU GUI written in Qt5.|https://github.com/tobimensch/aqemu|{{AUR|aqemu}}}}<br />
<br />
== Creating new virtualized system ==<br />
<br />
=== Creating a hard disk image ===<br />
<br />
{{Accuracy|If I get the man page right the raw format only allocates the full size if the filesystem does not support "holes" or it is <br />
explicitly told to preallocate. See {{man|1|qemu-img|NOTES}}.}} <br />
<br />
{{Tip|See [[Wikibooks:QEMU/Images]] for more information on QEMU images.}}<br />
<br />
To run QEMU you will need a hard disk image, unless you are booting a live system from CD-ROM or the network (and not doing so to install an operating system to a hard disk image). A hard disk image is a file which stores the contents of the emulated hard disk.<br />
<br />
A hard disk image can be ''raw'', so that it is literally byte-by-byte the same as what the guest sees. This method provides the least I/O overhead. Note that a raw image only consumes the full capacity of the guest hard drive on the host if the host file system does not support sparse files ("holes"), or if preallocation is explicitly requested; see {{man|1|qemu-img|NOTES}}.<br />
<br />
Alternatively, the hard disk image can be in a format such as ''qcow2'' which only allocates space to the image file when the guest operating system actually writes to those sectors on its virtual hard disk. The image appears as the full size to the guest operating system, even though it may take up only a very small amount of space on the host system. This image format also supports QEMU snapshotting functionality (see [[#Creating and managing snapshots via the monitor console]] for details). However, using this format instead of ''raw'' will likely affect performance.<br />
<br />
QEMU provides the {{ic|qemu-img}} command to create hard disk images. For example to create a 4 GiB image in the ''raw'' format:<br />
<br />
$ qemu-img create -f raw ''image_file'' 4G<br />
<br />
You may use {{ic|-f qcow2}} to create a ''qcow2'' disk instead.<br />
<br />
{{Note|You can also simply create a ''raw'' image by creating a file of the needed size using {{ic|dd}} or {{ic|fallocate}}.}}<br />
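As a minimal illustration of the note above, a sparse ''raw'' image can be created with standard tools alone (the file names are placeholders):<br />
<br />
```shell
# Create a 512 MiB sparse raw image without qemu-img.
# "seek" jumps to the end of the file, so no data blocks are written:
dd if=/dev/zero of=disk.img bs=1 count=0 seek=512M status=none

# Equivalent using truncate from coreutils:
truncate -s 512M disk2.img

# Both files report the full size but occupy almost no space on disk:
du -h --apparent-size disk.img
du -h disk.img
```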
<br />
{{Warning|If you store the hard disk images on a [[Btrfs]] file system, you should consider disabling [[Btrfs#Copy-on-Write (CoW)|Copy-on-Write]] for the directory before creating any images.}}<br />
<br />
==== Overlay storage images ====<br />
<br />
You can create a storage image once (the 'backing' image) and have QEMU keep mutations to this image in an overlay image. This allows you to revert to a previous state of the storage image: simply create a new overlay image based on the original backing image.<br />
<br />
To create an overlay image, issue a command like:<br />
<br />
$ qemu-img create -o backing_file=''img1.raw'',backing_fmt=''raw'' -f ''qcow2'' ''img1.cow''<br />
<br />
After that you can run your QEMU VM as usual (see [[#Running virtualized system]]):<br />
<br />
$ qemu-system-x86_64 ''img1.cow''<br />
<br />
The backing image will then be left intact and mutations to this storage will be recorded in the overlay image file.<br />
<br />
When the path to the backing image changes, repair is required.<br />
<br />
{{Warning|The backing image's absolute filesystem path is stored in the (binary) overlay image file. Changing the backing image's path requires some effort.}}<br />
<br />
Make sure that the original backing image's path still leads to this image. If necessary, make a symbolic link at the original path to the new path. Then issue a command like:<br />
<br />
$ qemu-img rebase -b ''/new/img1.raw'' ''/new/img1.cow''<br />
<br />
At your discretion, you may alternatively perform an 'unsafe' rebase where the old path to the backing image is not checked:<br />
<br />
$ qemu-img rebase -u -b ''/new/img1.raw'' ''/new/img1.cow''<br />
<br />
==== Resizing an image ====<br />
<br />
{{Warning|Resizing an image containing an NTFS boot file system could make the operating system installed on it unbootable. It is recommended to create a backup first.}}<br />
<br />
The {{ic|qemu-img}} executable has the {{ic|resize}} option, which enables easy resizing of a hard drive image. It works for both ''raw'' and ''qcow2''. For example, to increase image space by 10 GiB, run:<br />
<br />
$ qemu-img resize ''disk_image'' +10G<br />
<br />
After enlarging the disk image, you must use file system and partitioning tools inside the virtual machine to actually begin using the new space. When shrinking a disk image, you must '''first reduce the allocated file systems and partition sizes''' using the file system and partitioning tools inside the virtual machine and then shrink the disk image accordingly, otherwise shrinking the disk image will result in data loss! For a Windows guest, open the "create and format hard disk partitions" control panel.<br />
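For a ''raw'' image, growing simply extends the underlying file; the following sketch illustrates this with coreutils only ({{ic|qemu-img resize}} does the equivalent plus format-specific bookkeeping; the file name is a placeholder):<br />
<br />
```shell
# Create a 1 GiB sparse raw image, then grow it by 512 MiB:
truncate -s 1G disk.img
truncate -s +512M disk.img    # comparable to: qemu-img resize disk.img +10G

# The guest must still grow its partitions and file systems to use the space.
stat -c %s disk.img           # new size in bytes
```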
<br />
==== Converting an image ====<br />
<br />
You can convert an image to other formats using {{ic|qemu-img convert}}. This example shows how to convert a ''raw'' image to ''qcow2'':<br />
<br />
$ qemu-img convert -f raw -O qcow2 ''input''.img ''output''.qcow2<br />
<br />
This will not remove the original input file.<br />
<br />
=== Preparing the installation media ===<br />
<br />
To install an operating system into your disk image, you need the installation medium (e.g. optical disk, USB-drive, or ISO image) for the operating system. The installation medium should not be mounted because QEMU accesses the media directly.<br />
<br />
{{Tip|If using an optical disk, it is a good idea to first dump the media to a file because this both improves performance and does not require you to have direct access to the devices (that is, you can run QEMU as a regular user without having to change access permissions on the media's device file). For example, if the CD-ROM device node is named {{ic|/dev/cdrom}}, you can dump it to a file with the command: {{bc|1=$ dd if=/dev/cdrom of=''cd_image.iso'' bs=4k}}}}<br />
<br />
=== Installing the operating system ===<br />
<br />
This is the first time you will need to start the emulator. To install the operating system on the disk image, you must attach both the disk image and the installation media to the virtual machine, and have it boot from the installation media.<br />
<br />
For example, on x86_64 systems, to install from a bootable ISO file as CD-ROM and a raw disk image:<br />
<br />
$ qemu-system-x86_64 -cdrom ''iso_image'' -boot order=d -drive file=''disk_image'',format=raw<br />
<br />
See {{man|1|qemu}} for more information about loading other media types (such as floppy, disk images or physical drives) and [[#Running virtualized system]] for other useful options.<br />
<br />
After the operating system has finished installing, the QEMU image can be booted directly (see [[#Running virtualized system]]).<br />
<br />
{{Note|By default only 128 MiB of memory is assigned to the machine. The amount of memory can be adjusted with the {{ic|-m}} switch, for example {{ic|-m 512M}} or {{ic|-m 2G}}.}}<br />
<br />
{{Tip|<br />
* Instead of specifying {{ic|1=-boot order=x}}, some users may feel more comfortable using a boot menu: {{ic|1=-boot menu=on}}, at least during configuration and experimentation.<br />
* When running QEMU in headless mode, it starts a local VNC server on port 5900 by default. You can use [[TigerVNC]] to connect to the guest OS: {{ic|vncviewer :5900}}<br />
* If you need to replace floppies or CDs as part of the installation process, you can use the QEMU machine monitor (press {{ic|Ctrl+Alt+2}} in the virtual machine's window) to remove and attach storage devices to a virtual machine. Type {{ic|info block}} to see the block devices, and use the {{ic|change}} command to swap out a device. Press {{ic|Ctrl+Alt+1}} to go back to the virtual machine.<br />
}}<br />
<br />
== Running virtualized system ==<br />
<br />
{{ic|qemu-system-*}} binaries (for example {{ic|qemu-system-i386}} or {{ic|qemu-system-x86_64}}, depending on guest's architecture) are used to run the virtualized guest. The usage is:<br />
<br />
$ qemu-system-x86_64 ''options'' ''disk_image''<br />
<br />
Options are the same for all {{ic|qemu-system-*}} binaries, see {{man|1|qemu}} for documentation of all options.<br />
<br />
By default, QEMU will show the virtual machine's video output in a window. One thing to keep in mind: when you click inside the QEMU window, the mouse pointer is grabbed. To release it, press {{ic|Ctrl+Alt+g}}.<br />
<br />
{{Warning|QEMU should never be run as root. If you must launch it in a script as root, you should use the {{ic|-runas}} option to make QEMU drop root privileges.}}<br />
<br />
=== Enabling KVM ===<br />
<br />
KVM (''Kernel-based Virtual Machine'') full virtualization must be supported by your Linux kernel and your hardware, and necessary [[kernel modules]] must be loaded. See [[KVM]] for more information.<br />
<br />
To start QEMU in KVM mode, append {{ic|-accel kvm}} to the additional start options. To check if KVM is enabled for a running VM, enter the [[#QEMU monitor]] and type {{ic|info kvm}}.<br />
<br />
{{Note|<br />
* The argument {{ic|1=accel=kvm}} of the {{ic|-machine}} option is equivalent to the {{ic|-enable-kvm}} or the {{ic|-accel kvm}} option.<br />
* CPU model {{ic|host}} requires KVM.<br />
* If you start your VM with a GUI tool and experience very bad performance, you should check for proper KVM support, as QEMU may be falling back to software emulation.<br />
* KVM needs to be enabled in order to start Windows 7 or Windows 8 properly without a ''blue screen''.<br />
}}<br />
<br />
=== Enabling IOMMU (Intel VT-d/AMD-Vi) support ===<br />
<br />
First enable IOMMU, see [[PCI passthrough via OVMF#Setting up IOMMU]].<br />
<br />
Add {{ic|-device intel-iommu}} to create the IOMMU device:<br />
<br />
$ qemu-system-x86_64 '''-enable-kvm -machine q35 -device intel-iommu''' -cpu host ..<br />
<br />
{{Note|<br />
On Intel CPU based systems creating an IOMMU device in a QEMU guest with {{ic|-device intel-iommu}} will disable PCI passthrough with an error like: {{bc|Device at bus pcie.0 addr 09.0 requires iommu notifier which is currently not supported by intel-iommu emulation}} While adding the kernel parameter {{ic|1=intel_iommu=on}} is still needed for remapping IO (e.g. [[PCI passthrough via OVMF#Isolating the GPU|PCI passthrough with vfio-pci]]), {{ic|-device intel-iommu}} should not be set if PCI passthrough is required.<br />
}}<br />
<br />
=== Booting in UEFI mode ===<br />
<br />
The default firmware used by QEMU is [https://www.coreboot.org/SeaBIOS SeaBIOS], which is a Legacy BIOS implementation. QEMU uses {{ic|/usr/share/qemu/bios-256k.bin}} (provided by the {{Pkg|seabios}} package) as a default read-only (ROM) image. You can use the {{ic|-bios}} argument to select another firmware file. However, UEFI requires writable memory to work properly, so you need to emulate [https://wiki.qemu.org/Features/PC_System_Flash PC System Flash] instead.<br />
<br />
[https://github.com/tianocore/tianocore.github.io/wiki/OVMF OVMF] is a TianoCore project to enable UEFI support for Virtual Machines. It can be [[install]]ed with the {{Pkg|edk2-ovmf}} package.<br />
<br />
There are two ways to use OVMF as a firmware. The first is to copy {{ic|/usr/share/edk2-ovmf/x64/OVMF.fd}}, make the copy writable and use it as a pflash drive:<br />
<br />
-drive if=pflash,format=raw,file=''/copy/of/OVMF.fd''<br />
<br />
All changes to the UEFI settings will be saved directly to this file.<br />
<br />
Another, preferable way is to split OVMF into two files: the first is read-only and stores the firmware executable, while the second is used as a writable variable store. The advantage is that you can use the firmware file directly without copying, so it will be updated automatically by [[pacman]].<br />
<br />
Use {{ic|/usr/share/edk2-ovmf/x64/OVMF_CODE.fd}} as the first, read-only pflash drive. Copy {{ic|/usr/share/edk2-ovmf/x64/OVMF_VARS.fd}}, make it writable and use it as the second, writable pflash drive:<br />
<br />
-drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2-ovmf/x64/OVMF_CODE.fd \<br />
-drive if=pflash,format=raw,file=''/copy/of/OVMF_VARS.fd''<br />
<br />
=== Trusted Platform Module emulation ===<br />
<br />
QEMU can emulate [[Trusted Platform Module]], which is required by some systems such as Windows 11.<br />
<br />
[[Install]] the {{Pkg|swtpm}} package, which provides a software TPM implementation. Create some directory for storing TPM data ({{ic|''/path/to/mytpm''}} will be used as an example). Run this command to start the emulator:<br />
<br />
$ swtpm socket --tpm2 --tpmstate dir=''/path/to/mytpm'' --ctrl type=unixio,path=''/path/to/mytpm/swtpm-sock''<br />
<br />
{{ic|''/path/to/mytpm/swtpm-sock''}} will be created by ''swtpm'': this is a UNIX socket to which QEMU will connect. You can put it in any directory.<br />
<br />
By default, ''swtpm'' starts a TPM version 1.2 emulator. The {{ic|--tpm2}} option enables TPM 2.0 emulation.<br />
<br />
Finally, add the following options to QEMU:<br />
<br />
-chardev socket,id=chrtpm,path=''/path/to/mytpm/swtpm-sock'' \<br />
-tpmdev emulator,id=tpm0,chardev=chrtpm \<br />
-device tpm-tis,tpmdev=tpm0<br />
<br />
and TPM will be available inside the VM. After shutting down the VM, ''swtpm'' will be automatically terminated.<br />
<br />
See [https://qemu-project.gitlab.io/qemu/specs/tpm.html the QEMU documentation] for more information. <br />
<br />
If the guest OS still does not recognize the TPM device, try adjusting the ''CPU Models and Topology'' options, as some settings there can cause this problem.<br />
<br />
== Sharing data between host and guest ==<br />
<br />
=== Network ===<br />
<br />
Data can be shared between the host and guest OS using any network protocol that can transfer files, such as [[NFS]], [[SMB]], [[Wikipedia:Network block device|NBD]], HTTP, [[Very Secure FTP Daemon|FTP]], or [[SSH]], provided that you have set up the network appropriately and enabled the appropriate services.<br />
<br />
The default user-mode networking allows the guest to access the host OS at the IP address 10.0.2.2. Any servers that you are running on your host OS, such as a SSH server or SMB server, will be accessible at this IP address. So on the guests, you can mount directories exported on the host via [[SMB]] or [[NFS]], or you can access the host's HTTP server, etc.<br />
It will not be possible for the host OS to access servers running on the guest OS, but this can be done with other network configurations (see [[#Tap networking with QEMU]]).<br />
<br />
=== QEMU's port forwarding ===<br />
<br />
{{Note|QEMU's port forwarding is IPv4-only. IPv6 port forwarding is not implemented and the last patches were proposed in 2018.[https://lore.kernel.org/qemu-devel/1540512223-21199-1-git-send-email-max7255@yandex-team.ru/T/#u]}}<br />
<br />
QEMU can forward ports from the host to the guest to enable e.g. connecting from the host to an SSH server running on the guest.<br />
<br />
For example, to bind port 60022 on the host with port 22 (SSH) on the guest, start QEMU with a command like:<br />
<br />
$ qemu-system-x86_64 ''disk_image'' -nic user,hostfwd=tcp::60022-:22<br />
<br />
Make sure ''sshd'' is running on the guest and connect with:<br />
<br />
$ ssh ''guest-user''@127.0.0.1 -p 60022<br />
<br />
You can use [[SSHFS]] to mount the guest's file system at the host for shared read and write access.<br />
<br />
To forward several ports, simply repeat the {{ic|hostfwd}} option in the {{ic|-nic}} argument, e.g. for VNC's port:<br />
<br />
$ qemu-system-x86_64 ''disk_image'' -nic user,hostfwd=tcp::60022-:22,hostfwd=tcp::5900-:5900<br />
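In a launcher script, the {{ic|-nic}} option string can be assembled from a list of ''hostport:guestport'' pairs; a minimal sketch (the function name is made up for illustration):<br />
<br />
```shell
#!/bin/sh
# Build a "user,..." -nic option with one hostfwd entry per port pair.
build_nic_opt() {
    opt="user"
    for pair in "$@"; do
        host=${pair%%:*}    # part before the colon: host port
        guest=${pair##*:}   # part after the colon: guest port
        opt="$opt,hostfwd=tcp::${host}-:${guest}"
    done
    printf '%s\n' "$opt"
}

# SSH on 60022 and VNC on 5900, as in the examples above:
build_nic_opt 60022:22 5900:5900
# qemu-system-x86_64 disk_image -nic "$(build_nic_opt 60022:22 5900:5900)"
```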
<br />
=== QEMU's built-in SMB server ===<br />
<br />
QEMU's documentation says it has a "built-in" SMB server, but actually it just starts up [[Samba]] on the host with an automatically generated {{ic|smb.conf}} file located in {{ic|/tmp/qemu-smb.''random_string''}} and makes it accessible to the guest at a different IP address (10.0.2.4 by default). This only works for user networking, and is useful when you do not want to start the normal [[Samba]] service on the host, which the guest can also access if you have set up shares on it.<br />
<br />
Only a single directory can be set as shared with the option {{ic|1=smb=}}, but adding more directories (even while the virtual machine is running) could be as easy as creating symbolic links in the shared directory if QEMU configured SMB to follow symbolic links. It does not do so, but the configuration of the running SMB server can be changed as described below.<br />
<br />
''Samba'' must be installed on the host. To enable this feature, start QEMU with a command like:<br />
<br />
$ qemu-system-x86_64 -nic user,id=nic0,smb=''shared_dir_path'' ''disk_image''<br />
<br />
where {{ic|''shared_dir_path''}} is a directory that you want to share between the guest and host.<br />
<br />
Then, in the guest, you will be able to access the shared directory on the host 10.0.2.4 with the share name "qemu". For example, in Windows Explorer you would go to {{ic|\\10.0.2.4\qemu}}.<br />
<br />
{{Note|<br />
* If you are using sharing options multiple times like {{ic|1=-net user,smb=''shared_dir_path1'' -net user,smb=''shared_dir_path2''}} or {{ic|1=-net user,smb=''shared_dir_path1'',smb=''shared_dir_path2''}} then it will share only the last defined one.<br />
* If you cannot access the shared folder and the guest system is Windows, check that the [http://ecross.mvps.org/howto/enable-netbios-over-tcp-ip-with-windows.htm NetBIOS protocol is enabled] and that a firewall does not block [https://technet.microsoft.com/en-us/library/cc940063.aspx ports] used by the NetBIOS protocol.<br />
* If you cannot access the shared folder and the guest system is Windows 10 Enterprise or Education or Windows Server 2016, [https://support.microsoft.com/en-us/help/4046019 enable guest access].<br />
* If you use [[#Tap networking with QEMU]], use {{ic|1=-device virtio-net,netdev=vmnic -netdev user,id=vmnic,smb=''shared_dir_path''}} to get SMB.<br />
}}<br />
<br />
One way to share multiple directories and to add or remove them while the virtual machine is running, is to share an empty directory and create/remove symbolic links to the directories in the shared directory. For this to work, the configuration of the running SMB server can be changed with the following script, which also allows the execution of files on the guest that are not set executable on the host:<br />
<br />
#!/bin/sh<br />
eval $(ps h -C smbd -o pid,args | grep /tmp/qemu-smb | gawk '{print "pid="$1";conf="$6}')<br />
echo "[global]<br />
allow insecure wide links = yes<br />
[qemu]<br />
follow symlinks = yes<br />
wide links = yes<br />
acl allow execute always = yes" >> "$conf"<br />
# in case the change is not detected automatically:<br />
smbcontrol --configfile="$conf" "$pid" reload-config<br />
<br />
This can be applied to the running server started by qemu only after the guest has connected to the network drive the first time. An alternative to this method is to add additional shares to the configuration file like so:<br />
<br />
echo "[''myshare'']<br />
path=''another_path''<br />
read only=no<br />
guest ok=yes<br />
force user=''username''" >> $conf<br />
<br />
This share will be available on the guest as {{ic|\\10.0.2.4\''myshare''}}.<br />
<br />
=== Using filesystem passthrough and VirtFS ===<br />
<br />
See the [https://wiki.qemu.org/Documentation/9psetup QEMU documentation].<br />
<br />
=== Host file sharing with virtiofsd ===<br />
<br />
virtiofsd is shipped with the QEMU package. Documentation is available [https://qemu-stsquad.readthedocs.io/en/docs-next/tools/virtiofsd.html online] or, with {{Pkg|qemu-docs}} installed, at {{ic|/usr/share/doc/qemu/qemu/tools/virtiofsd.html}} on the local file system.<br />
<br />
Add the user that runs QEMU to the {{ic|kvm}} [[user group]], because it needs to access the virtiofsd socket. You might have to log out and back in for the change to take effect.<br />
<br />
{{Accuracy|Running services as root is not secure. Also the process should be wrapped in a systemd service.}}<br />
<br />
Start ''virtiofsd'' as root:<br />
<br />
# /usr/lib/qemu/virtiofsd --socket-path=/var/run/qemu-vm-001.sock -o source=/tmp/vm-001 -o cache=always<br />
<br />
where<br />
<br />
* {{ic|/var/run/qemu-vm-001.sock}} is a socket file,<br />
* {{ic|/tmp/vm-001}} is a shared directory between host and guest vm.<br />
<br />
The created socket file has root-only access permission. Give the {{ic|kvm}} group access to it with:<br />
<br />
# chgrp kvm /var/run/qemu-vm-001.sock; chmod g+rw /var/run/qemu-vm-001.sock<br />
<br />
Add the following configuration options when starting VM:<br />
<br />
-object memory-backend-memfd,id=mem,size=4G,share=on \<br />
-numa node,memdev=mem \<br />
-chardev socket,id=char0,path=/var/run/qemu-vm-001.sock \<br />
-device vhost-user-fs-pci,chardev=char0,tag=myfs<br />
<br />
where<br />
<br />
{{Expansion|Explain the remaining options (or remove them if they are not necessary).}}<br />
<br />
* {{ic|1=size=4G}} must match the memory size specified with the {{ic|-m 4G}} option,<br />
* {{ic|/var/run/qemu-vm-001.sock}} points to the socket file started earlier,<br />
<br />
{{Style|The section should not be specific to Windows.}}<br />
<br />
Remember that the guest must be configured to enable sharing. For Windows, there are [https://virtio-fs.gitlab.io/howto-windows.html instructions]. Once configured, Windows will have a {{ic|Z:}} drive mapped automatically with the shared directory content.<br />
<br />
Your Windows 10 guest system is properly configured if it has:<br />
<br />
* the ''VirtioFSSService'' Windows service,<br />
* the ''WinFsp.Launcher'' Windows service,<br />
* the ''VirtIO FS Device'' driver under "System devices" in the Windows Device Manager.<br />
<br />
If the above are installed and the {{ic|Z:}} drive is still not listed, try repairing ''Virtio-win-guest-tools'' in the Windows Add/Remove Programs list.<br />
<br />
=== Mounting a partition of the guest on the host ===<br />
<br />
It can be useful to mount a drive image under the host system, as a way to transfer files in and out of the guest. This should only be done when the virtual machine is not running.<br />
<br />
The procedure to mount the drive on the host depends on the type of qemu image, ''raw'' or ''qcow2''. We detail thereafter the steps to mount a drive in the two formats in [[#Mounting a partition from a raw image]] and [[#Mounting a partition from a qcow2 image]]. For the full documentation see [[Wikibooks:QEMU/Images#Mounting an image on the host]].<br />
<br />
{{Warning|You must unmount the partitions before running the virtual machine again. Otherwise, data corruption is very likely to occur.}}<br />
<br />
==== Mounting a partition from a raw image ====<br />
<br />
It is possible to mount partitions that are inside a raw disk image file by setting them up as loopback devices.<br />
<br />
===== With manually specifying byte offset =====<br />
<br />
One way to mount a disk image partition is to mount the disk image at a certain offset using a command like the following:<br />
<br />
# mount -o loop,offset=32256 ''disk_image'' ''mountpoint''<br />
<br />
The {{ic|1=offset=32256}} option is actually passed to the {{ic|losetup}} program to set up a loopback device that starts at byte offset 32256 of the file and continues to the end. This loopback device is then mounted. You may also use the {{ic|sizelimit}} option to specify the exact size of the partition, but this is usually unnecessary.<br />
<br />
Depending on your disk image, the needed partition may not start at offset 32256. Run {{ic|fdisk -l ''disk_image''}} to see the partitions in the image. fdisk gives the start and end offsets in 512-byte sectors, so multiply by 512 to get the correct offset to pass to {{ic|mount}}.<br />
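The sector-to-byte arithmetic can be sketched in shell (the start sector below is an example value as it would appear in a typical {{ic|fdisk -l}} listing):<br />
<br />
```shell
# fdisk reports partition boundaries in 512-byte sectors.
start_sector=2048   # example: read this from 'fdisk -l disk_image'
sector_size=512
offset=$(( start_sector * sector_size ))
echo "$offset"      # byte offset to pass to mount -o loop,offset=...
```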
<br />
===== With loop module autodetecting partitions =====<br />
<br />
The Linux loop driver actually supports partitions in loopback devices, but it is disabled by default. To enable it, do the following:<br />
<br />
* Get rid of all your loopback devices (unmount all mounted images, etc.).<br />
* [[Kernel modules#Manual module handling|Unload]] the {{ic|loop}} kernel module, and load it with the {{ic|1=max_part=15}} parameter set. Additionally, the maximum number of loop devices can be controlled with the {{ic|max_loop}} parameter.<br />
<br />
{{Tip|You can put an entry in {{ic|/etc/modprobe.d}} to load the loop module with {{ic|1=max_part=15}} every time, or you can put {{ic|1=loop.max_part=15}} on the kernel command-line, depending on whether you have the {{ic|loop.ko}} module built into your kernel or not.}}<br />
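For the first option in the tip above, a drop-in file under {{ic|/etc/modprobe.d/}} (the file name is arbitrary) could look like:<br />
<br />
{{hc|/etc/modprobe.d/loop.conf|2=options loop max_part=15}}<br />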
<br />
Set up your image as a loopback device:<br />
<br />
# losetup -f -P ''disk_image''<br />
<br />
Then, if the device created was {{ic|/dev/loop0}}, additional devices {{ic|/dev/loop0pX}} will have been automatically created, where X is the number of the partition. These partition loopback devices can be mounted directly. For example:<br />
<br />
# mount /dev/loop0p1 ''mountpoint''<br />
<br />
To mount the disk image with ''udisksctl'', see [[Udisks#Mount loop devices]].<br />
<br />
===== With kpartx =====<br />
<br />
''kpartx'' from the {{Pkg|multipath-tools}} package can read a partition table on a device and create a new device for each partition. For example:<br />
<br />
# kpartx -a ''disk_image''<br />
<br />
This will set up the loopback device and create the necessary partition devices in {{ic|/dev/mapper/}}.<br />
<br />
==== Mounting a partition from a qcow2 image ====<br />
<br />
We will use {{ic|qemu-nbd}}, which lets us use the NBD (''network block device'') protocol to share the disk image.<br />
<br />
First, we need the ''nbd'' module loaded:<br />
<br />
# modprobe nbd max_part=16<br />
<br />
Then, we can share the disk and create the device entries:<br />
<br />
# qemu-nbd -c /dev/nbd0 ''/path/to/image.qcow2''<br />
<br />
Discover the partitions:<br />
<br />
# partprobe /dev/nbd0<br />
<br />
''fdisk'' can be used to get information regarding the different partitions in {{ic|''nbd0''}}:<br />
<br />
{{hc|# fdisk -l /dev/nbd0|2=<br />
Disk /dev/nbd0: 25.2 GiB, 27074281472 bytes, 52879456 sectors<br />
Units: sectors of 1 * 512 = 512 bytes<br />
Sector size (logical/physical): 512 bytes / 512 bytes<br />
I/O size (minimum/optimal): 512 bytes / 512 bytes<br />
Disklabel type: dos<br />
Disk identifier: 0xa6a4d542<br />
<br />
Device Boot Start End Sectors Size Id Type<br />
/dev/nbd0p1 * 2048 1026047 1024000 500M 7 HPFS/NTFS/exFAT<br />
/dev/nbd0p2 1026048 52877311 51851264 24.7G 7 HPFS/NTFS/exFAT}}<br />
<br />
Then mount any partition of the drive image, for example the partition 2:<br />
<br />
# mount /dev/nbd0'''p2''' ''mountpoint''<br />
<br />
After use, it is important to unmount the image and reverse the previous steps, i.e. unmount the partition and disconnect the nbd device:<br />
<br />
# umount ''mountpoint''<br />
# qemu-nbd -d /dev/nbd0<br />
<br />
=== Using any real partition as the single primary partition of a hard disk image ===<br />
<br />
Sometimes, you may wish to use one of your system partitions from within QEMU. Using a raw partition for a virtual machine will improve performance, as the read and write operations do not go through the file system layer on the physical host. Such a partition also provides a way to share data between the host and guest.<br />
<br />
In Arch Linux, device files for raw partitions are, by default, owned by ''root'' and the ''disk'' group. If you would like to have a non-root user be able to read and write to a raw partition, you must either change the owner of the partition's device file to that user, add that user to the ''disk'' group, or use [[ACL]] for more fine-grained access control.<br />
<br />
{{Warning|<br />
* Although it is possible, it is not recommended to allow virtual machines to alter critical data on the host system, such as the root partition.<br />
* You must not mount a file system on a partition read-write on both the host and the guest at the same time. Otherwise, data corruption will result.<br />
}}<br />
<br />
After doing so, you can attach the partition to a QEMU virtual machine as a virtual disk.<br />
<br />
However, things are a little more complicated if you want to have the ''entire'' virtual machine contained in a partition. In that case, there would be no disk image file to actually boot the virtual machine since you cannot install a boot loader to a partition that is itself formatted as a file system and not as a partitioned device with an MBR. Such a virtual machine can be booted either by: [[#Specifying kernel and initrd manually]], [[#Simulating a virtual disk with MBR]], [[#Using the device-mapper]], [[#Using a linear RAID]] or [[#Using a Network Block Device]].<br />
<br />
==== Specifying kernel and initrd manually ====<br />
<br />
QEMU supports loading [[Kernels|Linux kernels]] and [[initramfs|init ramdisks]] directly, thereby circumventing boot loaders such as [[GRUB]]. It then can be launched with the physical partition containing the root file system as the virtual disk, which will not appear to be partitioned. This is done by issuing a command similar to the following:<br />
<br />
{{Note|In this example, it is the '''host's''' images that are being used, not the guest's. If you wish to use the guest's images, either mount {{ic|/dev/sda3}} read-only (to protect the file system from the host) and specify the {{ic|/full/path/to/images}} or use some kexec hackery in the guest to reload the guest's kernel (extends boot time). }}<br />
<br />
$ qemu-system-x86_64 -kernel /boot/vmlinuz-linux -initrd /boot/initramfs-linux.img -append root=/dev/sda /dev/sda3<br />
<br />
In the above example, the physical partition being used for the guest's root file system is {{ic|/dev/sda3}} on the host, but it shows up as {{ic|/dev/sda}} on the guest.<br />
<br />
You may, of course, specify any kernel and initrd that you want, and not just the ones that come with Arch Linux.<br />
<br />
When there are multiple [[kernel parameters]] to be passed to the {{ic|-append}} option, they need to be quoted using single or double quotes. For example:<br />
<br />
... -append 'root=/dev/sda1 console=ttyS0'<br />
<br />
==== Simulating a virtual disk with MBR ====<br />
<br />
A more complicated way to have a virtual machine use a physical partition, while keeping that partition formatted as a file system (instead of letting the guest partition it as if it were a whole disk), is to simulate an MBR for it so that it can boot using a boot loader such as GRUB.<br />
<br />
For the following, suppose you have a plain, unmounted {{ic|/dev/hda''N''}} partition with some file system on it you wish to make part of a QEMU disk image. The trick is to dynamically prepend a master boot record (MBR) to the real partition you wish to embed in a QEMU raw disk image. More generally, the partition can be any part of a larger simulated disk, in particular a block device that simulates the original physical disk but only exposes {{ic|/dev/hda''N''}} to the virtual machine.<br />
<br />
A virtual disk of this type can be represented by a VMDK file that contains references to (a copy of) the MBR and the partition, but QEMU does not support this VMDK format. For instance, a virtual disk [https://www.virtualbox.org/manual/ch09.html#rawdisk created by]<br />
<br />
$ VBoxManage internalcommands createrawvmdk -filename ''/path/to/file.vmdk'' -rawdisk /dev/hda<br />
<br />
will be rejected by QEMU with the error message<br />
<br />
Unsupported image type 'partitionedDevice'<br />
<br />
Note that {{ic|VBoxManage}} creates two files, {{ic|''file.vmdk''}} and {{ic|''file-pt.vmdk''}}, the latter being a copy of the MBR, to which the text file {{ic|file.vmdk}} points. Read operations outside the target partition or the MBR would give zeros, while written data would be discarded.<br />
<br />
===== Using the device-mapper =====<br />
<br />
A method that is similar to the use of a VMDK descriptor file uses the [https://docs.kernel.org/admin-guide/device-mapper/index.html device-mapper] to prepend a loop device attached to the MBR file to the target partition. In case we do not need our virtual disk to have the same size as the original, we first create a file to hold the MBR:<br />
<br />
$ dd if=/dev/zero of=''/path/to/mbr'' count=2048<br />
<br />
Here, a 1 MiB (2048 * 512 bytes) file is created in accordance with partition alignment policies used by modern disk partitioning tools. For compatibility with older partitioning software, 63 sectors instead of 2048 might be required. The MBR only needs a single 512 bytes block, the additional free space can be used for a BIOS boot partition and, in the case of a hybrid partitioning scheme, for a GUID Partition Table. Then, we attach a loop device to the MBR file:<br />
<br />
# losetup --show -f ''/path/to/mbr''<br />
/dev/loop0<br />
<br />
In this example, the resulting device is {{ic|/dev/loop0}}. The device mapper is now used to join the MBR and the partition:<br />
<br />
# echo "0 2048 linear /dev/loop0 0<br />
2048 `blockdev --getsz /dev/hda''N''` linear /dev/hda''N'' 0" | dmsetup create qemu<br />
<br />
The resulting {{ic|/dev/mapper/qemu}} is what we will use as a QEMU raw disk image. Additional steps are required to create a partition table (see the section that describes the use of a linear RAID for an example) and boot loader code on the virtual disk (which will be stored in {{ic|''/path/to/mbr''}}).<br />
<br />
The following setup is an example where the position of {{ic|/dev/hda''N''}} on the virtual disk is to be the same as on the physical disk and the rest of the disk is hidden, except for the MBR, which is provided as a copy:<br />
<br />
# dd if=/dev/hda count=1 of=''/path/to/mbr''<br />
# loop=`losetup --show -f ''/path/to/mbr''`<br />
# start=`blockdev --report /dev/hda''N'' | tail -1 | awk '{print $5}'`<br />
# size=`blockdev --getsz /dev/hda''N''`<br />
# disksize=`blockdev --getsz /dev/hda`<br />
# echo "0 1 linear $loop 0<br />
1 $((start-1)) zero<br />
$start $size linear /dev/hda''N'' 0<br />
$((start+size)) $((disksize-start-size)) zero" | dmsetup create qemu<br />
<br />
The table provided as standard input to {{ic|dmsetup}} has a format similar to the table in a VMDK descriptor file produced by {{ic|VBoxManage}} and can alternatively be loaded from a file with {{ic|dmsetup create qemu --table ''table_file''}}. To the virtual machine, only {{ic|/dev/hda''N''}} is accessible, while the rest of the hard disk reads as zeros and discards written data, except for the first sector. We can print the table for {{ic|/dev/mapper/qemu}} with {{ic|dmsetup table qemu}} (use {{ic|udevadm info -rq name /sys/dev/block/''major'':''minor''}} to translate {{ic|''major'':''minor''}} to the corresponding {{ic|/dev/''blockdevice''}} name). Use {{ic|dmsetup remove qemu}} and {{ic|losetup -d $loop}} to delete the created devices.<br />
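Before loading such a table, it can help to generate and inspect it first. The following sketch (not from the original article; the partition numbers are placeholders) builds the four-line table from the partition geometry, so it can later be fed to {{ic|dmsetup create qemu --table ''table_file''}}:<br />

```shell
# Build the device-mapper table for an MBR-plus-single-partition layout.
# All numbers are 512-byte sectors; only printing here, nothing is loaded.
make_table() {
    # $1 = partition start sector, $2 = partition size in sectors,
    # $3 = whole-disk size in sectors, $4 = partition device,
    # $5 = loop device holding the MBR copy
    start=$1 size=$2 disksize=$3 dev=$4 loop=$5
    printf '0 1 linear %s 0\n' "$loop"                  # sector 0: the MBR copy
    printf '1 %s zero\n' "$((start - 1))"               # gap reads as zeros
    printf '%s %s linear %s 0\n' "$start" "$size" "$dev"
    printf '%s %s zero\n' "$((start + size))" "$((disksize - start - size))"
}

# example with placeholder geometry (2048-sector start, ~100 MiB partition)
make_table 2048 204800 500118192 /dev/hdaN /dev/loop0
```

The output can be saved to a file and reviewed before handing it to {{ic|dmsetup}}.<br />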
<br />
A situation where this example would be useful is an existing Windows XP installation in a multi-boot configuration and maybe a hybrid partitioning scheme (on the physical hardware, Windows XP could be the only operating system that uses the MBR partition table, while more modern operating systems installed on the same computer could use the GUID Partition Table). Windows XP supports hardware profiles, so that the same installation can be used with different hardware configurations alternately (in this case bare metal vs. virtual), with Windows needing to install drivers for newly detected hardware only once per profile. Note that in this example the boot loader code in the copied MBR needs to be updated to directly load Windows XP from {{ic|/dev/hda''N''}} instead of trying to start the multi-boot capable boot loader (like GRUB) present in the original system. Alternatively, a copy of the boot partition containing the boot loader installation can be included in the virtual disk the same way as the MBR.<br />
<br />
===== Using a linear RAID =====<br />
<br />
You can also do this using software [[RAID]] in linear mode (you need the {{ic|linear.ko}} kernel driver) and a loopback device: <br />
<br />
First, you create some small file to hold the MBR:<br />
<br />
$ dd if=/dev/zero of=''/path/to/mbr'' count=32<br />
<br />
Here, a 16 KiB (32 * 512 bytes) file is created. It is important not to make it too small (even though the MBR only needs a single 512-byte block), since the smaller it is, the smaller the chunk size of the software RAID device will have to be, which could have an impact on performance. Then, you set up a loopback device to the MBR file:<br />
<br />
 # losetup --show -f ''/path/to/mbr''<br />
<br />
Let us assume the resulting device is {{ic|/dev/loop0}} (this will be the case if no other loop devices are in use). The next step is to create the "merged" MBR + {{ic|/dev/hda''N''}} disk image using software RAID:<br />
<br />
# modprobe linear<br />
# mdadm --build --verbose /dev/md0 --chunk=16 --level=linear --raid-devices=2 /dev/loop0 /dev/hda''N''<br />
<br />
The resulting {{ic|/dev/md0}} is what you will use as a QEMU raw disk image (do not forget to set the permissions so that the emulator can access it). The last (and somewhat tricky) step is to set the disk configuration (disk geometry and partition table) so that the primary partition start point in the MBR matches the one of {{ic|/dev/hda''N''}} inside {{ic|/dev/md0}} (an offset of exactly 16 * 512 = 16384 bytes in this example). Do this using {{ic|fdisk}} on the host machine, not in the emulator: the default raw disk detection routine from QEMU often results in offsets that are not rounded to kibibytes (such as 31.5 KiB, as in the previous section), which cannot be managed by the software RAID code. Hence, from the host:<br />
<br />
# fdisk /dev/md0<br />
<br />
Press {{ic|x}} to enter the expert menu. Set the number of 's'ectors per track so that the size of one cylinder matches the size of your MBR file. For two heads and a sector size of 512 bytes, the number of sectors per track should be 16, giving cylinders of size 2 * 16 * 512 = 16 KiB.<br />
<br />
Now, press {{ic|r}} to return to the main menu.<br />
<br />
Press {{ic|p}} and check that the cylinder size is now 16 KiB.<br />
<br />
Now, create a single primary partition corresponding to {{ic|/dev/hda''N''}}. It should start at cylinder 2 and end at the end of the disk (note that the number of cylinders now differs from what it was when you entered fdisk).<br />
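As a quick sanity check on the geometry above (a sketch, not part of the original procedure), the cylinder size and the partition offset can be verified with shell arithmetic:<br />

```shell
# geometry used above: 2 heads, 16 sectors/track, 512-byte sectors
heads=2
sectors_per_track=16
sector_size=512

cylinder=$(( heads * sectors_per_track * sector_size ))
echo "$cylinder"                 # 16384 bytes = 16 KiB, one full cylinder

# the MBR file is 32 sectors, i.e. exactly one cylinder, so the partition
# starting at cylinder 2 begins right where /dev/hdaN starts in /dev/md0:
mbr_sectors=32
echo $(( mbr_sectors * sector_size ))   # 16384
```

If the two numbers do not match, the partition inside {{ic|/dev/md0}} will not line up with {{ic|/dev/hda''N''}}.<br />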
<br />
Finally, 'w'rite the result to the file: you are done. You now have a partition you can mount directly from your host, as well as part of a QEMU disk image:<br />
<br />
$ qemu-system-x86_64 -hdc /dev/md0 ''[...]''<br />
<br />
You can, of course, safely set any boot loader on this disk image using QEMU, provided the original {{ic|/dev/hda''N''}} partition contains the necessary tools.<br />
<br />
===== Using a Network Block Device =====<br />
<br />
With [https://docs.kernel.org/admin-guide/blockdev/nbd.html Network Block Device], Linux can use a remote server as one of its block devices. You may use {{ic|nbd-server}} (from the {{pkg|nbd}} package) to create an MBR wrapper for QEMU.<br />
<br />
Assuming you have already set up your MBR wrapper file as above, rename it to {{ic|wrapper.img.0}}. Then create a symbolic link named {{ic|wrapper.img.1}} in the same directory, pointing to your partition. Then put the following script in the same directory:<br />
<br />
#!/bin/sh<br />
dir="$(realpath "$(dirname "$0")")"<br />
cat >wrapper.conf <<EOF<br />
[generic]<br />
allowlist = true<br />
listenaddr = 127.713705<br />
port = 10809<br />
<br />
[wrap]<br />
exportname = $dir/wrapper.img<br />
multifile = true<br />
EOF<br />
<br />
nbd-server \<br />
-C wrapper.conf \<br />
-p wrapper.pid \<br />
"$@"<br />
<br />
The {{ic|.0}} and {{ic|.1}} suffixes are essential; the rest can be changed. After running the above script (which you may need to do as root to make sure nbd-server is able to access the partition), you can launch QEMU with:<br />
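The multifile layout can be sketched as follows (an illustration in a scratch directory, not the original instructions; the partition device name is a placeholder and the MBR file here is an empty 1 MiB stand-in):<br />

```shell
# nbd-server with multifile=true concatenates wrapper.img.0, wrapper.img.1, ...
# in suffix order, so .0 must be the MBR wrapper and .1 the partition.
dir=$(mktemp -d)
cd "$dir"

# stand-in MBR wrapper (2048 sectors * 512 bytes = 1 MiB)
dd if=/dev/zero of=wrapper.img.0 bs=512 count=2048 2>/dev/null

# symlink to the partition device (hypothetical name, adjust to your disk)
ln -s /dev/hdaN wrapper.img.1

ls wrapper.img.*
```

On a real system the symlink target must be the actual partition, and {{ic|nbd-server}} must be able to open it.<br />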
<br />
qemu-system-x86_64 -drive file=nbd:127.713705:10809:exportname=wrap ''[...]''<br />
<br />
=== Using an entire physical disk device inside the VM ===<br />
<br />
{{Style|Duplicates [[#Using any real partition as the single primary partition of a hard disk image]], libvirt instructions do not belong to this page.}}<br />
<br />
You may have a second HDD or SSD with a different OS (such as Windows) on it and may want to be able to boot it inside a VM.<br />
Since the disk access is raw, the disk will perform quite well inside the VM.<br />
<br />
==== Windows VM boot prerequisites ====<br />
<br />
Be sure to install the [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/archive-virtio/ virtio drivers] inside the OS on that disk before trying to boot it in the VM.<br />
For Win 7 use version [https://askubuntu.com/questions/1310440/using-virtio-win-drivers-with-win7-sp1-x64 0.1.173-4].<br />
Some singular drivers from newer virtio builds may be used on Win 7 but you will have to install them manually via device manager.<br />
For Win 10 you can use the latest virtio build.<br />
<br />
===== Set up the Windows disk interface drivers =====<br />
<br />
You may get a {{ic|0x0000007B}} bluescreen when trying to boot the VM. This means Windows cannot access the drive during the early boot stage because the disk interface driver it needs is not loaded or is set to start manually.<br />
<br />
The solution is to [https://superuser.com/a/1032769 enable these drivers to start at boot].<br />
<br />
In {{ic|HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services}}, find the folders {{ic|aliide, amdide, atapi, cmdide, iastor (may not exist), iastorV, intelide, LSI_SAS, msahci, pciide and viaide}}.<br />
Inside each of those, set all their "start" values to 0 in order to enable them at boot.<br />
If your drive is a PCIe NVMe drive, also enable that driver (should it exist).<br />
<br />
==== Find the unique path of your disk ====<br />
<br />
Run {{ic|ls /dev/disk/by-id/}} and pick out the ID of the drive you want to pass to the VM, for example {{ic|ata-TS512GMTS930L_C199211383}}.<br />
Prepend {{ic|/dev/disk/by-id/}} to that ID, resulting in {{ic|/dev/disk/by-id/ata-TS512GMTS930L_C199211383}}.<br />
That is the unique path to the disk.<br />
<br />
==== Add the disk in QEMU CLI ====<br />
<br />
On the QEMU command line, that would be:<br />
<br />
{{ic|1=-drive file=/dev/disk/by-id/ata-TS512GMTS930L_C199211383,format=raw,media=disk}}<br />
<br />
Just modify "file=" to be the unique path of your drive.<br />
<br />
==== Add the disk in libvirt ====<br />
<br />
In the libvirt domain XML, that translates to:<br />
<br />
{{hc|$ virsh edit ''vmname''|<nowiki><br />
...<br />
<disk type="block" device="disk"><br />
<driver name="qemu" type="raw" cache="none" io="native"/><br />
<source dev="/dev/disk/by-id/ata-TS512GMTS930L_C199211383"/><br />
<target dev="sda" bus="sata"/><br />
<address type="drive" controller="0" bus="0" target="0" unit="0"/><br />
</disk><br />
...<br />
</nowiki>}}<br />
<br />
Just modify "source dev" to be the unique path of your drive.<br />
<br />
==== Add the disk in virt-manager ====<br />
<br />
When creating a VM, select "import existing drive" and paste that unique path.<br />
If you already have the VM, add a device, choose storage, then select or create custom storage.<br />
Now paste the unique path.<br />
<br />
== Networking ==<br />
<br />
{{Style|Network topologies (sections [[#Host-only networking]], [[#Internal networking]] and info spread out across other sections) should not be described alongside the various virtual interfaces implementations, such as [[#User-mode networking]], [[#Tap networking with QEMU]], [[#Networking with VDE2]].}}<br />
<br />
The performance of virtual networking should be better with tap devices and bridges than with user-mode networking or vde because tap devices and bridges are implemented in-kernel.<br />
<br />
In addition, networking performance can be improved by assigning virtual machines a [https://wiki.libvirt.org/page/Virtio virtio] network device rather than the default emulation of an e1000 NIC. See [[#Installing virtio drivers]] for more information.<br />
<br />
=== Link-level address caveat ===<br />
<br />
When given the {{ic|-net nic}} argument, QEMU will, by default, assign the virtual machine a network interface with the link-level address {{ic|52:54:00:12:34:56}}. However, when using bridged networking with multiple virtual machines, it is essential that each virtual machine has a unique link-level (MAC) address on the virtual machine side of the tap device. Otherwise, the bridge will not work correctly, because it will receive packets from multiple sources that have the same link-level address. This problem occurs even if the tap devices themselves have unique link-level addresses because the source link-level address is not rewritten as packets pass through the tap device.<br />
<br />
Make sure that each virtual machine has a unique link-level address, which should always start with {{ic|52:54:}}. Use the following option, replacing each ''X'' with an arbitrary hexadecimal digit:<br />
<br />
$ qemu-system-x86_64 -net nic,macaddr=52:54:''XX:XX:XX:XX'' -net vde ''disk_image''<br />
<br />
Generating unique link-level addresses can be done in several ways:<br />
<br />
* Manually specify a unique link-level address for each NIC. The benefit is that the DHCP server will assign the same IP address each time the virtual machine is run, but this is unusable for a large number of virtual machines.<br />
* Generate a random link-level address each time the virtual machine is run. There is practically zero probability of collisions, but the downside is that the DHCP server will assign a different IP address each time. You can use the following command in a script to generate a random link-level address in a {{ic|macaddr}} variable:<br />
<br />
{{bc|1=<br />
printf -v macaddr "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))<br />
qemu-system-x86_64 -net nic,macaddr="$macaddr" -net vde ''disk_image''<br />
}}<br />
<br />
* Use the following script {{ic|qemu-mac-hasher.py}} to generate the link-level address from the virtual machine name using a hashing function. Given that the names of virtual machines are unique, this method combines the benefits of the aforementioned methods: it generates the same link-level address each time the script is run, yet it preserves the practically zero probability of collisions.<br />
<br />
{{hc|qemu-mac-hasher.py|2=<br />
#!/usr/bin/env python<br />
# usage: qemu-mac-hasher.py <VMName><br />
<br />
import sys<br />
import zlib<br />
<br />
crc = "%08x" % zlib.crc32(sys.argv[1].encode("utf-8"))<br />
print("52:54:%s%s:%s%s:%s%s:%s%s" % tuple(crc))}}<br />
<br />
In a script, you can use for example:<br />
<br />
vm_name="''VM Name''"<br />
qemu-system-x86_64 -name "$vm_name" -net nic,macaddr=$(qemu-mac-hasher.py "$vm_name") -net vde ''disk_image''<br />
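If Python is not available, a similar stable address can be derived in plain shell with {{ic|cksum}} (a sketch; note that cksum's POSIX CRC differs from zlib's CRC-32, so the resulting addresses will not match those of the Python script):<br />

```shell
# derive a stable, per-VM MAC in the 52:54: locally-administered prefix
# from the VM name; the same name always yields the same address
vm_mac() {
    crc=$(printf %s "$1" | cksum | cut -d ' ' -f 1)
    printf '52:54:%02x:%02x:%02x:%02x\n' \
        $(( (crc >> 24) & 0xff )) $(( (crc >> 16) & 0xff )) \
        $(( (crc >> 8) & 0xff ))  $(( crc & 0xff ))
}

vm_mac "myvm"
```

It can be used the same way as the Python script: {{ic|1=-net nic,macaddr=$(vm_mac "$vm_name")}}.<br />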
<br />
=== User-mode networking ===<br />
<br />
By default, without any {{ic|-netdev}} arguments, QEMU will use user-mode networking with a built-in DHCP server. Your virtual machines will be assigned an IP address when they run their DHCP client, and they will be able to access the physical host's network through IP masquerading done by QEMU.<br />
<br />
{{Note|ICMPv6 will not work, as support for it is not implemented: {{ic|Slirp: external icmpv6 not supported yet}}. [[Ping]]ing an IPv6 address will not work.}}<br />
<br />
This default configuration allows your virtual machines to easily access the Internet, provided that the host is connected to it, but the virtual machines will not be directly visible on the external network, nor will virtual machines be able to talk to each other if you start up more than one concurrently.<br />
<br />
QEMU's user-mode networking can offer more capabilities such as built-in TFTP or SMB servers, redirecting host ports to the guest (for example to allow SSH connections to the guest) or attaching guests to VLANs so that they can talk to each other. See the QEMU documentation on the {{ic|-net user}} flag for more details.<br />
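For example, to allow SSH connections to the guest, host port 2222 can be redirected to the guest's port 22 (the disk image name is a placeholder):<br />
<br />
 $ qemu-system-x86_64 -nic user,model=virtio-net-pci,hostfwd=tcp::2222-:22 ''disk_image''<br />
<br />
While the guest is running, it is then reachable from the host with {{ic|ssh -p 2222 ''user''@localhost}}.<br />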
<br />
However, user-mode networking has limitations in both utility and performance. More advanced network configurations require the use of tap devices or other methods.<br />
<br />
{{Note|If the host system uses [[systemd-networkd]], make sure to symlink the {{ic|/etc/resolv.conf}} file as described in [[systemd-networkd#Required services and setup]], otherwise the DNS lookup in the guest system will not work.}}<br />
<br />
{{Tip|To use the virtio driver with user-mode networking, the option is: {{ic|1=-nic user,model=virtio-net-pci}}.}}<br />
<br />
=== Tap networking with QEMU ===<br />
<br />
[[wikipedia:TUN/TAP|Tap devices]] are a Linux kernel feature that allows you to create virtual network interfaces that appear as real network interfaces. Packets sent to a tap interface are delivered to a userspace program, such as QEMU, that has bound itself to the interface.<br />
<br />
QEMU can use tap networking for a virtual machine so that packets sent to the tap interface will be sent to the virtual machine and appear as coming from a network interface (usually an Ethernet interface) in the virtual machine. Conversely, everything that the virtual machine sends through its network interface will appear on the tap interface.<br />
<br />
Tap devices are supported by the Linux bridge drivers, so it is possible to bridge together tap devices with each other and possibly with other host interfaces such as {{ic|eth0}}. This is desirable if you want your virtual machines to be able to talk to each other, or if you want other machines on your LAN to be able to talk to the virtual machines.<br />
<br />
{{Warning|If you bridge together a tap device and some host interface, such as {{ic|eth0}}, your virtual machines will appear directly on the external network, which will expose them to possible attack. Depending on what resources your virtual machines have access to, you may need to take all the [[Firewalls|precautions]] you normally would take in securing a computer to secure your virtual machines. If the risk is too great, if the virtual machines have few resources, or if you set up multiple virtual machines, a better solution might be to use [[#Host-only networking|host-only networking]] and set up NAT. In this case you only need one firewall on the host instead of multiple firewalls for each guest.}}<br />
<br />
As indicated in the user-mode networking section, tap devices offer higher networking performance than user-mode. If the guest OS supports the virtio network driver, then the networking performance will be increased considerably as well. Supposing the use of the {{ic|tap0}} device, that the virtio driver is used on the guest, and that no scripts are used to help start/stop networking, this is the relevant part of the qemu command:<br />
<br />
-device virtio-net,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no<br />
<br />
If already using a tap device with the virtio networking driver, one can boost networking performance even further by enabling vhost, like:<br />
<br />
-device virtio-net,netdev=network0 -netdev tap,id=network0,ifname=tap0,script=no,downscript=no,vhost=on<br />
<br />
See [https://web.archive.org/web/20160222161955/http://www.linux-kvm.com:80/content/how-maximize-virtio-net-performance-vhost-net] for more information.<br />
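A quick way to see whether vhost acceleration is available on the host is to check for its device node (a sketch; it assumes the module is named {{ic|vhost_net}}, as on stock Arch kernels):<br />

```shell
# /dev/vhost-net is a character device created when the vhost_net module
# is loaded; its absence means vhost=on will fail
if [ -c /dev/vhost-net ]; then
    echo "vhost-net available"
else
    echo "vhost-net not loaded; try: modprobe vhost_net"
fi
```
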
<br />
==== Host-only networking ====<br />
<br />
If the bridge is given an IP address and traffic destined for it is allowed, but no real interface (e.g. {{ic|eth0}}) is connected to the bridge, then the virtual machines will be able to talk to each other and the host system. However, they will not be able to talk to anything on the external network unless you set up IP masquerading on the physical host. This configuration is called ''host-only networking'' by other virtualization software such as [[VirtualBox]].<br />
<br />
{{Tip|<br />
* If you want to set up IP masquerading, e.g. NAT for virtual machines, see the [[Internet sharing#Enable NAT]] page.<br />
* See [[Network bridge]] for information on creating bridge.<br />
* You may want to have a DHCP server running on the bridge interface to service the virtual network. For example, to use the {{ic|172.20.0.1/16}} subnet with [[dnsmasq]] as the DHCP server:<br />
<br />
{{bc|1=<br />
# ip addr add 172.20.0.1/16 dev br0<br />
# ip link set br0 up<br />
# dnsmasq --interface=br0 --bind-interfaces --dhcp-range=172.20.0.2,172.20.255.254<br />
}}<br />
}}<br />
<br />
==== Internal networking ====<br />
<br />
If you do not give the bridge an IP address and add an [[iptables]] rule to drop all traffic to the bridge in the INPUT chain, then the virtual machines will be able to talk to each other, but not to the physical host or to the outside network. This configuration is called ''internal networking'' by other virtualization software such as [[VirtualBox]]. You will need to either assign static IP addresses to the virtual machines or run a DHCP server on one of them.<br />
<br />
By default, iptables drops packets in a bridged network. You may need an iptables rule such as the following to allow packets to be forwarded across the bridge:<br />
<br />
# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT<br />
<br />
==== Bridged networking using qemu-bridge-helper ====<br />
<br />
This method does not require a start-up script and readily accommodates multiple taps and multiple bridges. It uses the {{ic|/usr/lib/qemu/qemu-bridge-helper}} binary, which allows creating tap devices on an existing bridge.<br />
<br />
{{Tip|<br />
* See [[Network bridge]] for information on creating bridge.<br />
* See https://wiki.qemu.org/Features/HelperNetworking for more information on QEMU's network helper.<br />
}}<br />
<br />
First, create a configuration file containing the names of all bridges to be used by QEMU:<br />
<br />
{{hc|/etc/qemu/bridge.conf|<br />
allow ''br0''<br />
allow ''br1''<br />
...}}<br />
<br />
Make sure {{ic|/etc/qemu/}} has {{ic|755}} [[permissions]]. [https://gitlab.com/qemu-project/qemu/-/issues/515 QEMU issues] and [https://www.gns3.com/community/discussions/gns3-cannot-work-with-qemu GNS3 issues] may arise if this is not the case.<br />
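The file and the directory permissions can be set up as follows (a sketch that writes to a scratch directory so it can be tried safely; on a real system drop the {{ic|$root}} prefix and run as root):<br />

```shell
# create the ACL file read by the setuid qemu-bridge-helper, and make the
# directory world-traversable so the helper can reach it
root=$(mktemp -d)              # stand-in for / in this sketch

mkdir -p "$root/etc/qemu"
chmod 755 "$root/etc/qemu"
printf 'allow br0\nallow br1\n' > "$root/etc/qemu/bridge.conf"

cat "$root/etc/qemu/bridge.conf"
```
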
<br />
Now start the VM; the most basic usage to run QEMU with the default network helper and default bridge {{ic|br0}}:<br />
<br />
$ qemu-system-x86_64 -nic bridge ''[...]''<br />
<br />
Using the bridge {{ic|br1}} and the virtio driver:<br />
<br />
$ qemu-system-x86_64 -nic bridge,br=''br1'',model=virtio-net-pci ''[...]''<br />
<br />
==== Creating bridge manually ====<br />
<br />
{{Style|This section needs serious cleanup and may contain out-of-date information.}}<br />
<br />
{{Tip|Since QEMU 1.1, the [https://wiki.qemu.org/Features/HelperNetworking network bridge helper] can set tun/tap up for you without the need for additional scripting. See [[#Bridged networking using qemu-bridge-helper]].}}<br />
<br />
The following describes how to bridge a virtual machine to a host interface such as {{ic|eth0}}, which is probably the most common configuration. This configuration makes it appear that the virtual machine is located directly on the external network, on the same Ethernet segment as the physical host machine.<br />
<br />
We will replace the normal Ethernet adapter with a bridge adapter and bind the normal Ethernet adapter to it.<br />
<br />
* Install {{Pkg|bridge-utils}}, which provides {{ic|brctl}} to manipulate bridges.<br />
<br />
* Enable IPv4 forwarding:<br />
<br />
# sysctl -w net.ipv4.ip_forward=1<br />
<br />
To make the change permanent, change {{ic|1=net.ipv4.ip_forward = 0}} to {{ic|1=net.ipv4.ip_forward = 1}} in {{ic|/etc/sysctl.d/99-sysctl.conf}}.<br />
<br />
* Load the {{ic|tun}} module and configure it to be loaded on boot. See [[Kernel modules]] for details.<br />
<br />
* Optionally create the bridge. See [[Bridge with netctl]] for details. Remember to name your bridge {{ic|br0}}, or change the scripts below to match your bridge's name. In the {{ic|run-qemu}} script below, {{ic|br0}} is set up if it does not already exist, as it is assumed that by default the host does not access the network via the bridge.<br />
<br />
* Create the script that QEMU uses to bring up the tap adapter with {{ic|root:kvm}} 750 permissions:<br />
<br />
{{hc|/etc/qemu-ifup|<nowiki><br />
#!/bin/sh<br />
<br />
echo "Executing /etc/qemu-ifup"<br />
echo "Bringing up $1 for bridged mode..."<br />
sudo /usr/bin/ip link set $1 up promisc on<br />
echo "Adding $1 to br0..."<br />
sudo /usr/bin/brctl addif br0 $1<br />
sleep 2<br />
</nowiki>}}<br />
<br />
* Create the script that QEMU uses to bring down the tap adapter in {{ic|/etc/qemu-ifdown}} with {{ic|root:kvm}} 750 permissions:<br />
{{hc|/etc/qemu-ifdown|<nowiki><br />
#!/bin/sh<br />
<br />
echo "Executing /etc/qemu-ifdown"<br />
sudo /usr/bin/ip link set $1 down<br />
sudo /usr/bin/brctl delif br0 $1<br />
sudo /usr/bin/ip link delete dev $1<br />
</nowiki>}}<br />
<br />
* Use {{ic|visudo}} to add the following to your {{ic|sudoers}} file:<br />
<br />
{{bc|<nowiki><br />
Cmnd_Alias QEMU=/usr/bin/ip,/usr/bin/modprobe,/usr/bin/brctl<br />
%kvm ALL=NOPASSWD: QEMU<br />
</nowiki>}}<br />
<br />
* You launch QEMU using the following {{ic|run-qemu}} script:<br />
<br />
{{hc|run-qemu|<nowiki><br />
#!/bin/bash<br />
: '<br />
e.g. with img created via:<br />
qemu-img create -f qcow2 example.img 90G<br />
run-qemu -cdrom archlinux-x86_64.iso -boot order=d -drive file=example.img,format=qcow2 -m 4G -enable-kvm -cpu host -smp 4<br />
run-qemu -drive file=example.img,format=qcow2 -m 4G -enable-kvm -cpu host -smp 4<br />
'<br />
<br />
nicbr0() {<br />
sudo ip link set dev $1 promisc on up &> /dev/null<br />
sudo ip addr flush dev $1 scope host &>/dev/null<br />
sudo ip addr flush dev $1 scope site &>/dev/null<br />
sudo ip addr flush dev $1 scope global &>/dev/null<br />
sudo ip link set dev $1 master br0 &> /dev/null<br />
}<br />
_nicbr0() {<br />
sudo ip link set $1 promisc off down &> /dev/null<br />
sudo ip link set dev $1 nomaster &> /dev/null<br />
}<br />
<br />
HASBR0="$( ip link show | grep br0 )"<br />
if [ -z "$HASBR0" ] ; then<br />
ROUTER="192.168.1.1"<br />
SUBNET="192.168.1."<br />
NIC=$(ip link show | grep enp | grep 'state UP' | head -n 1 | cut -d":" -f 2 | xargs)<br />
IPADDR=$(ip addr show | grep -o "inet $SUBNET\([0-9]*\)" | cut -d ' ' -f2)<br />
sudo ip link add name br0 type bridge &> /dev/null<br />
sudo ip link set dev br0 up<br />
sudo ip addr add $IPADDR/24 brd + dev br0<br />
sudo ip route del default &> /dev/null<br />
sudo ip route add default via $ROUTER dev br0 onlink<br />
nicbr0 $NIC<br />
sudo iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT<br />
fi<br />
<br />
USERID=$(whoami)<br />
precreation=$(ip tuntap list | cut -d: -f1 | sort)<br />
sudo ip tuntap add user $USERID mode tap<br />
postcreation=$(ip tuntap list | cut -d: -f1 | sort)<br />
TAP=$(comm -13 <(echo "$precreation") <(echo "$postcreation"))<br />
nicbr0 $TAP<br />
<br />
printf -v MACADDR "52:54:%02x:%02x:%02x:%02x" $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff )) $(( $RANDOM & 0xff)) $(( $RANDOM & 0xff ))<br />
qemu-system-x86_64 -net nic,macaddr=$MACADDR,model=virtio \<br />
-net tap,ifname=$TAP,script=no,downscript=no,vhost=on \<br />
"$@"<br />
<br />
_nicbr0 $TAP<br />
sudo ip link set dev $TAP down &> /dev/null<br />
sudo ip tuntap del $TAP mode tap<br />
<br />
if [ -z "$HASBR0" ] ; then<br />
_nicbr0 $NIC<br />
sudo ip addr del dev br0 $IPADDR/24 &> /dev/null<br />
sudo ip link set dev br0 down<br />
sudo ip link delete br0 type bridge &> /dev/null<br />
sudo ip route del default &> /dev/null<br />
sudo ip link set dev $NIC up<br />
sudo ip route add default via $ROUTER dev $NIC onlink &> /dev/null<br />
fi<br />
</nowiki>}}<br />
<br />
Then, to launch a VM, do something like this:<br />
<br />
$ run-qemu -hda ''myvm.img'' -m 512<br />
<br />
* It is recommended for performance and security reasons to disable the [http://ebtables.netfilter.org/documentation/bridge-nf.html firewall on the bridge]:<br />
<br />
{{hc|/etc/sysctl.d/10-disable-firewall-on-bridge.conf|<nowiki><br />
net.bridge.bridge-nf-call-ip6tables = 0<br />
net.bridge.bridge-nf-call-iptables = 0<br />
net.bridge.bridge-nf-call-arptables = 0<br />
</nowiki>}}<br />
<br />
In order to apply the parameters described above on boot, you will also need to load the {{ic|br_netfilter}} module on boot. Otherwise, the parameters will not exist when sysctl tries to modify them.<br />
<br />
{{hc|/etc/modules-load.d/br_netfilter.conf|<nowiki><br />
br_netfilter<br />
</nowiki>}}<br />
<br />
Run {{ic|sysctl -p /etc/sysctl.d/10-disable-firewall-on-bridge.conf}} to apply the changes immediately.<br />
<br />
See the [https://wiki.libvirt.org/page/Networking#Creating_network_initscripts libvirt wiki] and [https://bugzilla.redhat.com/show_bug.cgi?id=512206 Fedora bug 512206]. If sysctl reports errors during boot about non-existing files, make the {{ic|bridge}} module load at boot. See [[Kernel module#systemd]].<br />
<br />
Alternatively, you can configure [[iptables]] to allow all traffic to be forwarded across the bridge by adding a rule like this:<br />
<br />
-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT<br />
<br />
==== Network sharing between physical device and a Tap device through iptables ====<br />
<br />
{{Merge|Internet_sharing|Duplication, not specific to QEMU.}}<br />
<br />
Bridged networking works fine between wired interfaces (e.g. {{ic|eth0}}) and is easy to set up. However, if the host is connected to the network through a wireless device, bridging is not possible.<br />
<br />
See [[Network bridge#Wireless interface on a bridge]] as a reference.<br />
<br />
One way to overcome this is to set up a tap device with a static IP, letting Linux handle the routing for it automatically, and then forward traffic between the tap interface and the device connected to the network through iptables rules.<br />
<br />
See [[Internet sharing]] as a reference.<br />
<br />
There you can find what is needed to share the network between devices, including tap and tun ones. The following hints further at some of the required host configuration. As indicated in the reference above, the client needs to be configured with a static IP, using the IP assigned to the tap interface as the gateway. The caveat is that the DNS servers on the client might need to be manually edited if they change when switching from one network-connected host device to another.<br />
<br />
To allow IP forwarding on every boot, add the following lines to a sysctl configuration file inside {{ic|/etc/sysctl.d}}:<br />
<br />
net.ipv4.ip_forward = 1<br />
net.ipv6.conf.default.forwarding = 1<br />
net.ipv6.conf.all.forwarding = 1<br />
<br />
The iptables rules can look like:<br />
<br />
# Forwarding from/to outside<br />
iptables -A FORWARD -i ${INT} -o ${EXT_0} -j ACCEPT<br />
iptables -A FORWARD -i ${INT} -o ${EXT_1} -j ACCEPT<br />
iptables -A FORWARD -i ${INT} -o ${EXT_2} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_0} -o ${INT} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_1} -o ${INT} -j ACCEPT<br />
iptables -A FORWARD -i ${EXT_2} -o ${INT} -j ACCEPT<br />
# NAT/Masquerade (network address translation)<br />
iptables -t nat -A POSTROUTING -o ${EXT_0} -j MASQUERADE<br />
iptables -t nat -A POSTROUTING -o ${EXT_1} -j MASQUERADE<br />
iptables -t nat -A POSTROUTING -o ${EXT_2} -j MASQUERADE<br />
<br />
The above supposes there are 3 devices connected to the network, sharing traffic with one internal device, where for example:<br />
<br />
INT=tap0<br />
EXT_0=eth0<br />
EXT_1=wlan0<br />
EXT_2=tun0<br />
<br />
This forwarding setup allows sharing both wired and wireless connections with the tap device.<br />
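The repetitive rules above can also be generated from a list of external interfaces. The sketch below only prints the commands so it can be reviewed first (drop the {{ic|echo}} and run as root to actually apply them):<br />

```shell
# generate the stateless forwarding and masquerade rules for one internal
# interface and any number of external ones
INT=tap0
for EXT in eth0 wlan0 tun0; do
    echo iptables -A FORWARD -i "$INT" -o "$EXT" -j ACCEPT
    echo iptables -A FORWARD -i "$EXT" -o "$INT" -j ACCEPT
    echo iptables -t nat -A POSTROUTING -o "$EXT" -j MASQUERADE
done
```
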
<br />
The forwarding rules shown are stateless and perform pure forwarding. One could restrict specific traffic, putting a firewall in place to protect the guest and others; however, that would decrease networking performance, while a simple bridge includes none of that.<br />
<br />
Bonus: whether the connection is wired or wireless, if the host connects through a VPN to a remote site with a tun device (say {{ic|tun0}}, as above) and the prior iptables rules are applied, the remote connection is also shared with the guest. This avoids the need for the guest to open its own VPN connection. Again, as the guest networking needs to be static, if connecting the host remotely this way, you will most probably need to edit the DNS servers on the guest.<br />
<br />
=== Networking with VDE2 ===<br />
<br />
{{Style|This section needs serious cleanup and may contain out-of-date information.}}<br />
<br />
==== What is VDE? ====<br />
<br />
VDE stands for Virtual Distributed Ethernet. It started as an enhancement of [[User-mode Linux|uml]]_switch. It is a toolbox to manage virtual networks.<br />
<br />
The idea is to create virtual switches, which are basically sockets, and to "plug" both physical and virtual machines into them. The configuration we show here is quite simple; however, VDE is much more powerful than this: it can plug virtual switches together, run them on different hosts and monitor the traffic in the switches. You are invited to read [https://wiki.virtualsquare.org/ the documentation of the project].<br />
<br />
The advantage of this method is that you do not have to add sudo privileges for your users. Regular users should not be allowed to run {{ic|modprobe}}.<br />
<br />
==== Basics ====<br />
<br />
VDE support can be [[install]]ed via the {{Pkg|vde2}} package.<br />
<br />
In this configuration, tun/tap is used to create a virtual interface on the host. Load the {{ic|tun}} module (see [[Kernel modules]] for details):<br />
<br />
# modprobe tun<br />
<br />
Now create the virtual switch:<br />
<br />
# vde_switch -tap tap0 -daemon -mod 660 -group users<br />
<br />
This line creates the switch, creates {{ic|tap0}}, "plugs" it, and allows the users of the group {{ic|users}} to use it.<br />
<br />
The interface is plugged in but not configured yet. To configure it, run this command:<br />
<br />
# ip addr add 192.168.100.254/24 dev tap0<br />
<br />
Now, you just have to run KVM with these {{ic|-net}} options as a normal user:<br />
<br />
$ qemu-system-x86_64 -net nic -net vde -hda ''[...]''<br />
<br />
Configure networking for your guest as you would do in a physical network.<br />
<br />
{{Tip|You might want to set up NAT on tap device to access the internet from the virtual machine. See [[Internet sharing#Enable NAT]] for more information.}}<br />
<br />
==== Startup scripts ====<br />
<br />
Example of main script starting VDE:<br />
<br />
{{hc|/etc/systemd/scripts/qemu-network-env|<nowiki><br />
#!/bin/sh<br />
# QEMU/VDE network environment preparation script<br />
<br />
# The IP configuration for the tap device that will be used for<br />
# the virtual machine network:<br />
<br />
TAP_DEV=tap0<br />
TAP_IP=192.168.100.254<br />
TAP_MASK=24<br />
TAP_NETWORK=192.168.100.0<br />
<br />
# Host interface<br />
NIC=eth0<br />
<br />
case "$1" in<br />
start)<br />
echo -n "Starting VDE network for QEMU: "<br />
<br />
# If you want tun kernel module to be loaded by script uncomment here<br />
#modprobe tun 2>/dev/null<br />
## Wait for the module to be loaded<br />
#while ! lsmod | grep -q "^tun"; do echo "Waiting for tun device"; sleep 1; done<br />
<br />
# Start tap switch<br />
vde_switch -tap "$TAP_DEV" -daemon -mod 660 -group users<br />
<br />
# Bring tap interface up<br />
ip address add "$TAP_IP"/"$TAP_MASK" dev "$TAP_DEV"<br />
ip link set "$TAP_DEV" up<br />
<br />
# Start IP Forwarding<br />
echo "1" > /proc/sys/net/ipv4/ip_forward<br />
iptables -t nat -A POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE<br />
;;<br />
stop)<br />
echo -n "Stopping VDE network for QEMU: "<br />
# Delete the NAT rules<br />
iptables -t nat -D POSTROUTING -s "$TAP_NETWORK"/"$TAP_MASK" -o "$NIC" -j MASQUERADE<br />
<br />
# Bring tap interface down<br />
ip link set "$TAP_DEV" down<br />
<br />
# Kill VDE switch<br />
pgrep vde_switch | xargs kill -TERM<br />
;;<br />
restart|reload)<br />
$0 stop<br />
sleep 1<br />
$0 start<br />
;;<br />
*)<br />
echo "Usage: $0 {start|stop|restart|reload}"<br />
exit 1<br />
esac<br />
exit 0<br />
</nowiki>}}<br />
<br />
Example of systemd service using the above script:<br />
<br />
{{hc|/etc/systemd/system/qemu-network-env.service|<nowiki><br />
[Unit]<br />
Description=Manage VDE Switch<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/etc/systemd/scripts/qemu-network-env start<br />
ExecStop=/etc/systemd/scripts/qemu-network-env stop<br />
RemainAfterExit=yes<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
</nowiki>}}<br />
<br />
Change permissions for {{ic|qemu-network-env}} to be [[executable]]. <br />
<br />
You can [[start]] {{ic|qemu-network-env.service}} as usual.<br />
<br />
==== Alternative method ====<br />
<br />
If the above method does not work, or you do not want to mess with kernel configs, TUN, dnsmasq, and iptables, you can do the following for the same result.<br />
<br />
# vde_switch -daemon -mod 660 -group users<br />
# slirpvde --dhcp --daemon<br />
<br />
Then, to start the VM with a connection to the network of the host:<br />
<br />
$ qemu-system-x86_64 -net nic,macaddr=52:54:00:00:EE:03 -net vde ''disk_image''<br />
<br />
=== VDE2 Bridge ===<br />
<br />
Based on the graphic in [https://selamatpagicikgu.wordpress.com/2011/06/08/quickhowto-qemu-networking-using-vde-tuntap-and-bridge/ quickhowto: qemu networking using vde, tun/tap, and bridge]. With this setup, any virtual machine connected to vde is externally exposed. For example, each virtual machine can receive DHCP configuration directly from your ADSL router.<br />
<br />
==== Basics ====<br />
<br />
Remember that you need the {{ic|tun}} module and the {{Pkg|bridge-utils}} package.<br />
<br />
Create the vde2/tap device:<br />
<br />
# vde_switch -tap tap0 -daemon -mod 660 -group users<br />
# ip link set tap0 up<br />
<br />
Create bridge:<br />
<br />
# brctl addbr br0<br />
<br />
Add devices:<br />
<br />
# brctl addif br0 eth0<br />
# brctl addif br0 tap0<br />
<br />
And configure bridge interface:<br />
<br />
# dhcpcd br0<br />
<br />
==== Startup scripts ====<br />
<br />
All devices must be set up, and only the bridge needs an IP address. For physical devices on the bridge (e.g. {{ic|eth0}}), this can be done with [[netctl]] using a custom Ethernet profile with:<br />
<br />
{{hc|/etc/netctl/ethernet-noip|2=<br />
Description='A more versatile static Ethernet connection'<br />
Interface=eth0<br />
Connection=ethernet<br />
IP=no<br />
}}<br />
<br />
The following custom systemd service can be used to create and activate a VDE2 tap interface for users in the {{ic|users}} user group.<br />
<br />
{{hc|/etc/systemd/system/vde2@.service|2=<br />
[Unit]<br />
Description=Network Connectivity for %i<br />
Wants=network.target<br />
Before=network.target<br />
<br />
[Service]<br />
Type=oneshot<br />
RemainAfterExit=yes<br />
ExecStart=/usr/bin/vde_switch -tap %i -daemon -mod 660 -group users<br />
ExecStart=/usr/bin/ip link set dev %i up<br />
ExecStop=/usr/bin/ip addr flush dev %i<br />
ExecStop=/usr/bin/ip link set dev %i down<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
}}<br />
<br />
And finally, you can create the [[Bridge with netctl|bridge interface with netctl]].<br />
<br />
=== Shorthand configuration ===<br />
<br />
If you are using QEMU with various networking options a lot, you probably have created a lot of {{ic|-netdev}} and {{ic|-device}} argument pairs, which gets quite repetitive. You can instead use the {{ic|-nic}} argument to combine {{ic|-netdev}} and {{ic|-device}} together, so that, for example, these arguments:<br />
<br />
-netdev tap,id=network0,ifname=tap0,script=no,downscript=no,vhost=on -device virtio-net-pci,netdev=network0<br />
<br />
become:<br />
<br />
-nic tap,script=no,downscript=no,vhost=on,model=virtio-net-pci<br />
<br />
Notice the lack of network IDs, and that the device was created with {{ic|1=model=}}. The first half of the {{ic|-nic}} parameters are {{ic|-netdev}} parameters, whereas the second half (after {{ic|1=model=}}) are related with the device. The same parameters (for example, {{ic|1=smb=}}) are used. To completely disable the networking use {{ic|-nic none}}.<br />
<br />
See [https://qemu.weilnetz.de/doc/6.0/system/net.html QEMU networking documentation] for more information on parameters you can use.<br />
<br />
== Graphic card ==<br />
<br />
QEMU can emulate a standard graphic card text mode using the {{ic|-curses}} command line option. This allows typing text and seeing text output directly inside a text terminal. Alternatively, {{ic|-nographic}} serves a similar purpose.<br />
<br />
QEMU can emulate several types of VGA card. The card type is passed in the {{ic|-vga ''type''}} command line option and can be {{ic|std}}, {{ic|qxl}}, {{ic|vmware}}, {{ic|virtio}}, {{ic|cirrus}} or {{ic|none}}.<br />
<br />
=== std ===<br />
<br />
With {{ic|-vga std}} you can get a resolution of up to 2560 x 1600 pixels without requiring guest drivers. This is the default since QEMU 2.2.<br />
<br />
=== qxl ===<br />
<br />
QXL is a paravirtual graphics driver with 2D support. To use it, pass the {{ic|-vga qxl}} option and install drivers in the guest. You may want to use [[#SPICE]] for improved graphical performance when using QXL.<br />
<br />
On Linux guests, the {{ic|qxl}} and {{ic|bochs_drm}} kernel modules must be loaded in order to achieve decent performance.<br />
<br />
The default VGA memory size for QXL devices is 16 MiB, which is sufficient to drive resolutions up to approximately QHD (2560x1440). To enable higher resolutions, [[#Multi-monitor support|increase vga_memmb]].<br />
<br />
=== vmware ===<br />
<br />
Although it is a bit buggy, it performs better than std and cirrus. Install the VMware drivers {{Pkg|xf86-video-vmware}} and {{Pkg|xf86-input-vmmouse}} for Arch Linux guests.<br />
<br />
=== virtio ===<br />
<br />
{{ic|virtio-vga}} / {{ic|virtio-gpu}} is a paravirtual 3D graphics driver based on [https://virgil3d.github.io/ virgl]. Currently a work in progress, supporting only very recent (>= 4.4) Linux guests with {{Pkg|mesa}} (>=11.2) compiled with the option {{ic|1=gallium-drivers=virgl}}.<br />
<br />
To enable 3D acceleration on the guest system select this vga with {{ic|-device virtio-vga-gl}} and enable the opengl context in the display device with {{ic|1=-display sdl,gl=on}} or {{ic|1=-display gtk,gl=on}} for the sdl and gtk display output respectively. Successful configuration can be confirmed by looking at the kernel log in the guest:<br />
<br />
{{hc|# dmesg {{!}} grep drm |<br />
[drm] pci: virtio-vga detected<br />
[drm] virgl 3d acceleration enabled<br />
}}<br />
<br />
=== cirrus ===<br />
<br />
The cirrus graphical adapter was the default [https://wiki.qemu.org/ChangeLog/2.2#VGA before 2.2]. It [https://www.kraxel.org/blog/2014/10/qemu-using-cirrus-considered-harmful/ should not] be used on modern systems.<br />
<br />
=== none ===<br />
<br />
This is like a PC that has no VGA card at all. You would not even be able to access it with the {{ic|-vnc}} option. Also, this is different from the {{ic|-nographic}} option which lets QEMU emulate a VGA card, but disables the SDL display.<br />
<br />
== SPICE ==<br />
<br />
The [https://www.spice-space.org/ SPICE project] aims to provide a complete open source solution for remote access to virtual machines in a seamless way.<br />
<br />
=== Enabling SPICE support on the host ===<br />
<br />
The following is an example of booting with SPICE as the remote desktop protocol, including the support for copy and paste from host:<br />
<br />
$ qemu-system-x86_64 -vga qxl -device virtio-serial-pci -spice port=5930,disable-ticketing=on -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent<br />
<br />
The parameters have the following meaning:<br />
<br />
# {{ic|-device virtio-serial-pci}} adds a virtio-serial device<br />
# {{ic|1=-spice port=5930,disable-ticketing=on}} sets TCP port {{ic|5930}} for SPICE channels to listen on and allows clients to connect without authentication{{Tip|Using [[wikipedia:Unix_socket|Unix sockets]] instead of TCP ports does not involve the network stack on the host system: packets are not encapsulated and decapsulated to traverse the network and the related protocols, and the sockets are identified solely by inodes on disk, which is considered better for performance. Use instead {{ic|1=-spice unix=on,addr=/tmp/vm_spice.socket,disable-ticketing=on}}.}}<br />
# {{ic|1=-device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0}} opens a port for spice vdagent in the virtio-serial device,<br />
# {{ic|1=-chardev spicevmc,id=spicechannel0,name=vdagent}} adds a spicevmc chardev for that port. It is important that the {{ic|1=chardev=}} option of the {{ic|virtserialport}} device matches the {{ic|1=id=}} option given to the {{ic|chardev}} option ({{ic|spicechannel0}} in this example). It is also important that the port name is {{ic|com.redhat.spice.0}}, because that is the name that vdagent in the guest looks for. And finally, specify {{ic|1=name=vdagent}} so that spice knows what this channel is for.<br />
<br />
=== Connecting to the guest with a SPICE client ===<br />
<br />
A SPICE client is necessary to connect to the guest. In Arch, the following clients are available:<br />
<br />
{{App|virt-viewer|SPICE client recommended by the protocol developers, a subset of the virt-manager project.|https://virt-manager.org/|{{Pkg|virt-viewer}}}}<br />
<br />
{{App|spice-gtk|SPICE GTK client, a subset of the SPICE project. Embedded into other applications as a widget.|https://www.spice-space.org/|{{Pkg|spice-gtk}}}}<br />
<br />
For clients that run on smartphones or other platforms, refer to the ''Other clients'' section in [https://www.spice-space.org/download.html spice-space download].<br />
<br />
==== Manually running a SPICE client ====<br />
<br />
One way of connecting to a guest listening on Unix socket {{ic|/tmp/vm_spice.socket}} is to manually run the SPICE client using {{ic|$ remote-viewer spice+unix:///tmp/vm_spice.socket}} or {{ic|1=$ spicy --uri="spice+unix:///tmp/vm_spice.socket"}}, depending on the desired client. Since QEMU in SPICE mode acts similarly to a remote desktop server, it may be more convenient to run QEMU in daemon mode with the {{ic|-daemonize}} parameter.<br />
<br />
{{Tip|To connect to the guest through SSH tunnelling, the following type of command can be used: {{bc|$ ssh -fL 5999:localhost:5930 ''my.domain.org'' sleep 10; spicy -h 127.0.0.1 -p 5999}}<br />
This example connects ''spicy'' to the local port {{ic|5999}} which is forwarded through SSH to the guest's SPICE server located at the address ''my.domain.org'', port {{ic|5930}}.<br />
Note the {{ic|-f}} option that requests ssh to execute the command {{ic|sleep 10}} in the background. This way, the ssh session runs while the client is active and auto-closes once the client ends.<br />
}}<br />
<br />
==== Running a SPICE client with QEMU ====<br />
<br />
QEMU can automatically start a SPICE client with an appropriate socket, if the display is set to SPICE with the {{ic|-display spice-app}} parameter. This will use the system's default SPICE client as the viewer, determined by your [[XDG MIME Applications#mimeapps.list|mimeapps.list]] files.<br />
<br />
=== Enabling SPICE support on the guest ===<br />
<br />
For '''Arch Linux guests''', for improved support for multiple monitors or clipboard sharing, the following packages should be installed:<br />
* {{Pkg|spice-vdagent}}: Spice agent xorg client that enables copy and paste between client and X-session and more. (Refer to this [https://github.com/systemd/systemd/issues/18791 issue], until fixed, for workarounds to get this to work on non-GNOME desktops.)<br />
* {{Pkg|xf86-video-qxl}}: Xorg X11 qxl video driver<br />
For guests under '''other operating systems''', refer to the ''Guest'' section in spice-space [https://www.spice-space.org/download.html download].<br />
<br />
=== Password authentication with SPICE ===<br />
<br />
If you want to enable password authentication with SPICE you need to remove {{ic|disable-ticketing}} from the {{ic|-spice}} argument and instead add {{ic|1=password=''yourpassword''}}. For example:<br />
<br />
$ qemu-system-x86_64 -vga qxl -spice port=5900,password=''yourpassword'' -device virtio-serial-pci -device virtserialport,chardev=spicechannel0,name=com.redhat.spice.0 -chardev spicevmc,id=spicechannel0,name=vdagent<br />
<br />
Your SPICE client should now ask for the password to be able to connect to the SPICE server.<br />
<br />
=== TLS encrypted communication with SPICE ===<br />
<br />
You can also configure TLS encryption for communicating with the SPICE server. First, you need to have a directory which contains the following files (the names must be exactly as indicated):<br />
<br />
* {{ic|ca-cert.pem}}: the CA master certificate.<br />
* {{ic|server-cert.pem}}: the server certificate signed with {{ic|ca-cert.pem}}.<br />
* {{ic|server-key.pem}}: the server private key.<br />
<br />
An example of generating self-signed certificates with your own CA for your server is shown in the [https://www.spice-space.org/spice-user-manual.html#_generating_self_signed_certificates Spice User Manual].<br />
<br />
Afterwards, you can run QEMU with SPICE as explained above but using the following {{ic|-spice}} argument: {{ic|1=-spice tls-port=5901,password=''yourpassword'',x509-dir=''/path/to/pki_certs''}}, where {{ic|''/path/to/pki_certs''}} is the directory path that contains the three needed files shown earlier.<br />
<br />
It is now possible to connect to the server using {{pkg|virt-viewer}}:<br />
<br />
$ remote-viewer spice://''hostname''?tls-port=5901 --spice-ca-file=''/path/to/ca-cert.pem'' --spice-host-subject="C=''XX'',L=''city'',O=''organization'',CN=''hostname''" --spice-secure-channels=all<br />
<br />
Keep in mind that the {{ic|--spice-host-subject}} parameter needs to be set according to your {{ic|server-cert.pem}} subject. You also need to copy {{ic|ca-cert.pem}} to every client to verify the server certificate.<br />
<br />
{{Tip|You can get the subject line of the server certificate in the correct format for {{ic|--spice-host-subject}} (with entries separated by commas) using the following command: {{bc|<nowiki>$ openssl x509 -noout -subject -in server-cert.pem | cut -d' ' -f2- | sed 's/\///' | sed 's/\//,/g'</nowiki>}}<br />
}}<br />
<br />
The equivalent {{Pkg|spice-gtk}} command is:<br />
<br />
$ spicy -h ''hostname'' -s 5901 --spice-ca-file=ca-cert.pem --spice-host-subject="C=''XX'',L=''city'',O=''organization'',CN=''hostname''" --spice-secure-channels=all<br />
<br />
== VNC ==<br />
<br />
One can add the {{ic|-vnc :''X''}} option to have QEMU redirect the VGA display to the VNC session. Substitute {{ic|''X''}} for the number of the display (display 0 will then listen on TCP port 5900, display 1 on 5901, and so on).<br />
<br />
$ qemu-system-x86_64 -vnc :0<br />
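<br />
The display-number-to-port mapping is a plain offset from 5900, which can be computed with shell arithmetic; a minimal sketch (the display number here is an arbitrary example):<br />

```shell
# VNC display numbers map to TCP ports as 5900 + display number
display=1
port=$((5900 + display))
echo "$port"   # prints 5901
```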
<br />
An example is also provided in the [[#Starting QEMU virtual machines on boot]] section.<br />
{{Warning|The default VNC server setup does not use any form of authentication. Any user can connect from any host.}}<br />
<br />
=== Basic password authentication ===<br />
<br />
An access password can be set up easily using the {{ic|password}} option. The password must be set in the QEMU monitor; connection is only possible once the password is provided.<br />
<br />
$ qemu-system-x86_64 -vnc :0,password -monitor stdio<br />
<br />
In the QEMU monitor, the password is set using the command {{ic|change vnc password}} and then entering the password.<br />
<br />
The following command line directly runs VNC with a password:<br />
<br />
$ printf "change vnc password\n%s\n" MYPASSWORD | qemu-system-x86_64 -vnc :0,password -monitor stdio<br />
<br />
{{Note|The password is limited to 8 characters and can be guessed through a brute force attack. More elaborate protection is strongly recommended on public networks.}}<br />
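<br />
Since the password is capped at 8 characters, a random one can be generated on the host; a sketch assuming {{ic|/dev/urandom}} and coreutils ({{ic|base64}}) are available:<br />

```shell
# Generate a random 8-character VNC password
# (6 random bytes base64-encode to exactly 8 characters, without padding)
vnc_pass=$(head -c 6 /dev/urandom | base64)
echo "${#vnc_pass}"   # prints 8
```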
<br />
== Audio ==<br />
<br />
=== Creating an audio backend ===<br />
<br />
The {{ic|-audiodev}} flag sets the audio backend driver on the host and its options. The list of available audio backend drivers and their optional settings is detailed in the {{man|1|qemu}} man page.<br />
<br />
At the bare minimum, one needs to choose an audio backend and set an id; for [[PulseAudio]], for example:<br />
<br />
-audiodev pa,id=snd0<br />
<br />
=== Using the audio backend ===<br />
<br />
==== Intel HD Audio ====<br />
<br />
For Intel HD Audio emulation, add both controller and codec devices. To list the available Intel HDA Audio devices:<br />
<br />
$ qemu-system-x86_64 -device help | grep hda<br />
<br />
Add the audio controller:<br />
<br />
-device ich9-intel-hda<br />
<br />
Also add the audio codec and map it to a host audio backend id:<br />
<br />
-device hda-output,audiodev=snd0<br />
<br />
==== Intel 82801AA AC97 ====<br />
<br />
For AC97 emulation, just add the audio card device and map it to a host audio backend id:<br />
<br />
-device AC97,audiodev=snd0<br />
<br />
{{Note|<br />
* If the audiodev backend is not provided, QEMU looks it up and adds it automatically; this only works for a single audiodev. For example {{ic|-device intel-hda -device hda-duplex}} will emulate {{ic|intel-hda}} on the guest using the default audiodev backend.<br />
* Emulated video card drivers for the guest machine may also cause problems with sound quality. Test them one by one to make it work. You can list possible options with {{ic|<nowiki>qemu-system-x86_64 -h | grep vga</nowiki>}}.}}<br />
<br />
== Installing virtio drivers ==<br />
<br />
QEMU offers guests the ability to use paravirtualized block and network devices using the [https://wiki.libvirt.org/page/Virtio virtio] drivers, which provide better performance and lower overhead.<br />
<br />
* A virtio block device requires the option {{Ic|-drive}} for passing a disk image, with parameter {{Ic|1=if=virtio}}:<br />
$ qemu-system-x86_64 -drive file=''disk_image'',if='''virtio'''<br />
<br />
* Almost the same goes for the network:<br />
$ qemu-system-x86_64 -nic user,model='''virtio-net-pci'''<br />
<br />
{{Note|This will only work if the guest machine has drivers for virtio devices. Linux does, and the required drivers are included in Arch Linux, but there is no guarantee that virtio devices will work with other operating systems.}}<br />
<br />
=== Preparing an Arch Linux guest ===<br />
<br />
To use virtio devices after an Arch Linux guest has been installed, the following modules must be loaded in the guest: {{Ic|virtio}}, {{Ic|virtio_pci}}, {{Ic|virtio_blk}}, {{Ic|virtio_net}}, and {{Ic|virtio_ring}}. For 32-bit guests, the specific "virtio" module is not necessary.<br />
<br />
If you want to boot from a virtio disk, the initial ramdisk must contain the necessary modules. By default, this is handled by [[mkinitcpio]]'s {{ic|autodetect}} hook. Otherwise use the {{ic|MODULES}} array in {{ic|/etc/mkinitcpio.conf}} to include the necessary modules and rebuild the initial ramdisk.<br />
<br />
{{hc|/etc/mkinitcpio.conf|2=<br />
MODULES=(virtio virtio_blk virtio_pci virtio_net)}}<br />
<br />
Virtio disks are recognized with the prefix {{ic|'''v'''}} (e.g. {{ic|'''v'''da}}, {{ic|'''v'''db}}, etc.); therefore, changes must be made in at least {{ic|/etc/fstab}} and {{ic|/boot/grub/grub.cfg}} when booting from a virtio disk.<br />
<br />
{{Tip|When disks are referenced by [[UUID]] in both {{ic|/etc/fstab}} and the boot loader, nothing has to be done.}}<br />
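<br />
When device paths rather than UUIDs are used, the rename is mechanical; a ''sed'' sketch of the {{ic|sda}} → {{ic|vda}} rewrite (the fstab line below is a hypothetical sample — apply the same substitution to a backup of the real file):<br />

```shell
# Demonstrate the sda -> vda rewrite needed in /etc/fstab for virtio disks
new_line=$(echo "/dev/sda1 / ext4 defaults 0 1" | sed "s/sda/vda/")
echo "$new_line"   # prints /dev/vda1 / ext4 defaults 0 1
```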
<br />
Further information on paravirtualization with KVM can be found [https://www.linux-kvm.org/page/Boot_from_virtio_block_device here].<br />
<br />
You might also want to install {{Pkg|qemu-guest-agent}} to implement support for QMP commands that will enhance the hypervisor management capabilities.<br />
<br />
=== Preparing a Windows guest ===<br />
<br />
==== Virtio drivers for Windows ====<br />
<br />
Windows does not come with the virtio drivers. The latest and stable versions of the drivers are regularly built by Fedora, details on downloading the drivers are given on [https://github.com/virtio-win/virtio-win-pkg-scripts/blob/master/README.md virtio-win on GitHub]. In the following sections we will mostly use the stable ISO file provided here: [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso virtio-win.iso]. Alternatively, use {{AUR|virtio-win}}.<br />
<br />
==== Block device drivers ====<br />
<br />
===== New Install of Windows =====<br />
<br />
The drivers need to be loaded during installation. The procedure is to load the ISO image with the virtio drivers in a cdrom device, along with the primary disk device and the Windows ISO install media:<br />
<br />
$ qemu-system-x86_64 ... \<br />
-drive file=''disk_image'',index=0,media=disk,if=virtio \<br />
-drive file=''windows.iso'',index=2,media=cdrom \<br />
-drive file=''virtio-win.iso'',index=3,media=cdrom \<br />
...<br />
<br />
At some stage during the installation, the Windows installer will ask "Where do you want to install Windows?" and warn that no disks are found. Follow the example instructions below (based on Windows Server 2012 R2 with Update).<br />
<br />
* Select the option ''Load Drivers''.<br />
* Uncheck the box for ''Hide drivers that are not compatible with this computer's hardware''.<br />
* Click the browse button and open the CDROM for the virtio iso, usually named "virtio-win-XX".<br />
* Now browse to {{ic|E:\viostor\[your-os]\amd64}}, select it, and confirm.<br />
<br />
You should now see your virtio disk(s) listed here, ready to be selected, formatted and installed to.<br />
<br />
===== Change existing Windows VM to use virtio =====<br />
<br />
Modifying an existing Windows guest to boot from a virtio disk requires that the virtio driver is loaded by the guest at boot time.<br />
We will therefore need to teach Windows to load the virtio driver at boot time before being able to boot a disk image in virtio mode.<br />
<br />
To achieve that, first create a new disk image that will be attached in virtio mode and trigger the search for the driver:<br />
<br />
$ qemu-img create -f qcow2 ''dummy.qcow2'' 1G<br />
<br />
Run the original Windows guest with the boot disk still in IDE mode, the fake disk in virtio mode and the driver ISO image.<br />
<br />
$ qemu-system-x86_64 -m 4G -drive file=''disk_image'',if=ide -drive file=''dummy.qcow2'',if=virtio -cdrom virtio-win.iso<br />
<br />
Windows will detect the fake disk and look for a suitable driver. If it fails, go to ''Device Manager'', locate the SCSI drive with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not navigate to the driver folder within the CD-ROM, simply select the CD-ROM drive and Windows will find the appropriate driver automatically (tested for Windows 7 SP1).<br />
<br />
Request Windows to boot in safe mode next time it starts up. This can be done using the ''msconfig.exe'' tool in Windows. In safe mode all the drivers will be loaded at boot time, including the new virtio driver. Once Windows knows that the virtio driver is required at boot, it will remember it for future boots.<br />
<br />
Once instructed to boot in safe mode, you can turn off the virtual machine and launch it again, now with the boot disk attached in virtio mode:<br />
<br />
$ qemu-system-x86_64 -m 4G -drive file=''disk_image'',if=virtio<br />
<br />
You should now boot in safe mode with the virtio driver loaded; you can return to ''msconfig.exe'', disable safe mode boot, and restart Windows.<br />
<br />
{{Note|If you encounter the blue screen of death using the {{ic|1=if=virtio}} parameter, it probably means the virtio disk driver is not installed or not loaded at boot time, reboot in safe mode and check your driver configuration.}}<br />
<br />
==== Network drivers ====<br />
<br />
Installing virtio network drivers is a bit easier: simply add the {{ic|-nic}} argument.<br />
<br />
$ qemu-system-x86_64 -m 4G -drive file=''windows_disk_image'',if=virtio -nic user,model=virtio-net-pci -cdrom virtio-win.iso<br />
<br />
Windows will detect the network adapter and try to find a driver for it. If it fails, go to the ''Device Manager'', locate the network adapter with an exclamation mark icon (should be open), click ''Update driver'' and select the virtual CD-ROM. Do not forget to select the checkbox which says to search for directories recursively.<br />
<br />
==== Balloon driver ====<br />
<br />
If you want to track your guest's memory state (for example via the {{ic|virsh}} command {{ic|dommemstat}}) or change the guest's memory size at runtime (you still will not be able to change the memory size itself, but you can limit memory usage by inflating the balloon driver), you will need to install the guest balloon driver.<br />
<br />
For this you will need to go to ''Device Manager'', locate ''PCI standard RAM Controller'' in ''System devices'' (or an unrecognized PCI controller under ''Other devices'') and choose ''Update driver''. In the window that opens, choose ''Browse my computer...'' and select the CD-ROM (and do not forget the ''Include subdirectories'' checkbox). Reboot after installation. This installs the driver and lets you inflate the balloon (for example via the hmp command {{ic|balloon ''memory_size''}}, which causes the balloon to take as much memory as possible in order to shrink the guest's available memory size to ''memory_size''). However, you still will not be able to track the guest memory state. In order to do this you will need to install the ''Balloon'' service properly. For that, open a command line as administrator, go to the CD-ROM, then to the ''Balloon'' directory and deeper, depending on your system and architecture. Once you are in the ''amd64'' (''x86'') directory, run {{ic|blnsrv.exe -i}}, which performs the installation. After that, the {{ic|virsh}} command {{ic|dommemstat}} should output all supported values.<br />
<br />
=== Preparing a FreeBSD guest ===<br />
<br />
Install the {{ic|emulators/virtio-kmod}} port if you are using FreeBSD 8.3 or later, up until 10.0-CURRENT, where the drivers are included in the kernel. After installation, add the following to your {{ic|/boot/loader.conf}} file:<br />
<br />
{{bc|1=<br />
virtio_load="YES"<br />
virtio_pci_load="YES"<br />
virtio_blk_load="YES"<br />
if_vtnet_load="YES"<br />
virtio_balloon_load="YES"<br />
}}<br />
<br />
Then modify your {{ic|/etc/fstab}} by doing the following:<br />
<br />
# sed -ibak "s/ada/vtbd/g" /etc/fstab<br />
<br />
And verify that {{ic|/etc/fstab}} is consistent. If anything goes wrong, just boot into a rescue CD and copy {{ic|/etc/fstab.bak}} back to {{ic|/etc/fstab}}.<br />
<br />
== QEMU monitor ==<br />
<br />
While QEMU is running, a monitor console provides several ways to interact with the running virtual machine. The QEMU monitor offers interesting capabilities such as obtaining information about the current virtual machine, hotplugging devices, creating snapshots of the current state of the virtual machine, etc. To see the list of all commands, run {{ic|help}} or {{ic|?}} in the QEMU monitor console or review the relevant section of the [https://www.qemu.org/docs/master/system/monitor.html official QEMU documentation].<br />
<br />
=== Accessing the monitor console ===<br />
<br />
==== Graphical view ====<br />
<br />
When using the {{ic|std}} default graphics option, one can access the QEMU monitor by pressing {{ic|Ctrl+Alt+2}} or by clicking ''View > compatmonitor0'' in the QEMU window. To return to the virtual machine graphical view either press {{ic|Ctrl+Alt+1}} or click ''View > VGA''.<br />
<br />
However, the standard method of accessing the monitor is not always convenient and does not work with all graphic outputs QEMU supports.<br />
<br />
==== Telnet ====<br />
<br />
To enable [[telnet]], run QEMU with the {{ic|-monitor telnet:127.0.0.1:''port'',server,nowait}} parameter. When the virtual machine is started you will be able to access the monitor via telnet:<br />
<br />
$ telnet 127.0.0.1 ''port''<br />
<br />
{{Note|If {{ic|127.0.0.1}} is specified as the listening IP, it will only be possible to connect to the monitor from the same host QEMU is running on. If connecting from remote hosts is desired, QEMU must be told to listen on {{ic|0.0.0.0}} as follows: {{ic|-monitor telnet:0.0.0.0:''port'',server,nowait}}. Keep in mind that it is recommended to have a [[firewall]] configured in this case, or to make sure your local network is completely trustworthy, since this connection is completely unauthenticated and unencrypted.}}<br />
<br />
==== UNIX socket ====<br />
<br />
Run QEMU with the {{ic|-monitor unix:''socketfile'',server,nowait}} parameter. Then you can connect with {{pkg|socat}}, {{pkg|nmap}} or {{pkg|openbsd-netcat}}.<br />
<br />
For example, if QEMU is run via:<br />
<br />
$ qemu-system-x86_64 -monitor unix:/tmp/monitor.sock,server,nowait ''[...]''<br />
<br />
It is possible to connect to the monitor with:<br />
<br />
$ socat - UNIX-CONNECT:/tmp/monitor.sock<br />
<br />
Or with:<br />
<br />
$ nc -U /tmp/monitor.sock<br />
<br />
Alternatively with {{pkg|nmap}}:<br />
<br />
$ ncat -U /tmp/monitor.sock<br />
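Any of these clients is interchangeable. A small sketch (assuming the {{ic|/tmp/monitor.sock}} path from the example above) that builds the connect command from whichever client happens to be installed:<br />

```shell
# Sketch: build the monitor connect command from whichever UNIX-socket
# client is installed ("/tmp/monitor.sock" is the example path from above).
sock=/tmp/monitor.sock
if command -v socat >/dev/null 2>&1; then
    client="socat - UNIX-CONNECT:$sock"
elif command -v nc >/dev/null 2>&1; then
    client="nc -U $sock"
else
    client="ncat -U $sock"
fi
echo "connect with: $client"
```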
<br />
==== TCP ====<br />
<br />
You can expose the monitor over TCP with the argument {{ic|-monitor tcp:127.0.0.1:''port'',server,nowait}}. Then connect with netcat, either {{pkg|openbsd-netcat}} or {{pkg|gnu-netcat}}, by running:<br />
<br />
$ nc 127.0.0.1 ''port''<br />
<br />
{{Note|To be able to connect to the TCP socket from devices other than the host QEMU is running on, you need to listen on {{ic|0.0.0.0}}, as explained in the telnet case. The same security warnings apply here as well.}}<br />
<br />
==== Standard I/O ====<br />
<br />
It is possible to access the monitor directly from the same terminal QEMU is run in by starting it with the argument {{ic|-monitor stdio}}.<br />
<br />
=== Sending keyboard presses to the virtual machine using the monitor console ===<br />
<br />
Some combinations of keys may be difficult to perform on virtual machines due to the host intercepting them instead in some configurations (a notable example is the {{ic|Ctrl+Alt+F*}} key combinations, which change the active tty). To avoid this problem, the problematic combination of keys may be sent via the monitor console instead. Switch to the monitor and use the {{ic|sendkey}} command to forward the necessary keypresses to the virtual machine. For example:<br />
<br />
(qemu) sendkey ctrl-alt-f2<br />
<br />
=== Creating and managing snapshots via the monitor console ===<br />
<br />
{{Note|This feature will '''only''' work when the virtual machine disk image is in ''qcow2'' format. It will not work with ''raw'' images.}}<br />
<br />
It is sometimes desirable to save the current state of a virtual machine and to be able to revert it to a previously saved snapshot at any time. The QEMU monitor console provides the user with the necessary utilities to create snapshots, manage them, and revert the machine state to a saved snapshot.<br />
<br />
* Use {{ic|savevm ''name''}} in order to create a snapshot with the tag ''name''.<br />
* Use {{ic|loadvm ''name''}} to revert the virtual machine to the state of the snapshot ''name''.<br />
* Use {{ic|delvm ''name''}} to delete the snapshot tagged as ''name''.<br />
* Use {{ic|info snapshots}} to see a list of saved snapshots. Snapshots are identified by both an auto-incremented ID number and a text tag (set by the user on snapshot creation).<br />
<br />
=== Running the virtual machine in immutable mode ===<br />
<br />
It is possible to run a virtual machine in a frozen state, so that all changes are discarded when it is powered off, simply by running QEMU with the {{ic|-snapshot}} parameter. When the guest writes to the disk image, the changes are saved in a temporary file in {{ic|/tmp}} and are discarded when QEMU halts.<br />
<br />
However, while a machine is running in frozen mode, it is still possible to save the changes to the disk image, if desired, by using the monitor console and running the following command:<br />
<br />
(qemu) commit all<br />
<br />
Snapshots created while running in frozen mode are likewise discarded as soon as QEMU exits, unless the changes are explicitly committed to disk.<br />
<br />
=== Pause and power options via the monitor console ===<br />
<br />
Some operations of a physical machine can be emulated by QEMU using some monitor commands:<br />
<br />
* {{ic|system_powerdown}} will send an ACPI shutdown request to the virtual machine. This effect is similar to the power button in a physical machine.<br />
* {{ic|system_reset}} will reset the virtual machine similarly to a reset button in a physical machine. This operation can cause data loss and file system corruption since the virtual machine is not cleanly restarted.<br />
* {{ic|stop}} will pause the virtual machine.<br />
* {{ic|cont}} will resume a virtual machine previously paused.<br />
<br />
=== Taking screenshots of the virtual machine ===<br />
<br />
Screenshots of the virtual machine graphic display can be obtained in the PPM format by running the following command in the monitor console:<br />
<br />
(qemu) screendump ''file.ppm''<br />
<br />
== QEMU machine protocol ==<br />
<br />
The QEMU machine protocol (QMP) is a JSON-based protocol which allows applications to control a QEMU instance. Similarly to the [[#QEMU monitor]], it offers ways to interact with a running machine, and the JSON protocol makes it possible to do so programmatically. The description of all the QMP commands can be found in [https://raw.githubusercontent.com/coreos/qemu/master/qmp-commands.hx qmp-commands].<br />
<br />
=== Start QMP ===<br />
<br />
The usual way to control the guest using the QMP protocol is to open a TCP socket when launching the machine using the {{ic|-qmp}} option. The following example uses TCP port 4444:<br />
<br />
$ qemu-system-x86_64 ''[...]'' -qmp tcp:localhost:4444,server,nowait<br />
<br />
Then one way to communicate with the QMP agent is to use [[netcat]]:<br />
<br />
{{hc|nc localhost 4444|{"QMP": {"version": {"qemu": {"micro": 0, "minor": 1, "major": 3}, "package": ""}, "capabilities": []} } }}<br />
<br />
At this stage, the only command that will be recognized is {{ic|qmp_capabilities}}, which makes QMP enter command mode. Type:<br />
<br />
{"execute": "qmp_capabilities"}<br />
<br />
Now QMP is ready to receive commands. To retrieve the list of recognized commands, use:<br />
<br />
{"execute": "query-commands"}<br />
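The handshake and follow-up commands can also be scripted. A minimal sketch, using a hypothetical {{ic|qmp_cmd}} helper (not part of QEMU) to format commands as single-line JSON suitable for piping into the socket:<br />

```shell
# Hypothetical helper: format a QMP command (and optional arguments object)
# as one line of JSON, e.g. for:
#   { qmp_cmd qmp_capabilities; qmp_cmd query-commands; } | nc localhost 4444
qmp_cmd() {
    printf '{"execute": "%s"%s}\n' "$1" "${2:+, \"arguments\": $2}"
}

qmp_cmd qmp_capabilities   # → {"execute": "qmp_capabilities"}
qmp_cmd query-commands
```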
<br />
=== Live merging of child image into parent image ===<br />
<br />
It is possible to merge a running snapshot into its parent by issuing a {{ic|block-commit}} command. In its simplest form the following line will commit the child into its parent:<br />
<br />
{"execute": "block-commit", "arguments": {"device": "''devicename''"}}<br />
<br />
Upon reception of this command, the handler looks for the base image, converts it from read-only to read-write mode, and then runs the commit job.<br />
<br />
Once the ''block-commit'' operation has completed, the event {{ic|BLOCK_JOB_READY}} will be emitted, signalling that the synchronization has finished. The job can then be gracefully completed by issuing the command {{ic|block-job-complete}}:<br />
<br />
{"execute": "block-job-complete", "arguments": {"device": "''devicename''"}}<br />
<br />
Until such a command is issued, the ''commit'' operation remains active.<br />
After successful completion, the base image remains in read-write mode and becomes the new active layer. On the other hand, the child image becomes invalid and it is the responsibility of the user to clean it up.<br />
<br />
{{Tip|The list of device and their names can be retrieved by executing the command {{ic|query-block}} and parsing the results. The device name is in the {{ic|device}} field, for example {{ic|ide0-hd0}} for the hard disk in this example: {{hc|{"execute": "query-block"}|{"return": [{"io-status": "ok", "device": "'''ide0-hd0'''", "locked": false, "removable": false, "inserted": {"iops_rd": 0, "detect_zeroes": "off", "image": {"backing-image": {"virtual-size": 27074281472, "filename": "parent.qcow2", ... } }} }}<br />
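When scripting, the device names can be pulled out of a saved reply without a full JSON parser. A rough sketch (the reply below is a trimmed, assumed sample; a real one comes back over the QMP socket):<br />

```shell
# Sketch: extract device names from a (trimmed, assumed) query-block reply.
reply='{"return": [{"io-status": "ok", "device": "ide0-hd0", "locked": false}]}'
devices=$(printf '%s' "$reply" | grep -o '"device": "[^"]*"' | cut -d'"' -f4)
echo "$devices"   # → ide0-hd0
```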
<br />
=== Live creation of a new snapshot ===<br />
<br />
To create a new snapshot out of a running image, run the command:<br />
<br />
{"execute": "blockdev-snapshot-sync", "arguments": {"device": "''devicename''","snapshot-file": "''new_snapshot_name''.qcow2"}}<br />
<br />
This creates an overlay file named {{ic|''new_snapshot_name''.qcow2}} which then becomes the new active layer.<br />
<br />
== Tips and tricks ==<br />
<br />
=== Improve virtual machine performance ===<br />
<br />
There are a number of techniques that you can use to improve the performance of the virtual machine. For example:<br />
<br />
* Apply [[#Enabling KVM]] for full virtualization.<br />
* Use the {{ic|-cpu host}} option to make QEMU emulate the host's exact CPU rather than a more generic CPU.<br />
* Especially for Windows guests, enable [https://blog.wikichoon.com/2014/07/enabling-hyper-v-enlightenments-with-kvm.html Hyper-V enlightenments]: {{ic|1=-cpu host,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time}}.<br />
* If the host machine has multiple cores, assign the guest more cores using the {{ic|-smp}} option.<br />
* Make sure you have assigned the virtual machine enough memory. By default, QEMU only assigns 128 MiB of memory to each virtual machine. Use the {{ic|-m}} option to assign more memory. For example, {{ic|-m 1024}} runs a virtual machine with 1024 MiB of memory.<br />
* If supported by drivers in the guest operating system, use virtio for network and/or block devices, see [[#Installing virtio drivers]].<br />
* Use TAP devices instead of user-mode networking, see [[#Tap networking with QEMU]].<br />
* If the guest OS is doing heavy writing to its disk, you may benefit from certain mount options on the host's file system. For example, you can mount an [[Ext4|ext4 file system]] with the option {{ic|1=barrier=0}}. You should read the documentation for any options that you change because sometimes performance-enhancing options for file systems come at the cost of data integrity.<br />
* If you have a raw disk image, you may want to disable the cache:{{bc|1=$ qemu-system-x86_64 -drive file=''disk_image'',if=virtio,'''cache=none'''}}<br />
* Use the native Linux AIO: {{bc|1=$ qemu-system-x86_64 -drive file=''disk_image'',if=virtio''',aio=native,cache.direct=on'''}}<br />
* If you are running multiple virtual machines concurrently that all have the same operating system installed, you can save memory by enabling [[wikipedia:Kernel_SamePage_Merging_(KSM)|kernel same-page merging]]. See [[#Enabling KSM]].<br />
* In some cases, memory can be reclaimed from running virtual machines by running a memory ballooning driver in the guest operating system and launching QEMU using {{ic|-device virtio-balloon}}.<br />
* It is possible to use an emulation layer for an ICH-9 AHCI controller (although it may be unstable). The AHCI emulation supports [[Wikipedia:Native_Command_Queuing|NCQ]], so multiple read or write requests can be outstanding at the same time: {{bc|1=$ qemu-system-x86_64 -drive id=disk,file=''disk_image'',if=none -device ich9-ahci,id=ahci -device ide-drive,drive=disk,bus=ahci.0}}<br />
<br />
See https://www.linux-kvm.org/page/Tuning_KVM for more information.<br />
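As an illustration, several of the tweaks above combine into a single invocation. This is only a sketch: {{ic|disk.qcow2}} and the {{ic|-smp}}/{{ic|-m}} values are placeholders to adjust for your host:<br />

```shell
# Sketch: a command line combining KVM, host CPU passthrough, more
# cores/memory, virtio disk I/O and memory ballooning. "disk.qcow2" and
# the -smp/-m values are placeholders.
qemu_args="-enable-kvm -cpu host -smp 4 -m 4096 \
 -drive file=disk.qcow2,if=virtio,aio=native,cache.direct=on \
 -device virtio-balloon"
echo "qemu-system-x86_64 $qemu_args"
```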
<br />
=== Starting QEMU virtual machines on boot ===<br />
<br />
==== With libvirt ====<br />
<br />
If a virtual machine is set up with [[libvirt]], it can be configured with {{ic|virsh autostart}} or through the ''virt-manager'' GUI to start at host boot by going to the Boot Options for the virtual machine and selecting "Start virtual machine on host boot up".<br />
<br />
==== With systemd service ====<br />
<br />
To run QEMU VMs on boot, you can use the following systemd unit and configuration.<br />
<br />
{{hc|/etc/systemd/system/qemu@.service|2=<br />
[Unit]<br />
Description=QEMU virtual machine<br />
<br />
[Service]<br />
Environment="haltcmd=kill -INT $MAINPID"<br />
EnvironmentFile=/etc/conf.d/qemu.d/%i<br />
ExecStart=/usr/bin/qemu-system-x86_64 -name %i -enable-kvm -m 512 -nographic $args<br />
ExecStop=/usr/bin/bash -c ${haltcmd}<br />
ExecStop=/usr/bin/bash -c 'while nc localhost 7100; do sleep 1; done'<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
}}<br />
<br />
{{Note|This service will wait for the console port to be released, which means that the VM has been shut down, before ending gracefully.}}<br />
<br />
Then create per-VM configuration files, named {{ic|/etc/conf.d/qemu.d/''vm_name''}}, with the variables {{ic|args}} and {{ic|haltcmd}} set. Example configs:<br />
<br />
{{hc|/etc/conf.d/qemu.d/one|2=<br />
args="-hda /dev/vg0/vm1 -serial telnet:localhost:7000,server,nowait,nodelay \<br />
-monitor telnet:localhost:7100,server,nowait,nodelay -vnc :0"<br />
<br />
haltcmd="echo 'system_powerdown' {{!}} nc localhost 7100" # or netcat/ncat}}<br />
<br />
{{hc|/etc/conf.d/qemu.d/two|2=<br />
args="-hda /srv/kvm/vm2 -serial telnet:localhost:7001,server,nowait,nodelay -vnc :1"<br />
<br />
haltcmd="ssh powermanager@vm2 sudo poweroff"}}<br />
<br />
The variables are as follows:<br />
<br />
* {{ic|args}} - QEMU command line arguments to be used.<br />
* {{ic|haltcmd}} - Command to shut down a VM safely. In the first example, the QEMU monitor is exposed via telnet using {{ic|-monitor telnet:..}} and the VMs are powered off via ACPI by sending {{ic|system_powerdown}} to monitor with the {{ic|nc}} command. In the other example, SSH is used.<br />
<br />
To set which virtual machines will start on boot-up, [[enable]] the {{ic|qemu@''vm_name''.service}} systemd unit.<br />
<br />
=== Mouse integration ===<br />
<br />
To prevent the mouse from being grabbed when clicking on the guest operating system's window, add the options {{ic|-usb -device usb-tablet}}. This means QEMU is able to report the mouse position without having to grab the mouse. This also overrides PS/2 mouse emulation when activated. For example:<br />
<br />
$ qemu-system-x86_64 -hda ''disk_image'' -m 512 -usb -device usb-tablet<br />
<br />
If that does not work, try using the {{ic|-vga qxl}} parameter, and also look at the instructions in [[#Mouse cursor is jittery or erratic]].<br />
<br />
=== Pass-through host USB device ===<br />
<br />
It is possible to access a physical device connected to a USB port of the host from the guest. The first step is to identify where the device is connected; this can be found by running the {{ic|lsusb}} command. For example:<br />
<br />
{{hc|$ lsusb|<br />
...<br />
Bus '''003''' Device '''007''': ID '''0781''':'''5406''' SanDisk Corp. Cruzer Micro U3<br />
}}<br />
<br />
The outputs in bold above will be useful to identify respectively the ''host_bus'' and ''host_addr'' or the ''vendor_id'' and ''product_id''.<br />
<br />
In QEMU, the idea is to emulate an EHCI (USB 2) or XHCI (USB 1.1, 2 and 3) controller with the option {{ic|1=-device usb-ehci,id=ehci}} or {{ic|1=-device qemu-xhci,id=xhci}} respectively, and then attach the physical device to it with the option {{ic|1=-device usb-host,..}}. We will consider that ''controller_id'' is either {{ic|ehci}} or {{ic|xhci}} for the rest of this section.<br />
<br />
Then, there are two ways to connect to the USB device of the host with QEMU:<br />
<br />
# Identify the device and connect to it on any bus and address it is attached to on the host, the generic syntax is: {{bc|1=-device usb-host,bus=''controller_id''.0,vendorid=0x''vendor_id'',productid=0x''product_id''}}Applied to the device used in the example above, it becomes:{{bc|1=-device usb-ehci,id=ehci -device usb-host,bus=ehci.0,vendorid=0x'''0781''',productid=0x'''5406'''}}One can also add the {{ic|1=...,port=''port_number''}} setting to the previous option to specify in which physical port of the virtual controller the device should be attached, useful in the case one wants to add multiple usb devices to the VM. Another option is to use the new {{ic|hostdevice}} property of {{ic|usb-host}} which is available since QEMU 5.1.0, the syntax is: {{bc|1=-device qemu-xhci,id=xhci -device usb-host,hostdevice=/dev/bus/usb/003/007}}<br />
# Attach whatever is connected to a given USB bus and address, the syntax is:{{bc|1=-device usb-host,bus=''controller_id''.0,hostbus=''host_bus'',host_addr=''host_addr''}}Applied to the bus and the address in the example above, it becomes:{{bc|1=-device usb-ehci,id=ehci -device usb-host,bus=ehci.0,hostbus='''3''',hostaddr='''7'''}}<br />
See [https://www.qemu.org/docs/master/system/devices/usb.html QEMU/USB emulation] for more information.<br />
{{Note|If you encounter permission errors when running QEMU, see [[udev#About udev rules]] for information on how to set permissions of the device.}}<br />
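For scripting, the values can be derived from a line of {{ic|lsusb}} output. A sketch, reusing the SanDisk line from above and assuming the usual {{ic|Bus NNN Device NNN: ID vvvv:pppp ...}} field layout:<br />

```shell
# Sketch: derive usb-host arguments from one line of lsusb output
# (field positions assume the usual "Bus NNN Device NNN: ID vvvv:pppp" format).
line='Bus 003 Device 007: ID 0781:5406 SanDisk Corp. Cruzer Micro U3'
hostbus=$(echo "$line" | awk '{print $2}' | sed 's/^0*//')
hostaddr=$(echo "$line" | awk '{print $4}' | tr -d ':' | sed 's/^0*//')
ids=$(echo "$line" | awk '{print $6}')
vendor=${ids%%:*}
product=${ids##*:}
echo "-device usb-host,hostbus=$hostbus,hostaddr=$hostaddr"
echo "-device usb-host,bus=ehci.0,vendorid=0x$vendor,productid=0x$product"
```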
<br />
=== USB redirection with SPICE ===<br />
<br />
When using [[#SPICE]] it is possible to redirect USB devices from the client to the virtual machine without needing to specify them in the QEMU command. It is possible to configure the number of USB slots available for redirected devices (the number of slots will determine the maximum number of devices which can be redirected simultaneously). The main advantage of using SPICE for redirection, compared to the [[#Pass-through host USB device]] method described earlier, is the possibility of hot-swapping USB devices after the virtual machine has started, without needing to halt it in order to remove USB devices from the redirection or to add new ones. This method of USB redirection also allows us to redirect USB devices over the network, from the client to the server. In summary, it is the most flexible method of using USB devices in a QEMU virtual machine.<br />
<br />
We need to add one EHCI/UHCI controller per available USB redirection slot desired as well as one SPICE redirection channel per slot. For example, adding the following arguments to the QEMU command you use for starting the virtual machine in SPICE mode will start the virtual machine with three available USB slots for redirection:<br />
<br />
{{bc|1=<br />
-device ich9-usb-ehci1,id=usb \<br />
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,multifunction=on \<br />
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2 \<br />
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev1 -device usb-redir,chardev=usbredirchardev1,id=usbredirdev1 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev2 -device usb-redir,chardev=usbredirchardev2,id=usbredirdev2 \<br />
-chardev spicevmc,name=usbredir,id=usbredirchardev3 -device usb-redir,chardev=usbredirchardev3,id=usbredirdev3<br />
}}<br />
<br />
See [https://www.spice-space.org/usbredir.html SPICE/usbredir] for more information.<br />
<br />
Both {{ic|spicy}} from {{Pkg|spice-gtk}} (''Input > Select USB Devices for redirection'') and {{ic|remote-viewer}} from {{pkg|virt-viewer}} (''File > USB device selection'') support this feature. Please make sure that you have installed the necessary SPICE Guest Tools on the virtual machine for this functionality to work as expected (see the [[#SPICE]] section for more information).<br />
<br />
{{Warning|Keep in mind that when a USB device is redirected from the client, it will not be usable from the client operating system itself until the redirection is stopped. It is especially important to never redirect the input devices (namely mouse and keyboard), since it will then be difficult to access the SPICE client menus to revert the situation, because the client will not respond to the input devices after they are redirected to the virtual machine.}}<br />
<br />
==== Automatic USB forwarding with udev ====<br />
<br />
Normally, devices must be available at VM boot time to be forwarded. If a forwarded device is disconnected, it will no longer be forwarded.<br />
<br />
You can use [[udev rule]]s to automatically attach a device when it comes online. Create a {{ic|hostdev}} entry somewhere on disk. [[chown]] it to root to prevent other users from modifying it.<br />
<br />
{{hc|/usr/local/hostdev-mydevice.xml|2=<br />
<hostdev mode='subsystem' type='usb'><br />
<source><br />
<vendor id='0x03f0'/><br />
<product id='0x4217'/><br />
</source><br />
</hostdev><br />
}}<br />
<br />
Then create a ''udev'' rule which will attach/detach the device:<br />
<br />
{{hc|/usr/lib/udev/rules.d/90-libvirt-mydevice|2=<br />
ACTION=="add", \<br />
SUBSYSTEM=="usb", \<br />
ENV{ID_VENDOR_ID}=="03f0", \<br />
ENV{ID_MODEL_ID}=="4217", \<br />
RUN+="/usr/bin/virsh attach-device GUESTNAME /usr/local/hostdev-mydevice.xml"<br />
ACTION=="remove", \<br />
SUBSYSTEM=="usb", \<br />
ENV{ID_VENDOR_ID}=="03f0", \<br />
ENV{ID_MODEL_ID}=="4217", \<br />
RUN+="/usr/bin/virsh detach-device GUESTNAME /usr/local/hostdev-mydevice.xml"<br />
}}<br />
<br />
[https://rolandtapken.de/blog/2011-04/how-auto-hotplug-usb-devices-libvirt-vms-update-1 Source and further reading].<br />
<br />
=== Enabling KSM ===<br />
<br />
Kernel Samepage Merging (KSM) is a feature of the Linux kernel that allows applications to register with the kernel to have their memory pages merged with those of other processes that have also registered. The KSM mechanism allows guest virtual machines to share pages with each other. In an environment where many of the guest operating systems are similar, this can result in significant memory savings.<br />
<br />
{{Note|Although KSM may reduce memory usage, it may increase CPU usage. Also note some security issues may occur, see [[Wikipedia:Kernel same-page merging]].}}<br />
<br />
To enable KSM:<br />
<br />
# echo 1 > /sys/kernel/mm/ksm/run<br />
<br />
To make it permanent, use [[systemd#systemd-tmpfiles - temporary files|systemd's temporary files]]:<br />
<br />
{{hc|/etc/tmpfiles.d/ksm.conf|<br />
w /sys/kernel/mm/ksm/run - - - - 1<br />
}}<br />
<br />
If KSM is running, and there are pages to be merged (i.e. at least two similar VMs are running), then {{ic|/sys/kernel/mm/ksm/pages_shared}} should be non-zero. See https://docs.kernel.org/admin-guide/mm/ksm.html for more information.<br />
<br />
{{Tip|An easy way to see how well KSM is performing is to simply print the contents of all the files in that directory:<br />
<br />
$ grep -r . /sys/kernel/mm/ksm/<br />
<br />
}}<br />
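The same information can be gathered by a small script that degrades gracefully on kernels built without KSM (the sysfs paths are the standard KSM interface):<br />

```shell
# Sketch: report KSM state from sysfs; pages_shared stays 0 unless at
# least two similar guests are running.
ksm=/sys/kernel/mm/ksm
if [ -r "$ksm/run" ]; then
    status="run=$(cat "$ksm/run") pages_shared=$(cat "$ksm/pages_shared")"
else
    status="KSM not available in this kernel"
fi
echo "$status"
```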
<br />
=== Multi-monitor support ===<br />
<br />
The Linux QXL driver supports four heads (virtual screens) by default. This can be changed via the {{ic|1=qxl.heads=N}} kernel parameter.<br />
<br />
The default VGA memory size for QXL devices is 16M (VRAM size is 64M). This is not sufficient if you would like to enable two 1920x1200 monitors, since that requires 2 × 1920 × 4 (color depth) × 1200 = 17.6 MiB of VGA memory. This can be changed by replacing {{ic|-vga qxl}} with {{ic|<nowiki>-vga none -device qxl-vga,vgamem_mb=32</nowiki>}}. If you ever increase {{ic|vgamem_mb}} beyond 64M, then you also have to increase the {{ic|vram_size_mb}} option.<br />
<br />
=== Custom display resolution ===<br />
<br />
A custom display resolution can be set with {{ic|1=-device VGA,edid=on,xres=1280,yres=720}} (see [[wikipedia:Extended_Display_Identification_Data|EDID]] and [[wikipedia:Display_resolution|display resolution]]).<br />
<br />
=== Copy and paste ===<br />
<br />
One way to share the clipboard between the host and the guest is to enable the SPICE remote desktop protocol and access the client with a SPICE client.<br />
One needs to follow the steps described in [[#SPICE]]. A guest run this way will support copy and paste with the host.<br />
<br />
=== Windows-specific notes ===<br />
<br />
QEMU can run any version of Windows from Windows 95 through Windows 11.<br />
<br />
It is possible to run [[Windows PE]] in QEMU.<br />
<br />
==== Fast startup ====<br />
<br />
{{Note|An administrator account is required to change power settings.}}<br />
<br />
For Windows 8 (or later) guests it is better to disable "Turn on fast startup (recommended)" from the Power Options of the Control Panel as explained in the following [https://www.tenforums.com/tutorials/4189-turn-off-fast-startup-windows-10-a.html forum page], as it causes the guest to hang during every other boot.<br />
<br />
Fast Startup may also need to be disabled for changes to the {{ic|-smp}} option to be properly applied.<br />
<br />
==== Remote Desktop Protocol ====<br />
<br />
If you use an MS Windows guest, you might want to use RDP to connect to your guest VM. If you are using a VLAN or are not in the same network as the guest, use:<br />
<br />
$ qemu-system-x86_64 -nographic -nic user,hostfwd=tcp::5555-:3389<br />
<br />
Then connect with either {{Pkg|rdesktop}} or {{Pkg|freerdp}} to the guest. For example:<br />
<br />
$ xfreerdp -g 2048x1152 localhost:5555 -z -x lan<br />
<br />
=== Clone Linux system installed on physical equipment ===<br />
<br />
A Linux system installed on physical hardware can be cloned to run in a QEMU virtual machine. See [https://coffeebirthday.wordpress.com/2018/09/14/clone-linux-system-for-qemu-virtual-machine/ Clone Linux system from hardware for QEMU virtual machine].<br />
<br />
=== Chrooting into arm/arm64 environment from x86_64 ===<br />
<br />
Sometimes it is easier to work directly on a disk image instead of the real ARM based device. This can be achieved by mounting an SD card/storage containing the ''root'' partition and chrooting into it.<br />
<br />
Another use case for an ARM chroot is building ARM packages on an x86_64 machine. Here, the chroot environment can be created from an image tarball from [https://archlinuxarm.org Arch Linux ARM] - see [https://nerdstuff.org/posts/2020/2020-003_simplest_way_to_create_an_arm_chroot/] for a detailed description of this approach.<br />
<br />
Either way, from the chroot it should be possible to run ''pacman'' and install more packages, compile large libraries etc. Since the executables are for the ARM architecture, the translation to x86 needs to be performed by [[QEMU]].<br />
<br />
Install {{Pkg|qemu-user-static}} on the x86_64 machine/host, and {{Pkg|qemu-user-static-binfmt}} to register the qemu binaries to binfmt service.<br />
<br />
''qemu-user-static'' allows the execution of programs compiled for other architectures. This is similar to what is provided by {{Pkg|qemu-emulators-full}}, but the "static" variant is required for chroot. Examples:<br />
<br />
qemu-arm-static path_to_sdcard/usr/bin/ls<br />
qemu-aarch64-static path_to_sdcard/usr/bin/ls<br />
<br />
These two lines execute the {{ic|ls}} command compiled for 32-bit ARM and 64-bit ARM respectively. Note that this will not work without chrooting, because the executables will look for libraries not present in the host system.<br />
<br />
{{Pkg|qemu-user-static-binfmt}} registers binfmt handlers so that ARM executables are automatically run through {{ic|qemu-arm-static}} or {{ic|qemu-aarch64-static}}.<br />
<br />
Make sure that the ARM executable support is active:<br />
<br />
{{hc|$ ls /proc/sys/fs/binfmt_misc|<br />
qemu-aarch64 qemu-arm qemu-cris qemu-microblaze qemu-mipsel qemu-ppc64 qemu-riscv64 qemu-sh4 qemu-sparc qemu-sparc64 status<br />
qemu-alpha qemu-armeb qemu-m68k qemu-mips qemu-ppc qemu-ppc64abi32 qemu-s390x qemu-sh4eb qemu-sparc32plus register<br />
}}<br />
<br />
Each executable must be listed.<br />
<br />
If it is not active, [[restart]] {{ic|systemd-binfmt.service}}.<br />
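The check can be scripted; a sketch that reports whether the handlers needed for the chroot are registered and enabled ({{ic|/proc/sys/fs/binfmt_misc}} is the standard mount point):<br />

```shell
# Sketch: confirm the ARM binfmt handlers are registered and enabled.
summary=""
for arch in arm aarch64; do
    f="/proc/sys/fs/binfmt_misc/qemu-$arch"
    if [ -r "$f" ] && grep -q '^enabled' "$f"; then
        summary="$summary qemu-$arch:enabled"
    else
        summary="$summary qemu-$arch:missing"
    fi
done
echo "$summary"
```

If either handler reports missing, restarting {{ic|systemd-binfmt.service}} as described above should register it.<br />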
<br />
Mount the SD card to {{ic|/mnt/sdcard}} (the device name may be different).<br />
<br />
# mount --mkdir /dev/mmcblk0p2 /mnt/sdcard<br />
<br />
Mount boot partition if needed (again, use the suitable device name):<br />
<br />
# mount /dev/mmcblk0p1 /mnt/sdcard/boot<br />
<br />
Finally ''chroot'' into the SD card root as described in [[Change root#Using chroot]]:<br />
<br />
# chroot /mnt/sdcard /bin/bash<br />
<br />
Alternatively, you can use ''arch-chroot'' from {{Pkg|arch-install-scripts}}, as it will provide an easier way to get network support:<br />
<br />
# arch-chroot /mnt/sdcard /bin/bash<br />
<br />
You can also use [[systemd-nspawn]] to chroot into the ARM environment:<br />
<br />
# systemd-nspawn -D /mnt/sdcard -M myARMMachine --bind-ro=/etc/resolv.conf<br />
<br />
{{ic|1=--bind-ro=/etc/resolv.conf}} is optional and gives working DNS resolution inside the chroot.<br />
<br />
=== Not grabbing mouse input ===<br />
<br />
Tablet mode (see [[#Mouse integration]]) has the side effect that QEMU does not grab mouse input in its window:<br />
<br />
-usb -device usb-tablet<br />
<br />
It works with several {{ic|-vga}} backends, one of which is virtio.<br />
<br />
== Troubleshooting ==<br />
<br />
{{Merge|QEMU/Troubleshooting|This section is long enough to be split into a dedicated subpage.}}<br />
<br />
=== Mouse cursor is jittery or erratic ===<br />
<br />
If the cursor jumps around the screen uncontrollably, entering this on the terminal before starting QEMU might help:<br />
<br />
$ export SDL_VIDEO_X11_DGAMOUSE=0<br />
<br />
If this helps, you can add the line to your {{ic|~/.bashrc}} file.<br />
<br />
=== No visible cursor ===<br />
<br />
Add {{ic|1=-display default,show-cursor=on}} to QEMU's options to see a mouse cursor.<br />
<br />
If that still does not work, make sure you have set your display device appropriately, for example: {{ic|-vga qxl}}.<br />
<br />
Another option to try is {{ic|-usb -device usb-tablet}} as mentioned in [[#Mouse integration]]. This overrides the default PS/2 mouse emulation and synchronizes pointer location between host and guest as an added bonus.<br />
<br />
=== Two different mouse cursors are visible ===<br />
<br />
Apply the tip [[#Mouse integration]].<br />
<br />
=== Keyboard issues when using VNC ===<br />
<br />
When using VNC, you might experience keyboard problems described (in gory details) [https://www.berrange.com/posts/2010/07/04/more-than-you-or-i-ever-wanted-to-know-about-virtual-keyboard-handling/ here]. The solution is ''not'' to use the {{ic|-k}} option on QEMU, and to use {{ic|gvncviewer}} from {{Pkg|gtk-vnc}}. See also [https://www.mail-archive.com/libvir-list@redhat.com/msg13340.html this] message posted on libvirt's mailing list.<br />
<br />
=== Keyboard seems broken or the arrow keys do not work ===<br />
<br />
Should you find that some of your keys do not work or "press" the wrong key (in particular, the arrow keys), you likely need to specify your keyboard layout as an option. The keyboard layouts can be found in {{ic|/usr/share/qemu/keymaps/}}.<br />
<br />
$ qemu-system-x86_64 -k ''keymap'' ''disk_image''<br />
<br />
=== Could not read keymap file ===<br />
<br />
qemu-system-x86_64: -display vnc=0.0.0.0:0: could not read keymap file: 'en'<br />
<br />
is caused by an invalid ''keymap'' passed to the {{ic|-k}} argument. For example, {{ic|en}} is invalid, but {{ic|en-us}} is valid - see {{ic|/usr/share/qemu/keymaps/}}.<br />
<br />
=== Guest display stretches on window resize ===<br />
<br />
To restore default window size, press {{ic|Ctrl+Alt+u}}.<br />
<br />
=== ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy ===<br />
<br />
If an error message like this is printed when starting QEMU with {{ic|-enable-kvm}} option:<br />
<br />
ioctl(KVM_CREATE_VM) failed: 16 Device or resource busy<br />
failed to initialize KVM: Device or resource busy<br />
<br />
that means another [[hypervisor]] is currently running. Running several hypervisors in parallel is not recommended and generally not possible.<br />
<br />
=== libgfapi error message ===<br />
<br />
The error message displayed at startup:<br />
<br />
Failed to open module: libgfapi.so.0: cannot open shared object file: No such file or directory<br />
<br />
[[Install]] {{pkg|glusterfs}} or ignore the error message, as GlusterFS is an optional dependency.<br />
<br />
=== Kernel panic on LIVE-environments ===<br />
<br />
If you boot a live environment (or, for that matter, any system), you may encounter this:<br />
<br />
[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown block(0,0)<br />
<br />
or some other boot-hindering error (e.g. cannot unpack initramfs, cannot start service ''foo'').<br />
Try starting the VM with the {{ic|-m ''VALUE''}} switch and an appropriate amount of RAM; if the RAM is too low, you will probably encounter issues similar to the above.<br />
<br />
=== Windows 7 guest suffers low-quality sound ===<br />
<br />
Using the {{ic|hda}} audio driver for Windows 7 guest may result in low-quality sound. Changing the audio driver to {{ic|ac97}} by passing the {{ic|-soundhw ac97}} arguments to QEMU and installing the AC97 driver from [https://www.realtek.com/en/component/zoo/category/pc-audio-codecs-ac-97-audio-codecs-software Realtek AC'97 Audio Codecs] in the guest may solve the problem. See [https://bugzilla.redhat.com/show_bug.cgi?id=1176761#c16 Red Hat Bugzilla – Bug 1176761] for more information.<br />
<br />
=== Could not access KVM kernel module: Permission denied ===<br />
<br />
If you encounter the following error:<br />
<br />
libvirtError: internal error: process exited while connecting to monitor: Could not access KVM kernel module: Permission denied failed to initialize KVM: Permission denied<br />
<br />
Systemd 234 assigns a dynamic ID for the {{ic|kvm}} group (see {{Bug|54943}}). To avoid this error, you need to edit the file {{ic|/etc/libvirt/qemu.conf}} and change the line with {{ic|1=group = "78"}} to {{ic|1=group = "kvm"}}.<br />
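<br />
The relevant fragment then reads:<br />
<br />
{{hc|/etc/libvirt/qemu.conf|2=<br />
group = "kvm"<br />
}}<br />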
<br />
=== "System Thread Exception Not Handled" when booting a Windows VM ===<br />
<br />
Windows 8 or Windows 10 guests may raise a generic compatibility exception at boot, namely "System Thread Exception Not Handled", which tends to be caused by legacy drivers acting strangely on real machines. On KVM machines this issue can generally be solved by setting the CPU model to {{ic|core2duo}}.<br />
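<br />
With plain QEMU, the CPU model is selected with the {{ic|-cpu}} switch, for example (the disk image name is only an illustration):<br />
<br />
 $ qemu-system-x86_64 -enable-kvm -cpu core2duo -m 4096 windows10.qcow2<br />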
<br />
=== Certain Windows games/applications crashing/causing a bluescreen ===<br />
<br />
Occasionally, applications running in the VM may crash unexpectedly, whereas they would run normally on a physical machine. If, while running {{ic|dmesg -wH}} as root, you encounter an error mentioning {{ic|MSR}}, the reason for those crashes is that KVM injects a [[wikipedia:General protection fault|General protection fault]] (GPF) when the guest tries to access unsupported [[wikipedia:Model-specific register|Model-specific registers]] (MSRs) - this often results in guest applications/OS crashing. A number of those issues can be solved by passing the {{ic|1=ignore_msrs=1}} option to the KVM module, which will ignore unimplemented MSRs.<br />
<br />
{{hc|/etc/modprobe.d/kvm.conf|2=<br />
...<br />
options kvm ignore_msrs=1<br />
...<br />
}}<br />
<br />
Cases where adding this option might help:<br />
<br />
* GeForce Experience complaining about an unsupported CPU being present.<br />
* StarCraft 2 and L.A. Noire reliably blue-screening Windows 10 with {{ic|KMODE_EXCEPTION_NOT_HANDLED}}. The blue screen information does not identify a driver file in these cases.<br />
<br />
{{Warning|While this is normally safe and some applications might not work without this, silently ignoring unknown MSR accesses could potentially break other software within the VM or other VMs.}}<br />
<br />
=== Applications in the VM experience long delays or take a long time to start ===<br />
<br />
This may be caused by insufficient available entropy in the VM. Consider allowing the guest to access the hosts's entropy pool by adding a [https://wiki.qemu.org/Features/VirtIORNG VirtIO RNG device] to the VM, or by installing an entropy generating daemon such as [[Haveged]].<br />
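<br />
With plain QEMU, a VirtIO RNG device backed by the host's {{ic|/dev/urandom}} can be added like this (other options elided):<br />
<br />
 $ qemu-system-x86_64 ... -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0<br />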
<br />
Anecdotally, OpenSSH takes a while to start accepting connections under insufficient entropy, without the logs revealing why.<br />
<br />
=== High interrupt latency and microstuttering ===<br />
<br />
This problem manifests itself as small pauses (stutters) and is particularly noticeable in graphics-intensive applications, such as games.<br />
<br />
* One of the causes is CPU power saving features, which are controlled by [[CPU frequency scaling]]. Change this to {{ic|performance}} for all processor cores. <br />
* Another possible cause is PS/2 inputs. Switch from PS/2 to Virtio inputs, see [[PCI passthrough via OVMF#Passing keyboard/mouse via Evdev]].<br />
<br />
=== QXL video causes low resolution ===<br />
<br />
QEMU 4.1.0 introduced a regression where QXL video can fall back to low resolutions, when being displayed through spice. [https://bugs.launchpad.net/qemu/+bug/1843151] For example, when KMS starts, text resolution may become as low as 4x10 characters. When trying to increase GUI resolution, it may go to the lowest supported resolution.<br />
<br />
As a workaround, create your device in this form:<br />
<br />
-device qxl-vga,max_outputs=1...<br />
<br />
=== VM does not boot when using a Secure Boot enabled OVMF ===<br />
<br />
{{ic|/usr/share/edk2-ovmf/x64/OVMF_CODE.secboot.fd}} from {{Pkg|edk2-ovmf}} is built with [[Wikipedia:System Management Mode|SMM]] support. If S3 support is not disabled in the VM, then the VM might not boot at all.<br />
<br />
Add the {{ic|1=-global ICH9-LPC.disable_s3=1}} option to the ''qemu'' command.<br />
<br />
See {{Bug|59465}} and https://github.com/tianocore/edk2/blob/master/OvmfPkg/README for more details and the required options to use Secure Boot in QEMU.<br />
<br />
=== VM does not boot into Arch ISO ===<br />
<br />
When trying to boot the VM for the first time from an Arch ISO image, the boot process hangs. By adding {{ic|1=console=ttyS0}} to the kernel boot options (press {{ic|e}} in the boot menu), you will get more boot messages and the following error:<br />
<br />
:: Mounting '/dev/disk/by-label/ARCH_202204' to '/run/archiso/bootmnt'<br />
Waiting 30 seconds for device /dev/disk/by-label/ARCH_202204 ...<br />
ERROR: '/dev/disk/by-label/ARCH_202204' device did not show up after 30 seconds...<br />
Falling back to interactive prompt<br />
You can try to fix the problem manually, log out when you are finished<br />
sh: can't access tty; job control turned off<br />
<br />
The error message does not give a good clue as to what the real issue is. The problem is with the default 128MB of RAM that QEMU allocates to the VM. Increasing the limit to 1024MB with {{ic|-m 1024}} solves the issue and lets the system boot. You can continue installing Arch Linux as usual after that. Once the installation is complete, the memory allocation for the VM can be decreased. The need for 1024MB is due to RAM disk requirements and size of the installation media. See [https://lists.archlinux.org/archives/list/arch-releng@lists.archlinux.org/message/D5HSGOFTPGYI6IZUEB3ZNAX4D3F3ID37/ this message on the arch-releng mailing list] and [https://bbs.archlinux.org/viewtopic.php?id=204023 this forum thread].<br />
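<br />
For example, to boot the ISO with enough memory (the image file name will vary with the release):<br />
<br />
 $ qemu-system-x86_64 -enable-kvm -m 1024 -cdrom archlinux-x86_64.iso -boot d<br />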
<br />
=== Guest CPU interrupts are not firing ===<br />
<br />
If you are writing your own operating system by following the [https://wiki.osdev.org/ OSDev wiki], or are simply stepping through the guest architecture assembly code via QEMU's {{ic|gdb}} interface (the {{ic|-s}} flag), it is useful to know that many emulators, QEMU included, usually implement only some CPU interrupts, leaving many hardware interrupts unimplemented. One way to know if your code is firing an interrupt is to use:<br />
<br />
-d int<br />
<br />
to enable showing interrupts/exceptions on stdout.<br />
<br />
To see what other guest debugging features QEMU has to offer, see:<br />
<br />
qemu-system-x86_64 -d help<br />
<br />
or replace {{ic|x86_64}} with your chosen guest architecture.<br />
<br />
== See also ==<br />
<br />
* [https://qemu.org Official QEMU website]<br />
* [https://www.linux-kvm.org Official KVM website]<br />
* [https://qemu.weilnetz.de/doc/6.0/ QEMU Emulator User Documentation]<br />
* [[Wikibooks:QEMU|QEMU Wikibook]]<br />
* [http://alien.slackbook.org/dokuwiki/doku.php?id=slackware:qemu Hardware virtualization with QEMU] by AlienBOB (last updated in 2008)<br />
* [http://blog.falconindy.com/articles/build-a-virtual-army.html Building a Virtual Army] by Falconindy<br />
* [https://git.qemu.org/?p=qemu.git;a=tree;f=docs Latest docs]<br />
* [https://qemu.weilnetz.de/ QEMU on Windows]<br />
* [[wikipedia:Qemu|Wikipedia]]<br />
* [[debian:QEMU|Debian Wiki - QEMU]]<br />
* [https://people.gnome.org/~markmc/qemu-networking.html QEMU Networking on gnome.org]{{Dead link|2022|09|22|status=404}}<br />
* [http://bsdwiki.reedmedia.net/wiki/networking_qemu_virtual_bsd_systems.html Networking QEMU Virtual BSD Systems]<br />
* [https://www.gnu.org/software/hurd/hurd/running/qemu.html QEMU on gnu.org]<br />
* [https://wiki.freebsd.org/qemu QEMU on FreeBSD as host]<br />
* [https://wiki.mikejung.biz/KVM_/_Xen KVM/QEMU Virtio Tuning and SSD VM Optimization Guide]{{Dead link|2022|09|22|status=404}}<br />
* [https://doc.opensuse.org/documentation/leap/virtualization/html/book-virt/part-virt-qemu.html Managing Virtual Machines with QEMU - openSUSE documentation]<br />
* [https://www.ibm.com/support/knowledgecenter/en/linuxonibm/liaat/liaatkvm.htm KVM on IBM Knowledge Center]</div>Recolichttps://wiki.archlinux.org/index.php?title=TigerVNC&diff=739618TigerVNC2022-07-31T15:05:04Z<p>Recolic: Add new entry for trouble-shooting tip: If you just install 'lxqt' and 'vncserver' on fresh archlinux, it will not work!</p>
<hr />
<div>[[Category:Remote desktop]]<br />
[[Category:Servers]]<br />
[[de:VNC]]<br />
[[ja:TigerVNC]]<br />
[[zh-hans:TigerVNC]]<br />
{{Related articles start}}<br />
{{Related|x11vnc}}<br />
{{Related|TurboVNC}}<br />
{{Related articles end}}<br />
<br />
[https://tigervnc.org/ TigerVNC] is an implementation of the [[Wikipedia:Virtual Network Computing|Virtual Network Computing]] (VNC) protocol. This article focuses on the server functionality.<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|tigervnc}} package.<br />
<br />
== Running vncserver for virtual (headless) sessions ==<br />
<br />
=== Initial setup ===<br />
<br />
{{Note|Linux systems can have as many VNC servers as memory allows, all of which will be running in parallel to each other.}}<br />
<br />
For a quick start, see the steps below. Users are encouraged to read {{man|8|vncserver}} for the complete list of configuration options.<br />
<br />
# Create a password using {{ic|vncpasswd}} which will store the hashed password in {{ic|~/.vnc/passwd}}.<br />
# Edit {{ic|/etc/tigervnc/vncserver.users}} to define user mappings. Each user defined in this file will have a corresponding port on which its session will run. The number in the file corresponds to a TCP port. By default, :1 is TCP port 5901 (5900+1). If another parallel server is needed, a second instance can then run on the next highest free port, i.e. 5902 (5900+2).<br />
# Create {{ic|~/.vnc/config}} and at a minimum, define the type of session desired with a line like {{ic|1=session=foo}} where foo corresponds to whichever [[desktop environment]] is to run. One can see which desktop environments are available on the system by seeing their corresponding ''.desktop'' files within {{ic|/usr/share/xsessions/}}. For example:<br />
<br />
{{hc|~/.vnc/config|2=<br />
session=lxqt<br />
geometry=1920x1080<br />
localhost<br />
alwaysshared<br />
}}<br />
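<br />
The user mapping file from step 2 might then look as follows, assuming user ''foo'' should get display {{ic|:1}} (TCP port 5901):<br />
<br />
{{hc|/etc/tigervnc/vncserver.users|2=<br />
:1=foo<br />
}}<br />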
<br />
=== Starting and stopping tigervnc ===<br />
<br />
[[Start]] an instance of the {{ic|vncserver@.service}} template and optionally [[enable]] it to run at boot time/shutdown. Note that the ''instance identifier'' in this case is the display number (e.g. instance {{ic|vncserver@:1.service}} for display number {{ic|:1}}).<br />
<br />
{{Note|Direct calls to {{ic|/usr/bin/vncserver}} are not supported as they will not establish a proper session scope. The ''systemd'' service is the only supported method of using TigerVNC. See: [https://github.com/TigerVNC/tigervnc/issues/1096 Issue #1096].}}<br />
<br />
== Expose the local display directly ==<br />
<br />
TigerVNC comes with {{ic|libvnc.so}}, which can be loaded directly during X initialization and provides better performance.<br />
Create the following file and restart X:<br />
{{hc|/etc/X11/xorg.conf.d/10-vnc.conf|<br />
Section "Module"<br />
Load "vnc"<br />
EndSection<br />
<br />
Section "Screen"<br />
Identifier "Screen0"<br />
Option "UserPasswdVerifier" "VncAuth"<br />
Option "PasswordFile" "/root/.vnc/passwd"<br />
EndSection}}<br />
<br />
== Running x0vncserver to directly control the local display ==<br />
<br />
{{Pkg|tigervnc}} also provides {{man|1|x0vncserver}} which allows direct control over a physical X session. After defining a session password using the ''vncpasswd'' tool, invoke the server like so:<br />
<br />
$ x0vncserver -rfbauth ~/.vnc/passwd<br />
<br />
{{Note|<br />
* [[x11vnc]] is an alternative VNC server which can also provide direct control of the current X session.<br />
* {{ic|x0vncserver}} does not currently support clipboard sharing between the client and the server (even with the help of {{ic|autocutsel}}). See: [https://github.com/TigerVNC/tigervnc/issues/529 Issue #529].}}<br />
<br />
=== Starting x0vncserver via xprofile ===<br />
<br />
A simple way to start ''x0vncserver'' is adding a line in one of the [[xprofile]] files such as:<br />
<br />
{{hc|~/.xprofile|<br />
...<br />
x0vncserver -rfbauth ~/.vnc/passwd &<br />
}}<br />
<br />
=== Starting and stopping x0vncserver via systemd ===<br />
<br />
In order to have a VNC Server running ''x0vncserver'', which is the easiest way for most users to quickly have remote access to the current desktop, create a ''systemd'' unit as follows replacing the user and the options with the desired ones:<br />
<br />
{{hc|~/.config/systemd/user/x0vncserver.service|2=<br />
[Unit]<br />
Description=Remote desktop service (VNC)<br />
<br />
[Service]<br />
Type=simple<br />
# wait for Xorg started by ${USER}<br />
ExecStartPre=/bin/sh -c 'while ! pgrep -U "$USER" Xorg; do sleep 2; done'<br />
ExecStart=/usr/bin/x0vncserver -rfbauth %h/.vnc/passwd<br />
# or login with your username & password<br />
#ExecStart=/usr/bin/x0vncserver -PAMService=login -PlainUsers=${USER} -SecurityTypes=TLSPlain<br />
<br />
[Install]<br />
WantedBy=default.target}}<br />
<br />
[[Start/enable]] the {{ic|x0vncserver.service}} [[user unit]].<br />
<br />
== Running Xvnc with XDMCP for on demand sessions ==<br />
<br />
One can use ''systemd'' socket activation in combination with [[XDMCP]] to automatically spawn VNC servers for each user who attempts to login, so there is no need to set up one server/port per user. This setup uses the display manager to authenticate users and login, so there is no need for VNC passwords. The downside is that users cannot leave a session running on the server and reconnect to it later.<br />
<br />
To get this running, first set up [[XDMCP]] and make sure the display manager is running.<br />
Then create:<br />
{{hc|/etc/systemd/system/xvnc.socket|2=<br />
[Unit]<br />
Description=XVNC Server<br />
<br />
[Socket]<br />
ListenStream=5900<br />
Accept=yes<br />
<br />
[Install]<br />
WantedBy=sockets.target}}<br />
{{hc|/etc/systemd/system/xvnc@.service|2=<br />
[Unit]<br />
Description=XVNC Per-Connection Daemon<br />
<br />
[Service]<br />
ExecStart=-/usr/bin/Xvnc -inetd -query localhost -geometry 1920x1080 -once -SecurityTypes=None<br />
User=nobody<br />
StandardInput=socket<br />
StandardError=syslog}}<br />
Use systemctl to [[start]] and [[enable]] {{ic|xvnc.socket}}. Now any number of users can get unique desktops by connecting to port 5900.<br />
<br />
If the VNC server is exposed to the internet, add the {{ic|-localhost}} option to {{ic|Xvnc}} in {{ic|xvnc@.service}} (note that {{ic|-query localhost}} and {{ic|-localhost}} are different switches) and follow [[#Accessing vncserver via SSH tunnels]]. Since we only select a user after connecting, the VNC server runs as user ''nobody'' and uses {{ic|Xvnc}} directly instead of the {{ic|vncserver}} script, so any options in {{ic|~/.vnc}} are ignored. Optionally, [[autostart]] ''vncconfig'' so that the clipboard works (''vncconfig'' exits immediately in non-VNC sessions). One way is to create:<br />
{{hc|/etc/X11/xinit/xinitrc.d/99-vncconfig.sh|<br />
#!/bin/sh<br />
vncconfig -nowin &}}<br />
<br />
== Connecting to vncserver ==<br />
<br />
{{Warning|TigerVNC's default security method is not secure: it lacks identity verification and will not prevent a man-in-the-middle attack during connection setup. Make sure you understand the security settings of your server and do not connect insecurely to a vncserver outside of a trusted LAN.}}<br />
<br />
{{Note|By default, TigerVNC uses the ''TLSVnc'' authentication/encryption method unless specifically instructed via the {{ic|SecurityTypes}} parameter. With ''TLSVnc'', there is standard VNC authentication and traffic is encrypted with GNUTLS but the identity of the server is not verified. TigerVNC supports alternative security schemes such as ''X509Vnc'' that combines standard VNC authentication with GNUTLS encryption and server identification, this is the recommended mode for a secure connection.<br />
<br />
When {{ic|SecurityTypes}} on the server is set to a non-encrypted option as high-priority (such as ''None'', ''VncAuth'', ''Plain'', ''TLSNone'', ''TLSPlain'', ''X509None'', ''X509Plain''); which is ill-advised, then it is not possible to use encryption. When running ''vncviewer'', it is safer to explicitly set {{ic|SecurityTypes}} and not accept any unencrypted traffic. Any other mode is to be used only when [[#Accessing vncserver via SSH tunnels]]. }}<br />
<br />
Any number of clients can connect to a vncserver. A simple example is given below where vncserver is running on 10.1.10.2 port 5901, or :1 in shorthand notation:<br />
$ vncviewer 10.1.10.2:1<br />
<br />
=== Passwordless authentication ===<br />
<br />
The {{ic|-passwd}} switch allows one to define the location of the server's {{ic|~/.vnc/passwd}} file. It is expected that the user has access to this file on the server through [[SSH]] or through physical access. In either case, place that file on the client's file system in a safe location, i.e. one that has read access ONLY to the expected user.<br />
<br />
$ vncviewer -passwd ''/path/to/server-passwd-file''<br />
<br />
The password can also be provided directly.<br />
<br />
{{Note|The password below is not secured; anyone who can run {{ic|ps}} on the machine will see it.}}<br />
<br />
$ vncviewer -passwd <(echo MYPASSWORD | vncpasswd -f)<br />
<br />
=== Example GUI-based clients ===<br />
<br />
* {{Pkg|gtk-vnc}}<br />
* {{Pkg|krdc}}<br />
* {{Pkg|vinagre}}<br />
* [[remmina]]<br />
* {{Pkg|virt-viewer}}<br />
* {{AUR|vncviewer-jar}}<br />
<br />
TigerVNC's vncviewer also has a simple GUI when run without any parameters:<br />
$ vncviewer<br />
<br />
== Accessing vncserver via SSH tunnels ==<br />
<br />
For servers offering SSH connection, an advantage of this method is that it is not necessary to open any other port than the already opened SSH port to the outside, since the VNC traffic is tunneled through the SSH port.<br />
<br />
=== On the server ===<br />
<br />
On the server side, ''vncserver'' or ''x0vncserver'' must be run.<br />
<br />
When running either one of these, it is recommended to use the {{ic|localhost}} option in {{ic|~/.vnc/config}} or the {{ic|-localhost}} switch (for ''x0vncserver'') since it allows connections from the localhost only and by analogy, only from users ssh'ed and authenticated on the box. For example:<br />
<br />
{{hc|~/.vnc/config|2=<br />
session=lxqt<br />
geometry=1920x1080<br />
localhost<br />
alwaysshared}}<br />
<br />
Make sure to [[Start]] or [[Restart]] the {{ic|vncserver@.service}}, for example (see also [[#Initial setup]]):<br />
# systemctl start vncserver@:1<br />
<br />
or for ''x0vncserver'':<br />
$ x0vncserver '''-localhost''' -SecurityTypes none<br />
<br />
=== On the client ===<br />
<br />
The VNC server has been setup on the remote machine to only accept local connections.<br />
Now, the client must open a secure shell with the remote machine (10.1.10.2 in this example) and create a tunnel from the client port, for instance 9901, to the remote server 5901 port. For more details on this feature, see [[OpenSSH#Forwarding other ports]] and {{man|1|ssh}}.<br />
<br />
$ ssh 10.1.10.2 -L 9901:localhost:5901<br />
<br />
Once connected via SSH, leave this shell window open since it is acting as the secured tunnel with the server. Alternatively, directly run SSH in the background using the {{ic|-f}} option. On the client side, to connect via this encrypted tunnel, point the ''vncviewer'' to the forwarded client port on the localhost.<br />
<br />
$ vncviewer localhost:9901<br />
<br />
What happens in practice is that the vncviewer connects locally to port 9901 which is tunneled to the server's localhost port 5901. The connection is established to the right port within the secure shell.<br />
<br />
{{Tip|It is possible, with a one-liner, to keep the port forwarding active during the connection and close it right after:<br />
{{bc|$ ssh -fL 9901:localhost:5901 10.1.10.2 sleep 10; vncviewer localhost:9901}}<br />
What it does is that the {{ic|-f}} switch will make ssh go in the background, it will still be alive executing {{ic|sleep 10}}. vncviewer is then executed and ssh remains open in the background as long as vncviewer makes use of the tunnel. ssh will close once the tunnel is dropped which is the wanted behavior.<br />
<br />
Alternatively, vncviewer's {{ic|-via}} switch provides a shortcut for the above command:<br />
{{bc|$ vncviewer -via 10.1.10.2 localhost::5901}}<br />
(Notice the double colon – vncviewer's syntax is {{ic|[host]:[display#]}} or {{ic|[host]::[port]}}.)<br />
}}<br />
<br />
=== Connecting to a vncserver from Android devices over SSH ===<br />
<br />
To connect to a VNC server over SSH using an Android device as a client, consider having the following setup:<br />
# SSH running on the server<br />
# vncserver running on server (with {{ic|-localhost}} flag for security)<br />
# SSH client on the Android device: ''ConnectBot'' is a popular choice and will be used in this guide as an example<br />
# VNC client on the Android device: ''androidVNC'' used here<br />
<br />
In ''ConnectBot'', connect to the desired machine. Tap the options key, select ''Port Forwards'' and add a port:<br />
Type: Local<br />
Source port: 5901<br />
Destination: 127.0.0.1:5901<br />
<br />
In ''androidVNC'' connect to the VNC port, this is the local address following the SSH connection:<br />
Password: the vncserver password<br />
Address: 127.0.0.1<br />
Port: 5901<br />
<br />
== Tips and tricks ==<br />
<br />
=== Connecting to an OSX system ===<br />
<br />
See https://help.ubuntu.com/community/AppleRemoteDesktop. Tested with Remmina.<br />
<br />
=== Recommended security settings ===<br />
<br />
If not [[#Accessing vncserver via SSH tunnels]] where the identification and the encryption are handled via SSH, it is recommended to use ''X509Vnc'', as ''TLSVnc'' lacks identity verification.<br />
<br />
$ vncserver -x509key ''/path/to/key.pem'' -x509cert ''/path/to/cert.pem'' -SecurityTypes X509Vnc :1<br />
<br />
Issuing x509 certificates is beyond the scope of this guide. However, [[wikipedia:Let's Encrypt|Let's Encrypt]] provides an easy way to do so. Alternatively, one can issue certificates using [[OpenSSL]], share the public key with the client and specify it with the {{ic|-X509CA}} parameter. An example is given below where the server is running on 10.1.10.2:<br />
$ vncviewer 10.1.10.2 -X509CA ''/path/to/cert.pem''<br />
<br />
=== Toggling fullscreen ===<br />
<br />
This can be done through the VNC client's menu. By default, the client's menu key is {{ic|F8}}.<br />
<br />
=== Workaround for mouse back and forward buttons not working ===<br />
<br />
The VNC protocol currently only uses 7 mouse buttons (left, middle, right, scroll up, scroll down, scroll left, scroll right) which means if your mouse has a back and a forward button these are not usable and input will be ignored.<br />
<br />
[https://www.bedroomlan.org/projects/evrouter/ evrouter] can be used to work around this limitation by sending keyboard key presses when clicking the mouse back/forward buttons. Optionally xte found in {{Pkg|xautomation}} and {{Pkg|xbindkeys}} can be used on the server to map the keyboard key presses back to mouse button clicks if needed.<br />
<br />
==== Substituting mouse back/forward buttons with keyboard keys XF86Back/XF86Forward ====<br />
<br />
This method is simple and suitable if you only need a way to navigate backward/forward while using web browsers or file browsers for example.<br />
<br />
Install {{AUR|evrouter}} and {{Pkg|xautomation}} on the client. Configure evrouter, see [[Mouse buttons#evrouter]] and evrouter man pages for instructions and tips on how to find the correct device name, window name, button names etc. Example config:<br />
{{hc|~/.evrouterrc|Window "OtherComputer:0 - TigerVNC": # Window title used as filter<br />
<br />
# Using Shell to avoid repeating key presses (see evrouter manual)<br />
"USB mouse" "/dev/input/by-id/usb-Mouse-name-event-mouse" none key/275 "Shell/xte 'key XF86Back'"<br />
"USB mouse" "/dev/input/by-id/usb-Mouse-name-event-mouse" none key/276 "Shell/xte 'key XF86Forward'"<br />
<br />
# Use XKey below instead if repeating keys is desired (see evrouter manual)<br />
#"Logitech Gaming Mouse G400" "/dev/input/by-id/usb-Logitech_Gaming_Mouse_G400-event-mouse" none key/275 "XKey/XF86Back"<br />
#"Logitech Gaming Mouse G400" "/dev/input/by-id/usb-Logitech_Gaming_Mouse_G400-event-mouse" none key/276 "XKey/XF86Forward"}}<br />
<br />
Start evrouter on the client. With above configuration keyboard key XF86Back is sent to the VNC server when clicking the back button on the mouse, and XF86Forward is sent when clicking the forward button.<br />
<br />
==== Mapping the keyboard key presses back to mouse button clicks on the server ====<br />
<br />
If needed it is possible to map the keyboard keys back to mouse button clicks on the server. In this case it might be a good idea to use keyboard keys which are never on the client or server. In the example below keyboard keys XF86Launch8/XF86Launch9 are used as mouse buttons 8/9.<br />
<br />
Evrouter configuration on the client:<br />
{{hc|~/.evrouterrc|Window "OtherComputer:0 - TigerVNC": # Window title<br />
<br />
# Using Shell to avoid repeating key presses (see evrouter manual)<br />
"USB mouse" "/dev/input/by-id/usb-Mouse-name-event-mouse" none key/275 "Shell/xte 'key XF86Launch8'"<br />
"USB mouse" "/dev/input/by-id/usb-Mouse-name-event-mouse" none key/276 "Shell/xte 'key XF86Launch9'"}}<br />
<br />
Install {{Pkg|xautomation}} and {{Pkg|xbindkeys}} on the server. Configure xbindkeys to map keyboard keys XF86Launch8/XF86Launch9 to mouse buttons 8/9 with xte.<br />
{{hc|~/.xbindkeysrc|<br />
"xte 'mouseclick 8'"<br />
XF86Launch8<br />
<br />
"xte 'mouseclick 9'"<br />
XF86Launch9<br />
}}<br />
Start xbindkeys {{ic|$ xbindkeys -f ~/.xbindkeysrc}}. The server will now map XF86Launch8/XF86Launch9 to mouse buttons 8/9.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Terminals in vncserver start in / (root dir) ===<br />
<br />
This is a known issue introduced upstream. See: https://github.com/TigerVNC/tigervnc/issues/1108<br />
<br />
=== Unable to type '<' character ===<br />
<br />
If pressing {{ic|<}} on a remote client emits the {{ic|>}} character, try remapping the incoming key [https://insaner.com/blog/2013/05.html#20130422063137]{{Dead link|2020|04|03|status=404}}:<br />
<br />
$ x0vncserver -RemapKeys="0x3c->0x2c"<br />
<br />
=== Black rectangle instead of window ===<br />
<br />
Most probably this is due to the application strictly requiring the Composite Xorg extension, for example WebKit-based applications such as midori or psi-plus.<br />
<br />
In this case, restart vncserver with something like the following:<br />
<br />
$ vncserver -geometry ... -depth 24 :1 +extension Composite<br />
<br />
It looks like the Composite extension in VNC only works with 24-bit color depth.<br />
<br />
=== Empty black window with mouse cursor ===<br />
<br />
Verify that the user is not logged into a physical X session, unless this option was configured with {{ic|x0vncserver}}. Multiple X sessions for a single user are not supported, see https://github.com/TigerVNC/tigervnc/issues/684#issuecomment-494385395.<br />
<br />
Conversely, trying to log into a local X session while a VNC server service is running for that user will likely not work, and you may get stuck on a splash screen when using a desktop environment.<br />
<br />
=== No mouse cursor ===<br />
<br />
If no mouse cursor is visible when using ''x0vncserver'', start ''vncviewer'' as follows:<br />
<br />
$ vncviewer DotWhenNoCursor=1 ''server''<br />
<br />
Alternatively, put {{ic|1=DotWhenNoCursor=1}} in the TigerVNC configuration file, which is at {{ic|~/.vnc/default.tigervnc}} by default.<br />
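<br />
For example:<br />
<br />
{{hc|~/.vnc/default.tigervnc|2=<br />
DotWhenNoCursor=1<br />
}}<br />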
<br />
=== Copying clipboard content from the remote machine ===<br />
<br />
If copying from the remote machine to the local machine does not work, run {{ic|autocutsel}} on the server, as mentioned in [https://bbs.archlinux.org/viewtopic.php?id=101243]:<br />
<br />
$ autocutsel -fork<br />
<br />
Now press F8 to display the VNC menu popup, and select {{ic|Clipboard: local -> remote}} option.<br />
<br />
=== "Authentication is required to create a color managed device" dialog when launching GNOME 3 ===<br />
<br />
A workaround is to create a "vnc" group and add the gdm user and any other users using vnc to that group.<br />
Create {{ic|/etc/polkit-1/rules.d/gnome-vnc.rules}} with the following content[https://github.com/TurboVNC/turbovnc/issues/47]:<br />
<br />
polkit.addRule(function(action, subject) {<br />
if ((action.id == "org.freedesktop.color-manager.create-device" ||<br />
action.id == "org.freedesktop.color-manager.create-profile" ||<br />
action.id == "org.freedesktop.color-manager.delete-device" ||<br />
action.id == "org.freedesktop.color-manager.delete-profile" ||<br />
action.id == "org.freedesktop.color-manager.modify-device" ||<br />
action.id == "org.freedesktop.color-manager.modify-profile") &&<br />
subject.isInGroup("vnc")) {<br />
return polkit.Result.YES;<br />
}<br />
});<br />
<br />
=== No window decoration / borders / titlebars / cannot move windows around ===<br />
<br />
Start a window manager to fix an empty [[xterm]] frame. For example, on [[Xfce]], run {{ic|xfwm4 &}}.<br />
<br />
=== systemd service unit run as user ===<br />
<br />
Create the following template:<br />
<br />
{{hc|/usr/lib/systemd/system/tigervnc@.service|2=<br />
[Unit]<br />
Description=Remote desktop service (VNC)<br />
After=syslog.target network.target<br />
<br />
[Service]<br />
Type=simple<br />
ExecStart=/sbin/runuser -l USERNAME -c "/usr/bin/vncserver %i"<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
}}<br />
<br />
[[Start/enable]] {{ic|tigervnc@:9}} to run the template instance on display 9.<br />
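<br />
For example, to start the instance immediately and also at boot:<br />
<br />
 # systemctl enable --now tigervnc@:9.service<br />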
<br />
=== Desktop environment displays only boxes instead of fonts ===<br />
<br />
Some desktop environments might be missing the fonts necessary to display ASCII characters. Install {{pkg|ttf-dejavu}}.<br />
<br />
== See also ==<br />
<br />
* https://github.com/TigerVNC/tigervnc</div>Recolichttps://wiki.archlinux.org/index.php?title=Dm-crypt/Encrypting_an_entire_system&diff=696074Dm-crypt/Encrypting an entire system2021-09-16T15:11:46Z<p>Recolic: /* Plain dm-crypt */ Error fix: The format of (EFI) boot partition should be FAT32, instead of ext4.</p>
<hr />
<div>{{Lowercase title}}<br />
[[Category:Data-at-rest encryption]]<br />
[[Category:Installation process]]<br />
[[de:Systemverschlüsselung mit dm-crypt]]<br />
[[es:Dm-crypt (Español)/Encrypting an entire system]]<br />
[[ja:Dm-crypt/システム全体の暗号化]]<br />
[[pl:Dm-crypt (Polski)/Encrypting an entire system]]<br />
[[pt:Dm-crypt (Português)/Encrypting an entire system]]<br />
The following are examples of common scenarios of full system encryption with ''dm-crypt''. They explain all the adaptations that need to be done to the normal [[Installation guide|installation procedure]]. All the necessary tools are on the [https://archlinux.org/download/ installation image].<br />
<br />
If you want to encrypt an existing unencrypted file system, see [[dm-crypt/Device encryption#Encrypt an existing unencrypted file system]].<br />
<br />
== Overview ==<br />
<br />
Securing a root filesystem is where ''dm-crypt'' excels, feature and performance-wise. Unlike selectively encrypting non-root filesystems, an encrypted root filesystem can conceal information such as which programs are installed, the usernames of all user accounts, and common data-leakage vectors such as [[mlocate]] and {{ic|/var/log/}}. Furthermore, an encrypted root filesystem makes tampering with the system far more difficult, as everything except the [[boot loader]] and (usually) the kernel is encrypted.<br />
<br />
All scenarios illustrated in the following share these advantages, other pros and cons differentiating them are summarized below:<br />
<br />
{| class="wikitable"<br />
! Scenarios<br />
! Advantages<br />
! Disadvantages<br />
|----------------------------------------------------------<br />
| [[#LUKS on a partition]]<br />
shows a basic and straightforward set-up for a fully LUKS encrypted root.<br />
|<br />
* Simple partitioning and setup<br />
* On a GPT partitioned disk, [[systemd#GPT partition automounting|systemd can auto-mount]] the root partition.<br />
|<br />
* Inflexible; disk-space to be encrypted has to be pre-allocated<br />
|----------------------------------------------------------<br />
| [[#LVM on LUKS]]<br />
achieves partitioning flexibility by using LVM inside a single LUKS encrypted partition.<br />
|<br />
* Simple partitioning with knowledge of LVM<br />
* Only one key required to unlock all volumes (e.g. easy resume-from-disk setup)<br />
* Volume layout not transparent when locked<br />
* Easiest method to allow [[dm-crypt/Swap encryption#With suspend-to-disk support|suspension to disk]]<br />
|<br />
* LVM adds an additional mapping layer and hook<br />
* Less useful, if a singular volume should receive a separate key<br />
|----------------------------------------------------------<br />
| [[#LUKS on LVM]]<br />
uses dm-crypt only after the LVM is set up.<br />
|<br />
* LVM can be used to have encrypted volumes span multiple disks<br />
* Easy mix of un-/encrypted volume groups<br />
|<br />
* Complex; changing volumes requires changing encryption mappers too<br />
* Volumes require individual keys<br />
* LVM layout is transparent when locked<br />
* Slower boot time; each encrypted LV must be unlocked separately<br />
|----------------------------------------------------------<br />
| [[#LUKS on software RAID]]<br />
uses dm-crypt only after RAID is set up.<br />
|<br />
* Analogous to LUKS on LVM<br />
|<br />
* Analogous to LUKS on LVM<br />
|----------------------------------------------------------<br />
| [[#Plain dm-crypt]]<br />
uses dm-crypt plain mode, i.e. without a LUKS header and its options for multiple keys. <br>This scenario also employs USB devices for {{ic|/boot}} and key storage, which may be applied to the other scenarios.<br />
|<br />
* Data resilience for cases where a LUKS header may be damaged<br />
* Allows [[Wikipedia:Disk encryption#Full disk encryption|Full Disk Encryption]]<br />
* Helps addressing [[dm-crypt/Specialties#Discard/TRIM support for solid state drives (SSD)|problems]] with SSDs<br />
|<br />
* High care to all encryption parameters is required<br />
* Single encryption key and no option to change it<br />
|----------------------------------------------------------<br />
| [[#Encrypted boot partition (GRUB)]]<br />
shows how to encrypt the boot partition using the GRUB bootloader. <br> This scenario also employs an EFI system partition, which may be applied to the other scenarios.<br />
|<br />
* Same advantages as the scenario the installation is based on (LVM on LUKS for this particular example)<br />
* Less data is left unencrypted, i.e. the boot loader and the EFI system partition, if present<br />
|<br />
* Same disadvantages as the scenario the installation is based on (LVM on LUKS for this particular example)<br />
* More complicated configuration<br />
* Not supported by other boot loaders<br />
|----------------------------------------------------------<br />
| [[#Btrfs subvolumes with swap]]<br />
shows how to encrypt a [[Btrfs]] system, including the {{ic|/boot}} directory, also adding a partition for swap, on UEFI hardware.<br />
|<br />
* Similar advantages as [[#Encrypted boot partition (GRUB)]]<br />
* Availability of Btrfs' features<br />
|<br />
* Similar disadvantages as [[#Encrypted boot partition (GRUB)]]<br />
|----------------------------------------------------------<br />
| [[#Root on ZFS]]<br />
|<br />
|<br />
|}<br />
<br />
While all above scenarios provide much greater protection from outside threats than encrypted secondary filesystems, they also share a common disadvantage: any user in possession of the encryption key is able to decrypt the entire drive, and therefore can access other users' data. If that is of concern, it is possible to use a combination of blockdevice and stacked filesystem encryption and reap the advantages of both. See [[Data-at-rest encryption]] to plan ahead.<br />
<br />
See [[dm-crypt/Drive preparation#Partitioning]] for a general overview of the partitioning strategies used in the scenarios.<br />
<br />
Another area to consider is whether to set up an encrypted swap partition and what kind. See [[dm-crypt/Swap encryption]] for alternatives.<br />
<br />
If you want to protect the system's data not only against physical theft, but also require precautions against logical tampering, see [[dm-crypt/Specialties#Securing the unencrypted boot partition]] for further possibilities after following one of the scenarios.<br />
<br />
For [[solid state drive]]s you might want to consider enabling TRIM support, but be warned, there are potential security implications. See [[dm-crypt/Specialties#Discard/TRIM support for solid state drives (SSD)]] for more information.<br />
<br />
{{Warning|<br />
* In any scenario, never use file system repair software such as [[fsck]] directly on an encrypted volume, or it will destroy any means to recover the key used to decrypt your files. Such tools must be used on the decrypted (opened) device instead.<br />
* For the LUKS2 format:<br />
** GRUB's support for LUKS2 is limited; see [[GRUB#Encrypted /boot]] for details. Use LUKS1 ({{ic|1=cryptsetup luksFormat --type luks1}}) for partitions that GRUB will need to unlock.<br />
** The LUKS2 format has a high RAM usage by design, defaulting to 1 GiB per encrypted mapper. Machines with low RAM and/or multiple LUKS2 partitions unlocked in parallel may fail on boot. See the {{ic|--pbkdf-memory}} option to control memory usage.[https://gitlab.com/cryptsetup/cryptsetup/issues/372]<br />
}}<br />
<br />
== LUKS on a partition ==<br />
<br />
This example covers a full system encryption with ''dm-crypt'' + LUKS in a simple partition layout:<br />
<br />
{{Text art|<nowiki><br />
+-----------------------+------------------------+-----------------------+<br />
| Boot partition | LUKS2 encrypted system | Optional free space |<br />
| | partition | for additional |<br />
| | | partitions to be set |<br />
| /boot | / | up later |<br />
| | | |<br />
| | /dev/mapper/cryptroot | |<br />
| |------------------------| |<br />
| /dev/sda1 | /dev/sda2 | |<br />
+-----------------------+------------------------+-----------------------+<br />
</nowiki>}}<br />
<br />
The first steps can be performed directly after booting the Arch Linux install image.<br />
<br />
=== Preparing the disk ===<br />
<br />
Prior to creating any partitions, you should inform yourself about the importance and methods to securely erase the disk, described in [[dm-crypt/Drive preparation]].<br />
<br />
Then create the needed partitions, at least one for {{ic|/}} (e.g. {{ic|/dev/sda2}}) and {{ic|/boot}} ({{ic|/dev/sda1}}). See [[Partitioning]].<br />
<br />
=== Preparing non-boot partitions ===<br />
<br />
The following commands create and mount the encrypted root partition. They correspond to the procedure described in detail in [[dm-crypt/Encrypting a non-root file system#Partition]] (which, despite the title, ''can'' be applied to root partitions, as long as [[#Configuring mkinitcpio|mkinitcpio]] and the [[#Configuring the boot loader|boot loader]] are correctly configured).<br />
If you want to use particular non-default encryption options (e.g. cipher, key length), see the [[dm-crypt/Device encryption#Encryption options for LUKS mode|encryption options]] before executing the first command. For information on changing the default sector size, see [[dm-crypt/Device encryption#Sector size]].<br />
<br />
# cryptsetup -y -v luksFormat /dev/sda2<br />
# cryptsetup open /dev/sda2 cryptroot<br />
# mkfs.ext4 /dev/mapper/cryptroot<br />
# mount /dev/mapper/cryptroot /mnt<br />
<br />
Check the mapping works as intended:<br />
<br />
# umount /mnt<br />
# cryptsetup close cryptroot<br />
# cryptsetup open /dev/sda2 cryptroot<br />
# mount /dev/mapper/cryptroot /mnt<br />
<br />
If you created separate partitions (e.g. {{ic|/home}}), these steps have to be adapted and repeated for all of them, ''except'' for {{ic|/boot}}. See [[dm-crypt/Encrypting a non-root file system#Automated unlocking and mounting]] on how to handle additional partitions at boot.<br />
<br />
Note that each blockdevice requires its own passphrase. This may be inconvenient, because it results in a separate passphrase to be input during boot. An alternative is to use a keyfile stored in the system partition to unlock the separate partition via {{ic|crypttab}}. See [[dm-crypt/Device encryption#Using LUKS to format partitions with a keyfile]] for instructions.<br />
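As a sketch, a [[crypttab]] entry for unlocking a separate {{ic|/home}} partition with a keyfile could look like the following (the device {{ic|/dev/sda3}} and the keyfile path are example values, not part of this scenario):<br />
<br />
```
# /etc/crypttab
# <name>  <device>     <keyfile>
home      /dev/sda3    /etc/cryptsetup-keys.d/home.key
```
<br />
The unlocked mapper {{ic|/dev/mapper/home}} can then be mounted through [[fstab]] as usual.<br />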
<br />
=== Preparing the boot partition ===<br />
<br />
You do have to set up a non-encrypted {{ic|/boot}} partition, which is needed for an encrypted root. For an ordinary boot partition on BIOS systems, for example, execute:<br />
<br />
# mkfs.ext4 /dev/sda1<br />
<br />
or for an [[EFI system partition]] on UEFI systems:<br />
<br />
# mkfs.fat -F32 /dev/sda1<br />
<br />
Afterwards create the directory for the mountpoint and mount the partition:<br />
<br />
# mkdir /mnt/boot<br />
# mount /dev/sda1 /mnt/boot<br />
<br />
=== Mounting the devices ===<br />
<br />
At [[Installation guide#Mount the file systems]] you will have to mount the mapped devices, not the actual partitions. Of course {{ic|/boot}}, which is not encrypted, will still have to be mounted directly.<br />
<br />
=== Configuring mkinitcpio ===<br />
<br />
Add the {{ic|keyboard}}, {{ic|keymap}} and {{ic|encrypt}} hooks to [[mkinitcpio.conf]]. If the default US keymap is fine for you, you can omit the {{ic|keymap}} hook.<br />
<br />
HOOKS=(base '''udev''' autodetect '''keyboard''' '''keymap''' consolefont modconf block '''encrypt''' filesystems fsck)<br />
<br />
If using the [[sd-encrypt]] hook with the systemd-based initramfs, the following needs to be set instead:<br />
<br />
HOOKS=(base '''systemd''' autodetect '''keyboard''' '''sd-vconsole''' modconf block '''sd-encrypt''' filesystems fsck)<br />
<br />
See [[dm-crypt/System configuration#mkinitcpio]] for details and other hooks that you may need.<br />
<br />
=== Configuring the boot loader ===<br />
<br />
In order to unlock the encrypted root partition at boot, the following [[kernel parameters]] need to be set by the boot loader:<br />
<br />
cryptdevice=UUID=''device-UUID'':cryptroot root=/dev/mapper/cryptroot<br />
<br />
If using the [[sd-encrypt]] hook, the following need to be set instead:<br />
<br />
rd.luks.name=''device-UUID''=cryptroot root=/dev/mapper/cryptroot<br />
<br />
See [[dm-crypt/System configuration#Boot loader]] for details.<br />
<br />
The {{ic|''device-UUID''}} refers to the UUID of {{ic|/dev/sda2}}. See [[Persistent block device naming]] for details.<br />
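To illustrate how the parameter is assembled, the following sketch builds the string from a placeholder UUID; on a real system the UUID would come from {{ic|blkid -s UUID -o value /dev/sda2}}:<br />
<br />
```shell
# Placeholder UUID; on the installed system obtain the real one with:
#   blkid -s UUID -o value /dev/sda2
uuid="12345678-90ab-cdef-1234-567890abcdef"

# Assemble the kernel parameters for the encrypt hook:
param="cryptdevice=UUID=${uuid}:cryptroot root=/dev/mapper/cryptroot"
echo "$param"
```
<br />
The resulting string is what the boot loader must pass on the kernel command line.<br />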
<br />
== LVM on LUKS ==<br />
<br />
The straightforward method is to set up [[LVM]] on top of the encrypted partition instead of the other way round. Technically the LVM is set up inside one big encrypted blockdevice. Hence, the LVM is not transparent until the blockdevice is unlocked and the underlying volume structure is scanned and mounted during boot.<br />
<br />
The disk layout in this example is:<br />
<br />
{{Text art|<nowiki><br />
+-----------------------------------------------------------------------+ +----------------+<br />
| Logical volume 1 | Logical volume 2 | Logical volume 3 | | Boot partition |<br />
| | | | | |<br />
| [SWAP] | / | /home | | /boot |<br />
| | | | | |<br />
| /dev/MyVolGroup/swap | /dev/MyVolGroup/root | /dev/MyVolGroup/home | | |<br />
|_ _ _ _ _ _ _ _ _ _ _ _|_ _ _ _ _ _ _ _ _ _ _ _|_ _ _ _ _ _ _ _ _ _ _ _| | (may be on |<br />
| | | other device) |<br />
| LUKS2 encrypted partition | | |<br />
| /dev/sda1 | | /dev/sdb1 |<br />
+-----------------------------------------------------------------------+ +----------------+<br />
</nowiki>}}<br />
<br />
{{Note|While using the {{ic|encrypt}} hook this method does not allow you to span the logical volumes over multiple disks; either use the [[sd-encrypt]] or see [[dm-crypt/Specialties#Modifying the encrypt hook for multiple partitions]].}}<br />
<br />
{{Tip|Two variants of this setup:<br />
* Instructions at [[dm-crypt/Specialties#Encrypted system using a detached LUKS header]] use this setup with a detached LUKS header on a USB device to achieve a two factor authentication with it.<br />
* Instructions at [[dm-crypt/Specialties#Encrypted /boot and a detached LUKS header on USB]] use this setup with a detached LUKS header, encrypted {{ic|/boot}} partition, and encrypted keyfile all on a USB device.<br />
}}<br />
<br />
=== Preparing the disk ===<br />
<br />
Prior to creating any partitions, you should inform yourself about the importance and methods to securely erase the disk, described in [[dm-crypt/Drive preparation]].<br />
<br />
{{Tip|When using the [[GRUB]] boot loader for BIOS booting from a [[GPT]] disk, create a [[BIOS boot partition]].}}<br />
<br />
[[Installation guide#Partition the disks|Create a partition]] to be mounted at {{ic|/boot}} with a size of 200 MiB or more.<br />
<br />
{{Tip|UEFI systems can use the [[EFI system partition]] for {{ic|/boot}}.}}<br />
<br />
Create a partition which will later contain the encrypted container.<br />
<br />
Create the LUKS encrypted container at the "system" partition. Enter the chosen password twice.<br />
<br />
# cryptsetup luksFormat /dev/sda1<br />
<br />
For more information about the available cryptsetup options see the [[dm-crypt/Device encryption#Encryption options for LUKS mode|LUKS encryption options]] prior to running the above command.<br />
<br />
Open the container:<br />
<br />
# cryptsetup open /dev/sda1 cryptlvm<br />
<br />
The decrypted container is now available at {{ic|/dev/mapper/cryptlvm}}.<br />
<br />
=== Preparing the logical volumes ===<br />
<br />
Create a physical volume on top of the opened LUKS container:<br />
<br />
# pvcreate /dev/mapper/cryptlvm<br />
<br />
Create a volume group (in this example named {{ic|MyVolGroup}}, but it can be whatever you want) and add the previously created physical volume to it:<br />
<br />
# vgcreate MyVolGroup /dev/mapper/cryptlvm<br />
<br />
Create all your logical volumes on the volume group:<br />
<br />
# lvcreate -L 8G MyVolGroup -n swap<br />
# lvcreate -L 32G MyVolGroup -n root<br />
# lvcreate -l 100%FREE MyVolGroup -n home<br />
<br />
Format your filesystems on each logical volume:<br />
<br />
# mkfs.ext4 /dev/MyVolGroup/root<br />
# mkfs.ext4 /dev/MyVolGroup/home<br />
# mkswap /dev/MyVolGroup/swap<br />
<br />
Mount your filesystems:<br />
<br />
# mount /dev/MyVolGroup/root /mnt<br />
# mkdir /mnt/home<br />
# mount /dev/MyVolGroup/home /mnt/home<br />
# swapon /dev/MyVolGroup/swap<br />
<br />
=== Preparing the boot partition ===<br />
<br />
The boot loader loads the kernel, the [[initramfs]], and its own configuration files from the {{ic|/boot}} directory. Any filesystem that the boot loader can read is eligible.<br />
<br />
Create a [[filesystem]] on the partition intended for {{ic|/boot}}:<br />
<br />
# mkfs.ext4 /dev/sdb1<br />
<br />
{{Tip|When opting to keep {{ic|/boot}} on an [[EFI system partition]] the recommended formatting is<br />
<br />
# mkfs.fat -F32 /dev/sdb1<br />
<br />
}}<br />
<br />
Create the directory {{ic|/mnt/boot}}:<br />
<br />
# mkdir /mnt/boot<br />
<br />
Mount the partition to {{ic|/mnt/boot}}:<br />
<br />
# mount /dev/sdb1 /mnt/boot<br />
<br />
=== Configuring mkinitcpio ===<br />
<br />
Make sure the {{Pkg|lvm2}} package is [[install]]ed and add the {{ic|keyboard}}, {{ic|keymap}}, {{ic|encrypt}} and {{ic|lvm2}} hooks to [[mkinitcpio.conf]]:<br />
<br />
HOOKS=(base '''udev''' autodetect '''keyboard''' '''keymap''' consolefont modconf block '''encrypt''' '''lvm2''' filesystems fsck)<br />
<br />
If using the [[sd-encrypt]] hook with the systemd-based initramfs, the following needs to be set instead:<br />
<br />
HOOKS=(base '''systemd''' autodetect '''keyboard''' '''sd-vconsole''' modconf block '''sd-encrypt''' '''lvm2''' filesystems fsck)<br />
<br />
See [[dm-crypt/System configuration#mkinitcpio]] for details and other hooks that you may need.<br />
<br />
=== Configuring the boot loader ===<br />
<br />
In order to unlock the encrypted root partition at boot, the following kernel parameter needs to be set by the boot loader:<br />
<br />
cryptdevice=UUID=''device-UUID'':cryptlvm root=/dev/MyVolGroup/root<br />
<br />
If using the [[sd-encrypt]] hook, the following needs to be set instead:<br />
<br />
rd.luks.name=''device-UUID''=cryptlvm root=/dev/MyVolGroup/root<br />
<br />
The {{ic|''device-UUID''}} refers to the UUID of {{ic|/dev/sda1}}. See [[Persistent block device naming]] for details.<br />
<br />
If using [[dracut]], you may need a more extensive list of parameters, try:<br />
<br />
kernel_cmdline="rd.luks.uuid=luks-''deviceUUID'' rd.lvm.lv=''MyVolGroup''/root rd.lvm.lv=''MyVolGroup''/swap root=/dev/mapper/''MyVolGroup''-root rootfstype=ext4 rootflags=rw,relatime"<br />
<br />
See [[dm-crypt/System configuration#Boot loader]] for details.<br />
<br />
== LUKS on LVM ==<br />
<br />
To use encryption on top of [[LVM]], the LVM volumes are set up first and then used as the base for the encrypted partitions. This way, a mixture of encrypted and non-encrypted volumes/partitions is possible as well.<br />
{{Tip|Unlike [[#LVM on LUKS]], this method allows spanning the logical volumes over multiple disks in the usual way.}}<br />
<br />
The following short example creates a LUKS on LVM setup. It mixes in the use of a keyfile for the {{ic|/home}} partition and temporary crypt volumes for {{ic|/tmp}} and swap. The latter is desirable from a security perspective, because no potentially sensitive temporary data survives a reboot, as the encryption is re-initialised on every boot. If you are experienced with LVM, you can adapt or replace the LVM specifics according to your own plan.<br />
<br />
If you want to span a logical volume over multiple disks that have already been set up, or expand the logical volume for {{ic|/home}} (or any other volume), a procedure to do so is described in [[dm-crypt/Specialties#Expanding LVM on multiple disks]]. It is important to note that the LUKS encrypted container has to be resized as well.<br />
<br />
{{Expansion|The intro of this scenario needs some adjustment now that a comparison has been added to [[#Overview]]. A suggested structure is to make it similar to the [[#LUKS on a partition]] intro.}}<br />
<br />
=== Preparing the disk ===<br />
<br />
Partitioning scheme:<br />
<br />
{{Text art|<nowiki><br />
+----------------+-------------------------------------------------------------------------------------------------+<br />
| Boot partition | dm-crypt plain encrypted volume | LUKS2 encrypted volume | LUKS2 encrypted volume |<br />
| | | | |<br />
| /boot | [SWAP] | / | /home |<br />
| | | | |<br />
| | /dev/mapper/swap | /dev/mapper/root | /dev/mapper/home |<br />
| |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _|<br />
| | Logical volume 1 | Logical volume 2 | Logical volume 3 |<br />
| | /dev/MyVolGroup/cryptswap | /dev/MyVolGroup/cryptroot | /dev/MyVolGroup/crypthome |<br />
| |_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _|_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _|<br />
| | |<br />
| /dev/sda1 | /dev/sda2 |<br />
+----------------+-------------------------------------------------------------------------------------------------+<br />
</nowiki>}}<br />
<br />
Randomise {{ic|/dev/sda2}} according to [[dm-crypt/Drive preparation#dm-crypt wipe on an empty disk or partition]].<br />
<br />
=== Preparing the logical volumes ===<br />
<br />
# pvcreate /dev/sda2<br />
# vgcreate MyVolGroup /dev/sda2<br />
# lvcreate -L 32G -n cryptroot MyVolGroup<br />
# lvcreate -L 500M -n cryptswap MyVolGroup<br />
# lvcreate -L 500M -n crypttmp MyVolGroup<br />
# lvcreate -l 100%FREE -n crypthome MyVolGroup<br />
<br />
# cryptsetup luksFormat /dev/MyVolGroup/cryptroot<br />
# cryptsetup open /dev/MyVolGroup/cryptroot root<br />
# mkfs.ext4 /dev/mapper/root<br />
# mount /dev/mapper/root /mnt<br />
<br />
More information about the encryption options can be found in [[dm-crypt/Device encryption#Encryption options for LUKS mode]].<br />
Note that {{ic|/home}} will be encrypted in [[#Encrypting logical volume /home]].<br />
<br />
{{Tip|If you ever have to access the encrypted root from the Arch-ISO, the above {{ic|open}} action will allow you to after the [[LVM#Logical Volumes do not show up|LVM shows up]].}}<br />
<br />
=== Preparing the boot partition ===<br />
<br />
# dd if=/dev/zero of=/dev/sda1 bs=1M status=progress<br />
# mkfs.ext4 /dev/sda1<br />
# mkdir /mnt/boot<br />
# mount /dev/sda1 /mnt/boot<br />
<br />
=== Configuring mkinitcpio ===<br />
<br />
Make sure the {{Pkg|lvm2}} package is [[install]]ed and add the {{ic|keyboard}}, {{ic|lvm2}} and {{ic|encrypt}} hooks to [[mkinitcpio.conf]]:<br />
<br />
HOOKS=(base '''udev''' autodetect '''keyboard''' '''keymap''' consolefont modconf block '''lvm2''' '''encrypt''' filesystems fsck)<br />
<br />
If using the [[sd-encrypt]] hook with the systemd-based initramfs, the following needs to be set instead:<br />
<br />
HOOKS=(base '''systemd''' autodetect '''keyboard''' '''sd-vconsole''' modconf block '''sd-encrypt''' '''lvm2''' filesystems fsck)<br />
<br />
See [[dm-crypt/System configuration#mkinitcpio]] for details and other hooks that you may need.<br />
<br />
=== Configuring the boot loader ===<br />
<br />
In order to unlock the encrypted root partition at boot, the following [[kernel parameters]] need to be set by the boot loader:<br />
<br />
cryptdevice=UUID=''device-UUID'':root root=/dev/mapper/root<br />
<br />
If using the [[sd-encrypt]] hook, the following need to be set instead:<br />
<br />
rd.luks.name=''device-UUID''=root root=/dev/mapper/root<br />
<br />
The {{ic|''device-UUID''}} refers to the UUID of {{ic|/dev/MyVolGroup/cryptroot}}. See [[Persistent block device naming]] for details.<br />
<br />
See [[dm-crypt/System configuration#Boot loader]] for details.<br />
<br />
=== Configuring fstab and crypttab ===<br />
<br />
[[crypttab]] and [[fstab]] entries are required to unlock the devices and mount the filesystems, respectively. The following lines will re-encrypt the temporary filesystems on each reboot:<br />
<br />
{{hc|/etc/crypttab|2=<br />
swap /dev/MyVolGroup/cryptswap /dev/urandom swap,cipher=aes-xts-plain64,size=256<br />
tmp /dev/MyVolGroup/crypttmp /dev/urandom tmp,cipher=aes-xts-plain64,size=256<br />
}}<br />
<br />
{{hc|/etc/fstab|<br />
/dev/mapper/root / ext4 defaults 0 1<br />
/dev/sda1 /boot ext4 defaults 0 2<br />
/dev/mapper/tmp /tmp tmpfs defaults 0 0<br />
/dev/mapper/swap none swap sw 0 0<br />
}}<br />
<br />
=== Encrypting logical volume /home ===<br />
<br />
Since this scenario uses LVM as the primary and dm-crypt as the secondary mapper, each logical volume to be encrypted requires its own encryption setup. Yet, unlike the temporary filesystems configured with volatile encryption above, the logical volume for {{ic|/home}} should of course be persistent. The following assumes you have rebooted into the installed system; otherwise adjust the paths.<br />
To save on entering a second passphrase at boot, a [[dm-crypt/Device encryption#Keyfiles|keyfile]] is created:<br />
<br />
# mkdir -m 700 /etc/luks-keys<br />
# dd if=/dev/random of=/etc/luks-keys/home bs=1 count=256 status=progress<br />
<br />
The logical volume is encrypted with it:<br />
<br />
# cryptsetup luksFormat -v /dev/MyVolGroup/crypthome /etc/luks-keys/home<br />
# cryptsetup -d /etc/luks-keys/home open /dev/MyVolGroup/crypthome home<br />
# mkfs.ext4 /dev/mapper/home<br />
# mount /dev/mapper/home /home<br />
<br />
The encrypted mount is configured in both [[crypttab]] and [[fstab]]:<br />
<br />
{{hc|/etc/crypttab|<br />
home /dev/MyVolGroup/crypthome /etc/luks-keys/home<br />
}}<br />
<br />
{{hc|/etc/fstab|<br />
/dev/mapper/home /home ext4 defaults 0 2<br />
}}<br />
<br />
== LUKS on software RAID ==<br />
<br />
This example is based on a real-world setup for a workstation class laptop equipped with two SSDs of equal size, and an additional HDD for bulk storage. The end result is LUKS1 based full disk encryption (including {{ic|/boot}}) for all drives, with the SSDs in a [[RAID|RAID0]] array, and keyfiles used to unlock all encryption after [[GRUB]] is given a correct passphrase at boot.<br />
<br />
This setup utilizes a very simplistic partitioning scheme, with all the available RAID storage being mounted at {{ic|/}} (no separate {{ic|/boot}} partition), and the decrypted HDD being mounted at {{ic|/data}}.<br />
<br />
Please note that regular [[System backup|backups]] are very important in this setup. If either of the SSDs fail, the data contained in the RAID array will be practically impossible to recover. You may wish to select a different [[RAID#Standard RAID levels|RAID level]] if fault tolerance is important to you. <br />
<br />
The encryption is not deniable in this setup.<br />
<br />
For the sake of the instructions below, the following block devices are used:<br />
<br />
/dev/sda = first SSD<br />
/dev/sdb = second SSD<br />
/dev/sdc = HDD<br />
<br />
{{Text art|<nowiki><br />
+---------------------+---------------------------+---------------------------+ +---------------------+---------------------------+---------------------------+ +---------------------------+<br />
| BIOS boot partition | EFI system partition | LUKS1 encrypted volume | | BIOS boot partition | EFI system partition | LUKS1 encrypted volume | | LUKS2 encrypted volume |<br />
| | | | | | | | | |<br />
| | /efi | / | | | /efi | / | | /data |<br />
| | | | | | | | | |<br />
| | | /dev/mapper/cryptroot | | | | /dev/mapper/cryptroot | | |<br />
| +---------------------------+---------------------------+ | +---------------------------+---------------------------+ | |<br />
| | RAID1 array (part 1 of 2) | RAID0 array (part 1 of 2) | | | RAID1 array (part 2 of 2) | RAID0 array (part 2 of 2) | | |<br />
| | | | | | | | | |<br />
| | /dev/md/ESP | /dev/md/root | | | /dev/md/ESP | /dev/md/root | | /dev/mapper/cryptdata |<br />
| +---------------------------+---------------------------+ | +---------------------------+---------------------------+ +---------------------------+<br />
| /dev/sda1 | /dev/sda2 | /dev/sda3 | | /dev/sdb1 | /dev/sdb2 | /dev/sdb3 | | /dev/sdc1 |<br />
+---------------------+---------------------------+---------------------------+ +---------------------+---------------------------+---------------------------+ +---------------------------+<br />
</nowiki>}}<br />
<br />
Be sure to substitute them with the appropriate device designations for your setup, as they may be different.<br />
<br />
=== Preparing the disks ===<br />
<br />
Prior to creating any partitions, you should inform yourself about the importance and methods to securely erase the disk, described in [[dm-crypt/Drive preparation]].<br />
<br />
For [[GRUB#BIOS systems|BIOS systems]] with GPT, create a [[BIOS boot partition]] with a size of 1 MiB for GRUB to store the second stage of the BIOS boot loader. Do not mount the partition.<br />
<br />
For [[GRUB#UEFI systems|UEFI systems]] create an [[EFI system partition]] with an appropriate size, it will later be mounted at {{ic|/efi}}.<br />
<br />
In the remaining space on the drive create a partition ({{ic|/dev/sda3}} in this example) for "Linux RAID". Choose partition type ID {{ic|fd}} for MBR or partition type GUID {{ic|A19D880F-05FC-4D3B-A006-743F0F84911E}} for GPT.<br />
<br />
Once partitions have been created on {{ic|/dev/sda}}, the following commands can be used to clone them to {{ic|/dev/sdb}}.<br />
<br />
# sfdisk -d /dev/sda > sda.dump<br />
# sfdisk /dev/sdb < sda.dump<br />
<br />
The HDD is prepared with a single Linux partition covering the whole drive at {{ic|/dev/sdc1}}.<br />
<br />
=== Building the RAID array ===<br />
<br />
Create the RAID array for the SSDs.<br />
<br />
{{Note|<br />
* All parts of an EFI system partition RAID array must be individually usable, which means that the ESP can only be placed in a RAID1 array.<br />
* The RAID superblock must be placed at the end of the EFI system partition using {{ic|1=--metadata=1.0}}, otherwise the firmware will not be able to access the partition.<br />
}}<br />
<br />
# mdadm --create --verbose --level=1 --metadata=1.0 --raid-devices=2 /dev/md/ESP /dev/sda2 /dev/sdb2<br />
<br />
This example utilizes RAID0 for root, you may wish to substitute a different level based on your preferences or requirements.<br />
<br />
# mdadm --create --verbose --level=0 --metadata=1.2 --raid-devices=2 /dev/md/root /dev/sda3 /dev/sdb3<br />
<br />
=== Preparing the block devices ===<br />
<br />
As explained in [[dm-crypt/Drive preparation]], the devices are wiped with random data utilizing {{ic|/dev/zero}} and a crypt device with a random key. Alternatively, you could use {{ic|dd}} with {{ic|/dev/random}} or {{ic|/dev/urandom}}, though it will be much slower.<br />
<br />
# cryptsetup open --type plain /dev/md/root container --key-file /dev/random<br />
# dd if=/dev/zero of=/dev/mapper/container bs=1M status=progress<br />
# cryptsetup close container<br />
<br />
And repeat above for the HDD ({{ic|/dev/sdc1}} in this example).<br />
<br />
Set up encryption for {{ic|/dev/md/root}}:<br />
<br />
{{Warning|GRUB's support for LUKS2 is limited; see [[GRUB#Encrypted /boot]] for details. Use LUKS1 ({{ic|1=cryptsetup luksFormat --type luks1}}) for partitions that GRUB will need to unlock.}}<br />
<br />
# cryptsetup -y -v luksFormat --type luks1 /dev/md/root<br />
# cryptsetup open /dev/md/root cryptroot<br />
# mkfs.ext4 /dev/mapper/cryptroot<br />
# mount /dev/mapper/cryptroot /mnt<br />
<br />
And repeat for the HDD:<br />
<br />
# cryptsetup -y -v luksFormat /dev/sdc1<br />
# cryptsetup open /dev/sdc1 cryptdata<br />
# mkfs.ext4 /dev/mapper/cryptdata<br />
# mkdir /mnt/data<br />
# mount /dev/mapper/cryptdata /mnt/data<br />
<br />
For UEFI systems, set up the EFI system partition:<br />
<br />
# mkfs.fat -F32 /dev/md/ESP<br />
# mount /dev/md/ESP /mnt/efi<br />
<br />
=== Configuring GRUB ===<br />
<br />
Configure [[GRUB]] for the LUKS1 encrypted system by editing {{ic|/etc/default/grub}} with the following:<br />
<br />
GRUB_CMDLINE_LINUX="cryptdevice=/dev/md/root:cryptroot"<br />
GRUB_ENABLE_CRYPTODISK=y<br />
<br />
{{Move|GRUB#Troubleshooting|GRUB troubleshooting issues belong in the [[GRUB]] page. It should be moved there and simply linked from this section.}}<br />
<br />
If you have a USB keyboard on a newer system either enable legacy USB support in firmware or add the following to {{ic|/etc/default/grub}}:<br />
<br />
GRUB_TERMINAL_INPUT="usb_keyboard"<br />
GRUB_PRELOAD_MODULES="usb usb_keyboard ohci uhci ehci"<br />
<br />
Otherwise you may not be able to use your keyboard at the LUKS prompt.<br />
<br />
See [[dm-crypt/System configuration#Boot loader]] and [[GRUB#Encrypted /boot]] for details.<br />
<br />
Complete the GRUB install to both SSDs (in reality, installing only to {{ic|/dev/sda}} will work).<br />
<br />
# grub-install --target=i386-pc /dev/sda<br />
# grub-install --target=i386-pc /dev/sdb<br />
# grub-install --target=x86_64-efi --efi-directory=/efi --bootloader-id=GRUB<br />
# grub-mkconfig -o /boot/grub/grub.cfg<br />
<br />
=== Creating the keyfiles ===<br />
<br />
The next steps save you from entering your passphrase twice when you boot the system (once so GRUB can unlock the LUKS1 device, and a second time when the initramfs assumes control of the system). This is done by creating a [[dm-crypt/Device encryption#Keyfiles|keyfile]] for the encryption and adding it to the initramfs image to allow the encrypt hook to unlock the root device. See [[dm-crypt/Device encryption#With a keyfile embedded in the initramfs]] for details.<br />
<br />
* Create the [[dm-crypt/Device encryption#Keyfiles|keyfile]] and add the key to {{ic|/dev/md/root}}.<br />
* Create another keyfile for the HDD ({{ic|/dev/sdc1}}) so it can also be unlocked at boot. For convenience, leave the passphrase created above in place as this can make recovery easier if you ever need it. Edit {{ic|/etc/crypttab}} to decrypt the HDD at boot. See [[Dm-crypt/System configuration#Unlocking with a keyfile]].<br />
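<br />
A hedged sketch of those two steps (the keyfile path and size below are illustrative; {{ic|/dev/urandom}} is used so the sketch runs unprivileged, and the commands requiring root and the real devices are shown only as comments):<br />
<br />
```shell
# Generate a 2048-byte keyfile (4 blocks of 512 bytes) readable only by its owner:
dd bs=512 count=4 if=/dev/urandom of=/tmp/cryptdata.keyfile iflag=fullblock 2>/dev/null
chmod 600 /tmp/cryptdata.keyfile
stat -c '%s %a' /tmp/cryptdata.keyfile   # → 2048 600

# Enrolling the key requires root and the real device:
#   cryptsetup luksAddKey /dev/sdc1 /tmp/cryptdata.keyfile
# /etc/crypttab entry so the HDD is decrypted at boot:
#   cryptdata  /dev/sdc1  /tmp/cryptdata.keyfile
```
<br />
In practice, store the keyfile somewhere only root can read (e.g. under {{ic|/root/}}), not in {{ic|/tmp}}.<br />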
<br />
=== Configuring the system ===<br />
<br />
Edit [[fstab]] to mount the cryptroot and cryptdata block devices and the ESP:<br />
<br />
/dev/mapper/cryptroot / ext4 rw,noatime 0 1<br />
/dev/mapper/cryptdata /data ext4 defaults 0 2<br />
/dev/md/ESP /efi vfat rw,relatime,codepage=437,iocharset=iso8859-1,shortname=mixed,utf8,tz=UTC,errors=remount-ro 0 2<br />
<br />
Save the RAID configuration:<br />
<br />
# mdadm --detail --scan >> /etc/mdadm.conf<br />
<br />
Edit [[mkinitcpio.conf]] to include your keyfile and add the proper hooks:<br />
<br />
FILES=(/crypto_keyfile.bin)<br />
HOOKS=(base udev autodetect '''keyboard''' '''keymap''' consolefont modconf block '''mdadm_udev''' '''encrypt''' filesystems fsck)<br />
<br />
See [[dm-crypt/System configuration#mkinitcpio]] for details.<br />
<br />
== Plain dm-crypt ==<br />
<br />
Contrary to LUKS, dm-crypt ''plain'' mode does not require a header on the encrypted device: this scenario exploits this feature to set up a system on an unpartitioned, encrypted disk that will be indistinguishable from a disk filled with random data, which could allow [[Wikipedia:Deniable encryption|deniable encryption]]. See also [[wikipedia:Disk encryption#Full disk encryption]].<br />
<br />
Note that if full-disk encryption is not required, the methods using LUKS described in the sections above are better options for both system encryption and encrypted partitions. LUKS features like key management with multiple passphrases/key-files or re-encrypting a device in-place are unavailable with ''plain'' mode.<br />
<br />
''Plain'' dm-crypt encryption can be more resilient to damage than LUKS, because it does not rely on an encryption master-key which can be a single-point of failure if damaged. However, using ''plain'' mode also requires more manual configuration of encryption options to achieve the same cryptographic strength. See also [[Data-at-rest encryption#Cryptographic metadata]]. Using ''plain'' mode could also be considered if concerned with the problems explained in [[dm-crypt/Specialties#Discard/TRIM support for solid state drives (SSD)]].<br />
<br />
{{Tip|If headerless encryption is your goal but you are unsure about the lack of key-derivation with ''plain'' mode, then two alternatives are:<br />
* [[dm-crypt/Specialties#Encrypted system using a detached LUKS header|dm-crypt LUKS mode with a detached header]] by using the ''cryptsetup'' {{ic|--header}} option. It cannot be used with the standard ''encrypt'' hook, but the hook may be modified.<br />
* [[tcplay]] which offers headerless encryption but with the PBKDF2 function.<br />
}}<br />
<br />
The scenario uses two USB sticks:<br />
<br />
* one for the boot device, which also allows storing the options required to open/unlock the plain encrypted device in the boot loader configuration, since typing them on each boot would be error prone;<br />
* another for the encryption key file, assuming it is stored as raw bits, so that to an unaware attacker who obtains the USB key, the encryption key appears as random data rather than as a visible, normal file. See also [[Wikipedia:Security through obscurity]]; follow [[dm-crypt/Device encryption#Keyfiles]] to prepare the keyfile.<br />
<br />
The disk layout is:<br />
<br />
{{Text art|<nowiki><br />
+----------------------+----------------------+----------------------+ +----------------+ +----------------+<br />
| Logical volume 1 | Logical volume 2 | Logical volume 3 | | Boot device | | Encryption key |<br />
| | | | | | | file storage |<br />
| / | [SWAP] | /home | | /boot | | (unpartitioned |<br />
| | | | | | | in example) |<br />
| /dev/MyVolGroup/root | /dev/MyVolGroup/swap | /dev/MyVolGroup/home | | /dev/sdb1 | | /dev/sdc |<br />
|----------------------+----------------------+----------------------| |----------------| |----------------|<br />
| disk drive /dev/sda encrypted using plain mode and LVM | | USB stick 1 | | USB stick 2 |<br />
+--------------------------------------------------------------------+ +----------------+ +----------------+<br />
</nowiki>}}<br />
<br />
{{Tip|<br />
* It is also possible to use a single USB key physical device:<br />
** By putting the key on another partition (/dev/sdb2) of the USB storage device (/dev/sdb).<br />
** By copying the keyfile to the initramfs directly. An example keyfile {{ic|/etc/keyfile}} gets copied to the initramfs image by setting {{ic|1=FILES=(/etc/keyfile)}} in {{ic|/etc/mkinitcpio.conf}}. The way to instruct the {{ic|encrypt}} hook to read the keyfile in the initramfs image is using {{ic|rootfs:}} prefix before the filename, e.g. {{ic|1=cryptkey=rootfs:/etc/keyfile}}.<br />
* Another option is using a passphrase with good [[Security#Choosing secure passwords|entropy]].<br />
}}<br />
<br />
=== Preparing the disk ===<br />
<br />
It is vital that the mapped device is filled with random data. This applies in particular to this scenario's goal of making the disk indistinguishable from random data.<br />
<br />
See [[dm-crypt/Drive preparation]] and, in particular, [[dm-crypt/Drive preparation#dm-crypt specific methods]].<br />
<br />
=== Preparing the non-boot partitions ===<br />
<br />
See [[dm-crypt/Device encryption#Encryption options for plain mode]] for details.<br />
<br />
Using the device {{ic|/dev/sda}} with the aes-xts cipher, a 512-bit key size and a keyfile, the options for this scenario are:<br />
<br />
# cryptsetup --cipher=aes-xts-plain64 --offset=0 --key-file=/dev/sdc --key-size=512 open --type plain /dev/sda cryptlvm<br />
<br />
Unlike encrypting with LUKS, the above command must be executed ''in full'' whenever the mapping needs to be re-established, so it is important to remember the cipher, key size and key file details.<br />
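<br />
One way to keep the options consistent is to record the full command once, e.g. in a small root-only script. A sketch, where the variable holds this scenario's exact command:<br />
<br />
```shell
# Plain mode stores no metadata on disk: wrong options silently map garbage
# instead of failing, so keeping the one true command in a script avoids typos.
CMD="cryptsetup --cipher=aes-xts-plain64 --offset=0 --key-file=/dev/sdc --key-size=512 open --type plain /dev/sda cryptlvm"
echo "$CMD"   # run the printed command as root to re-establish the mapping
```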
<br />
We can now check a mapping entry has been made for {{ic|/dev/mapper/cryptlvm}}:<br />
<br />
# fdisk -l<br />
<br />
{{Tip|A simpler alternative to using LVM, advocated in the cryptsetup FAQ for cases where LVM is not necessary, is to just create a filesystem on the entirety of the mapped dm-crypt device.}} <br />
<br />
Next, we setup [[LVM]] logical volumes on the mapped device. See [[Install Arch Linux on LVM]] for further details:<br />
<br />
# pvcreate /dev/mapper/cryptlvm<br />
# vgcreate MyVolGroup /dev/mapper/cryptlvm<br />
# lvcreate -L 32G MyVolGroup -n root<br />
# lvcreate -L 10G MyVolGroup -n swap<br />
# lvcreate -l 100%FREE MyVolGroup -n home<br />
<br />
We format and mount them and activate swap. See [[File systems#Create a file system]] for further details:<br />
<br />
# mkfs.ext4 /dev/MyVolGroup/root<br />
# mkfs.ext4 /dev/MyVolGroup/home<br />
# mount /dev/MyVolGroup/root /mnt<br />
# mkdir /mnt/home<br />
# mount /dev/MyVolGroup/home /mnt/home<br />
# mkswap /dev/MyVolGroup/swap<br />
# swapon /dev/MyVolGroup/swap<br />
<br />
=== Preparing the boot partition ===<br />
<br />
The {{ic|/boot}} partition can be installed on the standard vfat partition of a USB stick, if required. But if manual partitioning is needed, then a small 200 MiB partition is all that is required. Create the partition using a [[Partitioning#Partitioning tools|partitioning tool]] of your choice.<br />
<br />
Create a [[filesystem]] on the partition intended for {{ic|/boot}}:<br />
<br />
# mkfs.fat -F32 /dev/sdb1<br />
# mkdir /mnt/boot<br />
# mount /dev/sdb1 /mnt/boot<br />
<br />
=== Configuring mkinitcpio ===<br />
<br />
Make sure the {{Pkg|lvm2}} package is [[install]]ed and add the {{ic|keyboard}}, {{ic|keymap}}, {{ic|encrypt}} and {{ic|lvm2}} hooks to [[mkinitcpio.conf]]:<br />
<br />
HOOKS=(base udev autodetect '''keyboard''' '''keymap''' consolefont modconf block '''encrypt''' '''lvm2''' filesystems fsck)<br />
<br />
See [[dm-crypt/System configuration#mkinitcpio]] for details and other hooks that you may need.<br />
<br />
=== Configuring the boot loader ===<br />
<br />
In order to boot the encrypted root partition, the following [[kernel parameters]] need to be set by the boot loader (note that 64 is the number of bytes in 512 bits):<br />
<br />
cryptdevice=/dev/disk/by-id/''disk-ID-of-sda'':cryptlvm cryptkey=/dev/disk/by-id/''disk-ID-of-sdc'':0:64 crypto=:aes-xts-plain64:512:0:<br />
<br />
The {{ic|''disk-ID-of-'''disk'''''}} refers to the id of the referenced disk. See [[Persistent block device naming]] for details.<br />
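<br />
The size field of {{ic|cryptkey}} is given in bytes, while the key size elsewhere is given in bits, so the conversion is a plain division. Shown here with shell arithmetic; {{ic|DISK-ID}} is a placeholder:<br />
<br />
```shell
key_bits=512
key_bytes=$((key_bits / 8))   # 512 bits = 64 bytes
echo "cryptkey=/dev/disk/by-id/DISK-ID:0:${key_bytes}"   # → cryptkey=/dev/disk/by-id/DISK-ID:0:64
```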
<br />
See [[dm-crypt/System configuration#Boot loader]] for details and other parameters that you may need.<br />
<br />
{{Tip|If using GRUB, you can install it on the same USB as the {{ic|/boot}} partition with:<br />
<br />
# grub-install --recheck /dev/sdb<br />
<br />
}}<br />
<br />
=== Post-installation ===<br />
<br />
You may wish to remove the USB sticks after booting. Since the {{ic|/boot}} partition is not usually needed, the {{ic|noauto}} option can be added to the relevant line in {{ic|/etc/fstab}}:<br />
<br />
{{hc|/etc/fstab|<br />
# /dev/sdb1<br />
/dev/sdb1 /boot vfat '''noauto''',rw,noatime 0 2<br />
}}<br />
<br />
However, whenever the kernel, the bootloader, or anything included in the initramfs is updated, the {{ic|/boot}} partition must be present and mounted. As the entry in {{ic|fstab}} already exists, it can be mounted simply with:<br />
<br />
# mount /boot<br />
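<br />
To guard against forgetting this, a [[pacman#Hooks|pacman hook]] can abort the transaction while {{ic|/boot}} is not mounted. This is an optional sketch, not part of the scenario; the hook file name and the target list are illustrative and should be adjusted to your setup. Save it e.g. as {{ic|/etc/pacman.d/hooks/95-check-boot.hook}}:<br />
<br />
```ini
[Trigger]
Operation = Install
Operation = Upgrade
Type = Package
Target = linux
Target = grub

[Action]
Description = Checking that /boot is mounted ...
When = PreTransaction
Exec = /usr/bin/findmnt /boot
AbortOnFail
```
<br />
''findmnt'' exits non-zero when {{ic|/boot}} is not a mount point, so {{ic|AbortOnFail}} stops the upgrade before any files are written.<br />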
<br />
== Encrypted boot partition (GRUB) ==<br />
<br />
This setup utilizes the same partition layout and configuration as the previous [[#LVM on LUKS]] section, with the difference that the [[GRUB]] boot loader is used since it is capable of booting from an LVM logical volume and a LUKS1-encrypted {{ic|/boot}}. See also [[GRUB#Encrypted /boot]].<br />
<br />
The disk layout in this example is:<br />
<br />
{{Text art|<nowiki><br />
+---------------------+----------------------+----------------------+----------------------+----------------------+<br />
| BIOS boot partition | EFI system partition | Logical volume 1 | Logical volume 2 | Logical volume 3 |<br />
| | | | | |<br />
| | /efi | / | [SWAP] | /home |<br />
| | | | | |<br />
| | | /dev/MyVolGroup/root | /dev/MyVolGroup/swap | /dev/MyVolGroup/home |<br />
| /dev/sda1 | /dev/sda2 |----------------------+----------------------+----------------------+<br />
| unencrypted | unencrypted | /dev/sda3 encrypted using LVM on LUKS1 |<br />
+---------------------+----------------------+--------------------------------------------------------------------+<br />
</nowiki>}}<br />
<br />
{{Tip|<br />
* All scenarios are intended as examples. It is of course possible to apply the above installation steps to the other scenarios as well. See also the variants linked in [[#LVM on LUKS]].<br />
* You can use {{ic|cryptboot}} script from {{AUR|cryptboot}} package for simplified encrypted boot management (mounting, unmounting, upgrading packages) and as a defense against [https://www.schneier.com/blog/archives/2009/10/evil_maid_attac.html Evil Maid] attacks with [[Secure Boot#Using your own keys|UEFI Secure Boot]]. For more information and limitations see [https://github.com/xmikos/cryptboot cryptboot project] page.<br />
}}<br />
<br />
=== Preparing the disk ===<br />
<br />
Prior to creating any partitions, you should inform yourself about the importance and methods to securely erase the disk, described in [[dm-crypt/Drive preparation]].<br />
<br />
For [[GRUB#GUID Partition Table (GPT) specific instructions|BIOS/GPT systems]] create a [[BIOS boot partition]] with a size of 1 MiB for GRUB to store the second stage of the BIOS bootloader. Do not mount the partition. For BIOS/MBR systems this is not necessary.<br />
<br />
For [[GRUB#UEFI systems|UEFI systems]] create an [[EFI system partition]] with an appropriate size; it will later be mounted at {{ic|/efi}}.<br />
<br />
Create a partition of type {{ic|8309}}, which will later contain the encrypted container for the LVM.<br />
<br />
Create the LUKS encrypted container:<br />
<br />
{{Warning|GRUB's support for LUKS2 is limited; see [[GRUB#Encrypted /boot]] for details. Use LUKS1 ({{ic|1=cryptsetup luksFormat --type luks1}}) for partitions that GRUB will need to unlock.}}<br />
<br />
# cryptsetup luksFormat --type luks1 /dev/sda3<br />
<br />
For more information about the available cryptsetup options, see the [[dm-crypt/Device encryption#Encryption options for LUKS mode|LUKS encryption options]] before running the above command.<br />
<br />
Your partition layout should look similar to this:<br />
<br />
{{hc|# gdisk -l /dev/sda|<br />
...<br />
Number Start (sector) End (sector) Size Code Name<br />
1 2048 4095 1024.0 KiB EF02 BIOS boot partition<br />
2 4096 1130495 550.0 MiB EF00 EFI System<br />
3 1130496 68239360 32.0 GiB 8309 Linux LUKS<br />
}}<br />
<br />
Open the container:<br />
<br />
# cryptsetup open /dev/sda3 cryptlvm<br />
<br />
The decrypted container is now available at {{ic|/dev/mapper/cryptlvm}}.<br />
<br />
=== Preparing the logical volumes ===<br />
<br />
The LVM logical volumes of this example follow the exact layout as the [[#LVM on LUKS]] scenario. Therefore, please follow [[#Preparing the logical volumes]] above and adjust as required.<br />
<br />
If you plan to boot in UEFI mode, create a mountpoint for the [[EFI system partition]] at {{ic|/efi}} for compatibility with {{ic|grub-install}} and mount it:<br />
<br />
# mkdir /mnt/efi<br />
# mount /dev/sda2 /mnt/efi<br />
<br />
At this point, you should have the following partitions and logical volumes inside of {{ic|/mnt}}:<br />
<br />
{{hc|$ lsblk|<br />
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT<br />
sda 8:0 0 200G 0 disk<br />
├─sda1 8:1 0 1M 0 part<br />
├─sda2 8:2 0 550M 0 part /mnt/efi<br />
└─sda3 8:3 0 100G 0 part<br />
└─cryptlvm 254:0 0 100G 0 crypt<br />
├─MyVolGroup-swap 254:1 0 8G 0 lvm [SWAP]<br />
├─MyVolGroup-root 254:2 0 32G 0 lvm /mnt<br />
└─MyVolGroup-home 254:3 0 60G 0 lvm /mnt/home<br />
}}<br />
<br />
=== Configuring mkinitcpio ===<br />
<br />
Make sure the {{Pkg|lvm2}} package is [[install]]ed and add the {{ic|keyboard}}, {{ic|keymap}}, {{ic|encrypt}} and {{ic|lvm2}} hooks to [[mkinitcpio.conf]]:<br />
<br />
HOOKS=(base '''udev''' autodetect '''keyboard''' '''keymap''' consolefont modconf block '''encrypt''' '''lvm2''' filesystems fsck)<br />
<br />
If using the [[sd-encrypt]] hook with the systemd-based initramfs, the following needs to be set instead:<br />
<br />
HOOKS=(base '''systemd''' autodetect '''keyboard''' '''sd-vconsole''' modconf block '''sd-encrypt''' '''lvm2''' filesystems fsck)<br />
<br />
See [[dm-crypt/System configuration#mkinitcpio]] for details and other hooks that you may need.<br />
<br />
=== Configuring GRUB ===<br />
<br />
Configure GRUB to allow booting from {{ic|/boot}} on a LUKS1 encrypted partition:<br />
<br />
{{hc|/etc/default/grub|2=<br />
GRUB_ENABLE_CRYPTODISK=y<br />
}}<br />
<br />
Set the kernel parameters, so that the initramfs can unlock the encrypted root partition. Using the {{ic|encrypt}} hook:<br />
<br />
{{hc|/etc/default/grub|2=<br />
GRUB_CMDLINE_LINUX="... cryptdevice=UUID=''device-UUID'':cryptlvm ..."<br />
}}<br />
<br />
If using the [[sd-encrypt]] hook, the following need to be set instead:<br />
<br />
{{hc|/etc/default/grub|2=<br />
GRUB_CMDLINE_LINUX="... rd.luks.name=''device-UUID''=cryptlvm ..."<br />
}}<br />
<br />
See [[dm-crypt/System configuration#Boot loader]] and [[GRUB#Encrypted /boot]] for details. The {{ic|''device-UUID''}} refers to the UUID of {{ic|/dev/sda3}} (the partition which holds the lvm containing the root filesystem). See [[Persistent block device naming]].<br />
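<br />
The UUID can be queried with ''blkid'' and substituted into the parameter. A runnable sketch with a made-up UUID (on the real system, use the commented query instead):<br />
<br />
```shell
# On the installed system:  blkid -s UUID -o value /dev/sda3
device_uuid="2f301fae-0000-4f60-9274-000000000000"   # hypothetical value
line="GRUB_CMDLINE_LINUX=\"cryptdevice=UUID=${device_uuid}:cryptlvm\""
echo "$line"
```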
<br />
[[GRUB#Installation_2|Install GRUB]] to the mounted ESP for UEFI booting:<br />
<br />
# grub-install --target=x86_64-efi --efi-directory=/efi --bootloader-id=GRUB --recheck<br />
<br />
[[GRUB#Installation|Install GRUB]] to the disk for BIOS booting:<br />
<br />
# grub-install --target=i386-pc --recheck /dev/sda<br />
<br />
Generate GRUB's [[GRUB#Generate the main configuration file|configuration]] file:<br />
<br />
# grub-mkconfig -o /boot/grub/grub.cfg<br />
<br />
If all commands finished without errors, GRUB should prompt for the passphrase to unlock the {{ic|/dev/sda3}} partition after the next reboot.<br />
<br />
=== Avoiding having to enter the passphrase twice ===<br />
<br />
{{Merge|Dm-crypt/Device encryption#With a keyfile embedded in the initramfs|Too much duplicated content, too much detail here for this overview page.|section=Security Issue with Grub Keyfile}}<br />
<br />
While GRUB asks for a passphrase to unlock the LUKS1 encrypted partition after the above instructions, the unlocked state is not passed on to the initramfs. Hence, you have to enter the passphrase twice at boot: once for GRUB and once for the initramfs.<br />
<br />
This section deals with extra configuration to let the system boot by entering the passphrase only once, in GRUB. This is accomplished with a [[dm-crypt/Device encryption#With a keyfile embedded in the initramfs|keyfile embedded in the initramfs]].<br />
<br />
First create a keyfile and add it as LUKS key:<br />
<br />
# dd bs=512 count=4 if=/dev/random of=/root/cryptlvm.keyfile iflag=fullblock<br />
# chmod 000 /root/cryptlvm.keyfile<br />
# cryptsetup -v luksAddKey /dev/sda3 /root/cryptlvm.keyfile<br />
<br />
Add the keyfile to the initramfs image:<br />
<br />
{{hc|/etc/mkinitcpio.conf|2=<br />
FILES=(/root/cryptlvm.keyfile)<br />
}}<br />
<br />
Recreate the initramfs image and secure the embedded keyfile:<br />
<br />
 # mkinitcpio -P<br />
 # chmod 600 /boot/initramfs-linux*<br />
<br />
Set the following kernel parameters to unlock the LUKS partition with the keyfile. Using the {{ic|encrypt}} hook:<br />
<br />
GRUB_CMDLINE_LINUX="... cryptkey=rootfs:/root/cryptlvm.keyfile"<br />
<br />
Or, using the [[sd-encrypt]] hook:<br />
<br />
GRUB_CMDLINE_LINUX="... rd.luks.key=''device-UUID''=/root/cryptlvm.keyfile"<br />
<br />
If for some reason the keyfile fails to unlock the boot partition, systemd will fall back to asking for a passphrase to unlock it and, if it is correct, continue booting.<br />
<br />
{{Tip|If you want to encrypt the {{ic|/boot}} partition to protect against offline tampering threats, the [[dm-crypt/Specialties#mkinitcpio-chkcryptoboot|mkinitcpio-chkcryptoboot]] hook has been contributed to help.}}<br />
<br />
== Btrfs subvolumes with swap ==<br />
<br />
{{Out of date|Btrfs [[Btrfs#Swap_file|supports swapfile]] since 5.0|Talk:Dm-crypt/Encrypting_an_entire_system#Complete_guide_of_Btrfs_on_LUKS_full_disk_encryption}}<br />
The following example creates a full system encryption with LUKS1 using [[Btrfs]] subvolumes to [[Btrfs#Mounting subvolumes|simulate partitions]].<br />
<br />
If using UEFI, an [[EFI system partition]] (ESP) is required. {{ic|/boot}} itself may reside on {{ic|/}} and be encrypted; however, the ESP itself cannot be encrypted. In this example layout, the ESP is {{ic|/dev/sda1}} and is mounted at {{ic|/efi}}. {{ic|/boot}} itself is located on the system partition, {{ic|/dev/sda2}}.<br />
<br />
Since {{ic|/boot}} resides on the LUKS1 encrypted {{ic|/}}, [[GRUB]] must be used as the bootloader because only GRUB can load modules necessary to decrypt {{ic|/boot}} (e.g., crypto.mod, cryptodisk.mod and luks.mod).<br />
<br />
Additionally an optional plain-encrypted [[swap]] partition is shown.<br />
<br />
{{Text art|<nowiki><br />
+----------------------+----------------------+----------------------+<br />
| EFI system partition | System partition | Swap partition |<br />
| unencrypted | LUKS1-encrypted | plain-encrypted |<br />
| | | |<br />
| /efi | / | [SWAP] |<br />
| /dev/sda1 | /dev/sda2 | /dev/sda3 |<br />
|----------------------+----------------------+----------------------+<br />
</nowiki>}}<br />
<br />
=== Preparing the disk ===<br />
<br />
{{Note|It is not possible to use btrfs partitioning as described in [[Btrfs#Partitionless Btrfs disk]] when using LUKS. Traditional partitioning must be used, even if it is just to create one partition.}}<br />
<br />
Prior to creating any partitions, you should inform yourself about the importance and methods to securely erase the disk, described in [[dm-crypt/Drive preparation]]. If you are using [[UEFI]], create an [[EFI system partition]] with an appropriate size; it will later be mounted at {{ic|/efi}}. If you are going to create an encrypted swap partition, create the partition for it, but do '''not''' mark it as swap, since plain ''dm-crypt'' will be used with the partition.<br />
<br />
Create the needed partitions, at least one for {{ic|/}} (e.g. {{ic|/dev/sda2}}). See the [[Partitioning]] article.<br />
<br />
=== Preparing the system partition ===<br />
<br />
==== Create LUKS container ====<br />
<br />
{{Warning|GRUB's support for LUKS2 is limited; see [[GRUB#Encrypted /boot]] for details. Use LUKS1 ({{ic|1=cryptsetup luksFormat --type luks1}}) for partitions that GRUB will need to unlock.}}<br />
<br />
Follow [[dm-crypt/Device encryption#Encrypting devices with LUKS mode]] to set up {{ic|/dev/sda2}} for LUKS. See [[dm-crypt/Device encryption#Encryption options for LUKS mode]] beforehand for a list of encryption options.<br />
<br />
==== Unlock LUKS container ====<br />
<br />
Now follow [[dm-crypt/Device encryption#Unlocking/Mapping LUKS partitions with the device mapper]] to unlock the LUKS container and map it.<br />
<br />
==== Format mapped device ====<br />
<br />
Proceed to format the mapped device as described in [[Btrfs#File system on a single device]], where {{ic|''/dev/partition''}} is the name of the mapped device (i.e., {{ic|cryptroot}}) and '''not''' {{ic|/dev/sda2}}.<br />
<br />
==== Mount mapped device ====<br />
<br />
Finally, [[mount]] the now-formatted mapped device (i.e., {{ic|/dev/mapper/cryptroot}}) to {{ic|/mnt}}.<br />
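<br />
Condensed, the four subsections above amount to the following transcript for the example layout (a sketch: the cryptsetup options are left at their defaults here; choose yours from the linked pages):<br />
<br />
```
# cryptsetup luksFormat --type luks1 /dev/sda2
# cryptsetup open /dev/sda2 cryptroot
# mkfs.btrfs /dev/mapper/cryptroot
# mount /dev/mapper/cryptroot /mnt
```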
<br />
=== Creating btrfs subvolumes ===<br />
<br />
{{Merge|Btrfs|The subvolume layout is not specific to an encrypted system.}}<br />
<br />
==== Layout ====<br />
<br />
[[Btrfs#Subvolumes|Subvolumes]] will be used to simulate partitions, but other (nested) subvolumes will also be created. Here is a partial representation of what the following example will generate:<br />
<br />
{{Text art|<nowiki><br />
subvolid=5<br />
|<br />
├── @ -|<br />
| contained directories:<br />
| ├── /usr<br />
| ├── /bin<br />
| ├── /.snapshots<br />
| ├── ...<br />
|<br />
├── @home<br />
├── @snapshots<br />
├── @var_log<br />
└── @...<br />
</nowiki>}}<br />
<br />
This section follows the [[Snapper#Suggested filesystem layout]], which is most useful when used with [[Snapper]]. You should also consult [https://btrfs.wiki.kernel.org/index.php/SysadminGuide#Layout Btrfs Wiki SysadminGuide#Layout].<br />
<br />
==== Create top-level subvolumes ====<br />
<br />
Here we are using the convention of prefixing {{ic|@}} to subvolume names that will be used as mount points, and {{ic|@}} will be the subvolume that is mounted as {{ic|/}}.<br />
<br />
Following the [[Btrfs#Creating a subvolume]] article, create subvolumes at {{ic|/mnt/@}}, {{ic|/mnt/@snapshots}}, and {{ic|/mnt/@home}}.<br />
<br />
Create any additional subvolumes you wish to use as mount points now.<br />
<br />
==== Create subvolumes for excludes ====<br />
<br />
Create any subvolumes you do '''not''' want to have snapshots of when taking a snapshot of {{ic|/}}. For example, you probably do not want to take snapshots of {{ic|/var/cache/pacman/pkg}}. These subvolumes will be nested under the {{ic|@}} subvolume, but just as easily could have been created earlier at the same level as {{ic|@}} according to your preference.<br />
<br />
Since the {{ic|@}} subvolume is mounted at {{ic|/mnt}}, you will need to [[Btrfs#Creating a subvolume|create a subvolume]] at {{ic|/mnt/var/cache/pacman/pkg}} for this example. You may have to create any parent directories first.<br />
<br />
Other directories you may wish to do this with are {{ic|/var/abs}}, {{ic|/var/tmp}}, and {{ic|/srv}}.<br />
<br />
==== Mount top-level subvolumes ====<br />
<br />
Unmount the system partition at {{ic|/mnt}}.<br />
<br />
Now mount the newly created {{ic|@}} subvolume which will serve as {{ic|/}} to {{ic|/mnt}} using the {{ic|1=subvol=}} mount option. Assuming the mapped device is named {{ic|cryptroot}}, the command would look like:<br />
<br />
# mount -o compress=zstd,subvol=@ /dev/mapper/cryptroot /mnt<br />
<br />
See [[Btrfs#Mounting subvolumes]] for more details.<br />
<br />
Also mount the other subvolumes to their respective mount points: {{ic|@home}} to {{ic|/mnt/home}} and {{ic|@snapshots}} to {{ic|/mnt/.snapshots}}.<br />
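<br />
Put together, the mounts of this subsection look as follows ({{ic|1=compress=zstd}} is optional, and the subvolume names follow the layout above):<br />
<br />
```
# mount -o compress=zstd,subvol=@ /dev/mapper/cryptroot /mnt
# mkdir -p /mnt/home /mnt/.snapshots
# mount -o compress=zstd,subvol=@home /dev/mapper/cryptroot /mnt/home
# mount -o compress=zstd,subvol=@snapshots /dev/mapper/cryptroot /mnt/.snapshots
```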
<br />
==== Mount ESP ====<br />
<br />
If you prepared an EFI system partition earlier, create its mount point and mount it now.<br />
<br />
{{Note|Btrfs snapshots will exclude {{ic|/efi}}, since it is not a btrfs file system.}}<br />
<br />
At the [[Installation guide#Install essential packages|pacstrap]] installation step, the {{Pkg|btrfs-progs}} package must be installed in addition to the {{Pkg|base}} [[meta package]].<br />
<br />
=== Configuring mkinitcpio ===<br />
<br />
==== Create keyfile ====<br />
<br />
In order for GRUB to open the LUKS partition without having the user enter their passphrase twice, we will use a keyfile embedded in the initramfs. Follow [[dm-crypt/Device encryption#With a keyfile embedded in the initramfs]] making sure to add the key to {{ic|/dev/sda2}} at the ''luksAddKey'' step.<br />
<br />
==== Edit mkinitcpio.conf ====<br />
<br />
After creating, adding, and embedding the key as described above, add the {{ic|encrypt}} hook to [[mkinitcpio.conf]] as well as any other hooks you require. See [[dm-crypt/System configuration#mkinitcpio]] for detailed information.<br />
<br />
{{Tip|You may want to add {{ic|1=BINARIES=(/usr/bin/btrfs)}} to your {{ic|mkinitcpio.conf}}. See the [[Btrfs#Corruption recovery]] article.}}<br />
<br />
=== Configuring the boot loader ===<br />
<br />
Install [[GRUB]] to {{ic|/dev/sda}}. Then, edit {{ic|/etc/default/grub}} as instructed in the [[GRUB#Additional arguments]], [[GRUB#Encrypted /boot]] and [[dm-crypt/System configuration#Using encrypt hook]], following both the instructions for an encrypted root and boot partition. Finally, generate the GRUB configuration file.<br />
<br />
=== Configuring swap ===<br />
<br />
If you created a partition to be used for encrypted swap, now is the time to configure it. Follow the instructions at [[dm-crypt/Swap encryption]].<br />
<br />
== Root on ZFS ==<br />
<br />
Root on [[ZFS]] can be configured to encrypt everything except the boot loader. See the [https://openzfs.github.io/openzfs-docs/Getting%20Started/Arch%20Linux/Arch%20Linux%20Root%20on%20ZFS.html installation guide] on the OpenZFS page.<br />
<br />
The boot loader can be verified with [[Secure Boot]] on UEFI-based systems.<br />
<br />
See also [[ZFS#Encryption in ZFS using dm-crypt]].</div>Recolichttps://wiki.archlinux.org/index.php?title=GnuPG&diff=657118GnuPG2021-04-01T09:28:23Z<p>Recolic: Add a tip for a common mistake while using gpg --export-ssh-key.</p>
<hr />
<div>[[Category:Encryption]]<br />
[[Category:Email]]<br />
[[Category:GNU]]<br />
[[de:GnuPG]]<br />
[[es:GnuPG]]<br />
[[ja:GnuPG]]<br />
[[ko:GnuPG]]<br />
[[ru:GnuPG]]<br />
[[pl:GnuPG]]<br />
[[zh-hans:GnuPG]]<br />
[[zh-hant:GnuPG]]<br />
{{Related articles start}}<br />
{{Related|pacman/Package signing}}<br />
{{Related|Data-at-rest encryption}}<br />
{{Related|List of applications/Security#Encryption, signing, steganography}}<br />
{{Related articles end}}<br />
<br />
According to the [https://www.gnupg.org/ official website]:<br />
<br />
:GnuPG is a complete and free implementation of the [http://openpgp.org/about/ OpenPGP] standard as defined by [https://tools.ietf.org/html/rfc4880 RFC4880] (also known as PGP). GnuPG allows you to encrypt and sign your data and communications; it features a versatile key management system, along with access modules for all kinds of public key directories. GnuPG, also known as GPG, is a command line tool with features for easy integration with other applications. A wealth of frontend applications and libraries are available. GnuPG also provides support for S/MIME and Secure Shell (ssh).<br />
<br />
== Installation ==<br />
<br />
[[Install]] the {{Pkg|gnupg}} package.<br />
<br />
This will also install {{Pkg|pinentry}}, a collection of simple PIN or passphrase entry dialogs which GnuPG uses for passphrase entry. The shell script {{ic|/usr/bin/pinentry}} determines which ''pinentry'' dialog is used, in the order described at [[#pinentry]].<br />
<br />
If you want to use a graphical frontend or program that integrates with GnuPG, see [[List of applications/Security#Encryption, signing, steganography]].<br />
<br />
== Configuration ==<br />
<br />
=== Directory location ===<br />
<br />
{{ic|$GNUPGHOME}} is used by GnuPG to point to the directory where its configuration files are stored. By default {{ic|$GNUPGHOME}} is not set and your {{ic|$HOME}} is used instead; thus, you will find a {{ic|~/.gnupg}} directory right after installation. <br />
<br />
To change the default location, either run gpg with {{ic|$ gpg --homedir ''path/to/directory''}} or set the {{ic|GNUPGHOME}} [[environment variable]].<br />
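<br />
For example, to point GnuPG at an alternative directory for the current shell (the path below is purely illustrative):<br />
<br />
```shell
export GNUPGHOME=/tmp/gnupg-demo   # hypothetical location
mkdir -p "$GNUPGHOME"
chmod 700 "$GNUPGHOME"             # gpg warns if the directory is more permissive
# Every gpg invocation in this shell now uses the new directory, e.g.:
#   gpg --list-keys
stat -c '%a' "$GNUPGHOME"   # → 700
```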
<br />
=== Configuration files ===<br />
<br />
The default configuration files are {{ic|~/.gnupg/gpg.conf}} and {{ic|~/.gnupg/dirmngr.conf}}. <br />
<br />
By default, the gnupg directory has its [[permissions]] set to {{ic|700}} and the files it contains have their permissions set to {{ic|600}}. Only the owner of the directory has permission to read, write, and access the files. This is for security purposes and should not be changed. In case this directory or any file inside it does not follow this security measure, you will get warnings about unsafe file and home directory permissions.<br />
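<br />
If you see such warnings, resetting the permissions is enough. Demonstrated here on a scratch directory so the sketch can be run safely; substitute your real {{ic|~/.gnupg}}:<br />
<br />
```shell
demo=/tmp/demo-gnupg                        # stand-in for ~/.gnupg
mkdir -p "$demo" && touch "$demo/gpg.conf"
find "$demo" -type d -exec chmod 700 {} +   # directories: owner-only access
find "$demo" -type f -exec chmod 600 {} +   # files: owner read/write only
stat -c '%a' "$demo" "$demo/gpg.conf"       # → 700, then 600
```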
<br />
Append to these files any long options you want. Do not write the two dashes, but simply the name of the option and required arguments. You will find skeleton files in {{ic|/usr/share/doc/gnupg/}}. These files are copied to {{ic|~/.gnupg}} the first time gpg is run if they do not exist there. Other examples are found in [[#See also]].<br />
<br />
Additionally, [[pacman]] uses a different set of configuration files for package signature verification. See [[Pacman/Package signing]] for details.<br />
<br />
=== Default options for new users ===<br />
<br />
If you want to set up some default options for new users, put configuration files in {{ic|/etc/skel/.gnupg/}}. When a new user is added to the system, the files from this directory will be copied to their GnuPG home directory. There is also a simple script called ''addgnupghome'' which you can use to create new GnuPG home directories for existing users:<br />
<br />
# addgnupghome user1 user2<br />
<br />
This will create the respective {{ic|/home/user1/.gnupg/}} and {{ic|/home/user2/.gnupg/}} directories and copy the files from the skeleton directory into them. Users with an existing GnuPG home directory are simply skipped.<br />
<br />
== Usage ==<br />
{{Note|<br />
* Whenever a ''{{ic|user-id}}'' is required in a command, it can be specified with your key ID, fingerprint, a part of your name or email address, etc. GnuPG is flexible on this.<br />
* Whenever a {{ic|''key-id''}} is needed, it can be found adding the {{ic|1=--keyid-format=long}} flag to the command. To show the master secret key for example, run {{ic|1=gpg --list-secret-keys --keyid-format=long ''user-id''}}, the ''key-id'' is the hexadecimal hash provided on the same line as ''sec''.<br />
}}<br />
=== Create a key pair ===<br />
<br />
Generate a key pair by typing in a terminal:<br />
<br />
$ gpg --full-gen-key<br />
<br />
Optionally add the {{ic|--expert}} option to access more ciphers, in particular the newer ECC ciphers ([[Wikipedia:Elliptic-curve cryptography]]).<br />
<br />
The command will prompt for answers to several questions. For general use most people will want: <br />
<br />
* The default ''RSA and RSA'' for sign and encrypt keys.<br />
* A keysize of the default 3072 value. A larger keysize of 4096 "gives us almost nothing, while costing us quite a lot" (see [https://www.gnupg.org/faq/gnupg-faq.html#no_default_of_rsa4096 why doesn’t GnuPG default to using RSA-4096]).<br />
* An expiration date: a period of one year is good enough for the average user. This way even if access is lost to the keyring, it will allow others to know that it is no longer valid. At a later stage, if necessary, the expiration date can be extended without having to re-issue a new key.<br />
* Your name and email address. You can add multiple identities to the same key later (''e.g.'', if you have multiple email addresses you want to associate with this key).<br />
* ''no'' optional comment. Since the semantics of the comment field are [https://lists.gnupg.org/pipermail/gnupg-devel/2015-July/030150.html not well-defined], it has limited value for identification.<br />
* A secure passphrase, find some guidelines in [[Security#Choosing secure passwords]].<br />
<br />
{{Note|The name and email address you enter here will be seen by anybody who imports your key.}}<br />
<br />
{{Tip|The simpler {{ic|--gen-key}} option uses default parameters for the key cipher, size and expiry and only asks for ''real name'' and ''email address''.}}<br />
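For scripted setups, key generation can also run unattended with {{ic|--quick-generate-key}}. The following is a minimal sketch, assuming GnuPG >= 2.1; the identity and the throwaway {{ic|GNUPGHOME}} are placeholders:<br />
<br />
```shell
# Sketch: unattended key generation (assumes GnuPG >= 2.1).
# 'Alice <alice@example.com>' and the temporary home are placeholders.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Alice <alice@example.com>' default default never
gpg --list-keys alice@example.com
```
<br />
The empty passphrase is only there so the sketch runs without a pinentry prompt; for a real key, use a secure passphrase as described above.<br />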
<br />
=== List keys ===<br />
<br />
To list keys in your public key ring:<br />
<br />
$ gpg --list-keys<br />
<br />
To list keys in your secret key ring:<br />
<br />
$ gpg --list-secret-keys<br />
<br />
=== Export your public key ===<br />
<br />
GnuPG's main usage is to ensure confidentiality of exchanged messages via public-key cryptography. With it each user distributes the public key of their keyring, which can be used by others to encrypt messages to the user. The private key must ''always'' be kept private, otherwise confidentiality is broken. See [[Wikipedia:Public-key cryptography]] for examples about the message exchange. <br />
<br />
So, in order for others to send encrypted messages to you, they need your public key. <br />
<br />
To generate an ASCII version of a user's public key to file {{ic|''public.key''}} (e.g. to distribute it by e-mail):<br />
<br />
$ gpg --export --armor --output ''public.key'' ''user-id''<br />
<br />
Alternatively, or in addition, you can [[#Use a keyserver]] to share your key. <br />
<br />
{{Tip|<br />
* Add {{ic|--no-emit-version}} to avoid printing the version number, or add the corresponding setting to your configuration file.<br />
* You can omit the {{ic|user-id}} to export all public keys within your keyring. This is useful if you want to share multiple identities at once, or for importing in another application, e.g. [[Thunderbird#Use_OpenPGP_with_external_GnuPG|Thunderbird]].<br />
}}<br />
<br />
=== Import a public key ===<br />
<br />
In order to encrypt messages to others, as well as verify their signatures, you need their public key. To import a public key with file name {{ic|''public.key''}} to your public key ring:<br />
<br />
$ gpg --import ''public.key''<br />
<br />
Alternatively, [[#Use a keyserver]] to find a public key.<br />
<br />
If you wish to import a key ID to install a specific Arch Linux package, see [[pacman/Package signing#Managing the keyring]] and [[Makepkg#Signature checking]].<br />
<br />
=== Use a keyserver ===<br />
==== Sending keys ====<br />
You can register your key with a public PGP key server, so that others can retrieve it without having to contact you directly:<br />
<br />
$ gpg --send-keys ''key-id''<br />
<br />
{{Warning|Once a key has been submitted to a keyserver, it cannot be deleted from the server. The reason is explained in the [https://pgp.mit.edu/faq.html MIT PGP Public Key Server FAQ].}}<br />
{{Note|The associated email address, once published publicly, could be the target of spammers and in this case anti-spam filtering may be necessary.}}<br />
<br />
==== Searching and receiving keys ====<br />
To find out details of a key on the keyserver, without importing it, do:<br />
<br />
$ gpg --search-keys ''user-id''<br />
<br />
To import a key from a key server:<br />
<br />
$ gpg --recv-keys ''key-id''<br />
<br />
{{Warning|<br />
* You should verify the authenticity of the retrieved public key by comparing its fingerprint with one that the owner published on an independent source(s) (e.g., contacting the person directly). See [[Wikipedia:Public key fingerprint]] for more information.<br />
* It is recommended to use the long key ID or the full fingerprint when receiving a key. Short key IDs are subject to collisions, and ''all'' keys matching the given short ID will be imported; see [https://lkml.org/lkml/2016/8/15/445 fake keys found in the wild] for an example.<br />
}}<br />
<br />
{{Tip|Adding {{ic|auto-key-retrieve}} to {{ic|gpg.conf}} will automatically fetch keys from the key server as needed, but this can be considered a '''privacy violation'''; see "web bug" in {{man|1|gpg}}.}}<br />
<br />
==== Key servers ====<br />
<br />
The most common keyservers are:<br />
<br />
* [https://sks-keyservers.net SKS Keyserver Pool]: federated, no verification, keys cannot be deleted.<br />
* [https://keys.mailvelope.com Mailvelope Keyserver]: central, verification of email IDs, keys can be deleted.<br />
* [https://keys.openpgp.org keys.openpgp.org]: central, verification of email IDs, keys can be deleted, no third-party signatures (i.e. no Web of Trust support).<br />
<br />
More are listed at [[Wikipedia:Key server (cryptographic)#Keyserver examples]].<br />
<br />
An alternative key server can be specified with the {{ic|keyserver}} option in one of the [[#Configuration files]], for instance:<br />
{{hc|~/.gnupg/dirmngr.conf|<br />
keyserver hkp://pool.sks-keyservers.net<br />
}}<br />
Temporarily using another server is handy when the regular one does not work as it should, for example:<br />
<br />
$ gpg --keyserver hkps://keys.openpgp.org/ --search-keys 931FF8E79F0876134EDDBDCCA87FF9DF48BF1C90<br />
<br />
{{Tip|<br />
* If receiving fails with the message {{ic|gpg: keyserver receive failed: General error}}, and you use the default hkps keyserver pool, make sure to set the HKPS pool verification certificate with {{ic|hkp-cacert /usr/share/gnupg/sks-keyservers.netCA.pem}} in your {{ic|dirmngr.conf}} and kill the old dirmngr process.<br />
* If your network blocks connection to port 11371 used for hkp, you may need to specify port 80, i.e. {{ic|pool.sks-keyservers.net:80}}. Alternatively, some HKPS servers provide access through port 443, for example, {{ic|hkps://hkps.pool.sks-keyservers.net:443}}.<br />
* If receiving fails with the message {{ic|gpg: keyserver receive failed: Connection refused}}, try using a different DNS server.<br />
* You can connect to the keyserver over [[Tor]] with [[Tor#Torsocks]]. Or using the {{ic|--use-tor}} command line option. See [https://gnupg.org/blog/20151224-gnupg-in-november-and-december.html] for more information.<br />
* You can connect to a keyserver using a proxy by setting the {{ic|http_proxy}} [[environment variable]] and setting {{ic|honor-http-proxy}} in {{ic|dirmngr.conf}}. Alternatively, set {{ic|http-proxy ''host[:port]''}} in the configuration file to override the environment variable of the same name. [[Restart]] the {{ic|dirmngr.service}} [[systemd/User|user service]] for the changes to take effect.}}<br />
<br />
=== Web Key Directory ===<br />
<br />
The Web Key Service (WKS) protocol is a new [https://datatracker.ietf.org/doc/draft-koch-openpgp-webkey-service/ standard] for key distribution, where the email domain provides its own key server called [https://wiki.gnupg.org/WKD Web Key Directory (WKD)]. When encrypting to an email address (e.g. {{ic|user@example.com}}), GnuPG (>=2.1.16) will query the domain ({{ic|example.com}}) via HTTPS for the public OpenPGP key if it is not already in the local keyring. The option {{ic|auto-key-locate}} will locate a key using the WKD protocol if there is no key on the local keyring for this email address.<br />
<br />
 $ gpg --auto-key-locate local,wkd --recipient ''user@example.com'' --encrypt ''doc''<br />
<br />
See the [https://wiki.gnupg.org/WKD#Implementations GnuPG Wiki] for a list of email providers that support WKD. If you control the domain of your email address yourself, you can follow [https://wiki.gnupg.org/WKDHosting this guide] to enable WKD for your domain. To check if your key can be found in the WKD you can use [https://metacode.biz/openpgp/web-key-directory this webinterface].<br />
<br />
=== Encrypt and decrypt ===<br />
<br />
==== Asymmetric ====<br />
<br />
You need to [[#Import a public key]] of a user before encrypting (option {{ic|-e}}/{{ic|--encrypt}}) a file or message to that recipient (option {{ic|-r}}/{{ic|--recipient}}). Additionally you need to [[#Create a key pair]] if you have not already done so.<br />
<br />
To encrypt a file with the name ''doc'', use:<br />
<br />
$ gpg --recipient ''user-id'' --encrypt ''doc''<br />
<br />
To decrypt (option {{ic|-d}}/{{ic|--decrypt}}) a file with the name ''doc''.gpg encrypted with your public key, use:<br />
<br />
$ gpg --output ''doc'' --decrypt ''doc''.gpg<br />
<br />
''gpg'' will prompt you for your passphrase and then decrypt and write the data from ''doc''.gpg to ''doc''. If you omit the {{ic|-o}}/{{ic|--output}} option, ''gpg'' will write the decrypted data to stdout.<br />
<br />
{{Tip|<br />
* Add {{ic|--armor}} to encrypt a file using ASCII armor, suitable for copying and pasting a message in text format.<br />
* Use {{ic|-R ''user-id''}} or {{ic|--hidden-recipient ''user-id''}} instead of {{ic|-r}} to not put the recipient key IDs in the encrypted message. This helps to hide the receivers of the message and is a limited countermeasure against traffic analysis.<br />
* Add {{ic|--no-emit-version}} to avoid printing the version number, or add the corresponding setting to your configuration file.<br />
* You can use GnuPG to encrypt your sensitive documents by using your own user-id as recipient or by using the {{ic|--default-recipient-self}} flag; however, you can only do this one file at a time, although you can always tarball various files and then encrypt the tarball. See also [[Data-at-rest encryption#Available methods]] if you want to encrypt directories or a whole file-system.}}<br />
<br />
==== Symmetric ====<br />
<br />
Symmetric encryption does not require the generation of a key pair and can be used to simply encrypt data with a passphrase. Simply use {{ic|-c}}/{{ic|--symmetric}} to perform symmetric encryption:<br />
<br />
$ gpg -c ''doc''<br />
<br />
The following example:<br />
<br />
* Encrypts {{ic|''doc''}} with a symmetric cipher using a passphrase<br />
* Uses the AES-256 cipher algorithm to encrypt the passphrase<br />
* Uses the SHA-512 digest algorithm to mangle the passphrase<br />
* Mangles the passphrase for 65536 iterations<br />
<br />
$ gpg -c --s2k-cipher-algo AES256 --s2k-digest-algo SHA512 --s2k-count 65536 ''doc''<br />
<br />
To decrypt a symmetrically encrypted {{ic|''doc''.gpg}} using a passphrase and output decrypted contents into the same directory as {{ic|''doc''}} do:<br />
<br />
$ gpg --output ''doc'' --decrypt ''doc''.gpg<br />
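To check that the symmetric round trip works end to end, the following non-interactive sketch can be used. Passing the passphrase with {{ic|--passphrase}} bypasses pinentry and is insecure; it is shown here for illustration only, and the file names are placeholders:<br />
<br />
```shell
# Sketch: non-interactive symmetric encrypt/decrypt round trip.
# Supplying the passphrase on the command line is insecure; done here
# only so the example runs without a pinentry prompt.
export GNUPGHOME="$(mktemp -d)"; chmod 700 "$GNUPGHOME"
work="$(mktemp -d)"
printf 'hello world\n' > "$work/doc"
gpg --batch --pinentry-mode loopback --passphrase 'example pass' \
    --symmetric --output "$work/doc.gpg" "$work/doc"
gpg --batch --pinentry-mode loopback --passphrase 'example pass' \
    --output "$work/doc.out" --decrypt "$work/doc.gpg"
cmp "$work/doc" "$work/doc.out"
```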
<br />
==== Directory ====<br />
<br />
Encrypting/decrypting a directory can be done with {{man|1|gpgtar}}.<br />
<br />
Encrypt:<br />
$ gpgtar -c -o ''dir''.gpg ''dir''<br />
<br />
Decrypt:<br />
$ gpgtar -d ''dir''.gpg<br />
<br />
== Key maintenance ==<br />
<br />
=== Backup your private key ===<br />
<br />
To backup your private key do the following:<br />
<br />
$ gpg --export-secret-keys --armor --output ''privkey.asc'' ''user-id''<br />
<br />
Note the above command will require that you enter the passphrase for the key. This is because otherwise anyone who gains access to the exported file would be able to decrypt messages and sign documents as if they were you, ''without'' needing to know your passphrase. <br />
<br />
{{Warning|The passphrase is usually the weakest link in protecting your secret key. Place the private key backup in a safe place on a different system or device, such as a locked container or an encrypted drive. It may be your only way to regain control of your keyring after, for example, a drive failure or theft.}}<br />
<br />
To import the backup of your private key:<br />
<br />
$ gpg --import ''privkey.asc''<br />
<br />
{{Tip|[[Paperkey]] can be used to export private keys as human readable text or machine readable barcodes that can be printed on paper and archived.}}<br />
<br />
=== Backup your revocation certificate ===<br />
<br />
Revocation certificates are automatically generated for newly generated keys. These are by default located in {{ic|~/.gnupg/openpgp-revocs.d/}}. The filename of the certificate is the fingerprint of the key it will revoke.<br />
The revocation certificates can also be generated manually by the user later using:<br />
<br />
$ gpg --gen-revoke --armor --output ''revcert.asc'' ''user-id''<br />
<br />
This certificate can be used to [[#Revoke a key]] if it is ever lost or compromised. The backup will be useful if you no longer have access to the secret key and are therefore unable to generate a new revocation certificate with the above command. It is short enough to be printed out and typed in by hand if necessary.<br />
<br />
{{Warning|Anyone with access to the revocation certificate can revoke the key publicly, this action cannot be undone. Protect your revocation certificate like you protect your secret key.}}<br />
<br />
=== Edit your key ===<br />
<br />
Running the {{ic|gpg --edit-key ''user-id''}} command will present a menu which enables you to do most of your key management related tasks.<br />
<br />
Type {{ic|help}} in the edit key sub menu to show the complete list of commands. Some useful ones:<br />
<br />
> passwd # change the passphrase<br />
 > clean # compact any user ID that is no longer usable (e.g. revoked or expired)<br />
> revkey # revoke a key<br />
> addkey # add a subkey to this key<br />
> expire # change the key expiration time<br />
> adduid # add additional names, comments, and email addresses<br />
> addphoto # add photo to key (must be JPG, 240x288 recommended, enter full path to image when prompted)<br />
<br />
{{Tip|If you have multiple email accounts you can add each one of them as an identity, using {{ic|adduid}} command. You can then set your favourite one as {{ic|primary}}.}}<br />
<br />
=== Exporting subkey ===<br />
<br />
If you plan to use the same key across multiple devices, you may want to strip out your master key and only keep the bare minimum encryption subkey on less secure systems.<br />
<br />
First, find out which subkey you want to export.<br />
<br />
$ gpg --list-secret-keys --with-subkey-fingerprint<br />
<br />
Select only that subkey to export.<br />
<br />
 $ gpg -a --export-secret-subkeys ''[subkey id]''! > /tmp/subkey.gpg<br />
<br />
{{Warning|If you forget to add the !, all of your subkeys will be exported.}}<br />
<br />
At this point you could stop, but it is most likely a good idea to change the passphrase as well. Import the key into a temporary folder. <br />
<br />
$ gpg --homedir /tmp/gpg --import /tmp/subkey.gpg<br />
$ gpg --homedir /tmp/gpg --edit-key ''user-id''<br />
> passwd<br />
> save<br />
$ gpg --homedir /tmp/gpg -a --export-secret-subkeys ''[subkey id]''! > /tmp/subkey.altpass.gpg<br />
<br />
{{Note|You will get a warning that the master key was not available and that its password was not changed; this can safely be ignored, since the subkey's passphrase ''was'' changed.}}<br />
<br />
At this point, you can now use {{ic|/tmp/subkey.altpass.gpg}} on your other devices.<br />
<br />
=== Extending expiration date ===<br />
<br />
{{Warning|'''Never''' delete your expired or revoked subkeys unless you have a good reason. Doing so will cause you to lose the ability to decrypt files encrypted with the old subkey. Please '''only''' delete expired or revoked keys from other users to clean your keyring.}}<br />
<br />
It is good practice to set an expiration date on your subkeys, so that if you lose access to the key (e.g. you forget the passphrase), the key will not continue to be used indefinitely by others. When the key expires, it is relatively straightforward to extend the expiration date:<br />
<br />
$ gpg --edit-key ''user-id''<br />
> expire<br />
<br />
You will be prompted for a new expiration date, as well as the passphrase for your secret key, which is used to sign the new expiration date.<br />
<br />
Repeat this for any further subkeys that have expired:<br />
<br />
> key 1<br />
> expire<br />
<br />
Finally, save the changes and quit:<br />
<br />
> save<br />
<br />
Publish the updated key to a keyserver:<br />
<br />
$ gpg --keyserver keyserver.ubuntu.com --send-keys ''key-id''<br />
<br />
Alternatively, if you use this key on multiple computers, you can export the public key (with new signed expiration dates) and import it on those machines:<br />
<br />
$ gpg --export --output pubkey.gpg ''user-id''<br />
$ gpg --import pubkey.gpg<br />
<br />
There is no need to re-export your secret key or update your backups: the master secret key itself never expires, and the signature of the expiration date left on the public key and subkeys is all that is needed.<br />
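Newer GnuPG versions (>= 2.1.17, an assumption worth checking against {{man|1|gpg}}) can also extend the expiration date non-interactively with {{ic|--quick-set-expire}}. A sketch using a throwaway key and home directory:<br />
<br />
```shell
# Sketch: extend a key's expiry non-interactively with --quick-set-expire
# (assumes GnuPG >= 2.1.17). Uses a throwaway key as a placeholder.
export GNUPGHOME="$(mktemp -d)"; chmod 700 "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Test <test@example.com>' default default never
# Extract the primary key fingerprint from machine-readable output.
fpr="$(gpg --list-keys --with-colons test@example.com \
    | awk -F: '$1 == "fpr" { print $10; exit }')"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-set-expire "$fpr" 1y
gpg --list-keys test@example.com
```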
<br />
=== Rotating subkeys ===<br />
<br />
{{Warning|'''Never''' delete your expired or revoked subkeys unless you have a good reason. Doing so will cause you to lose the ability to decrypt files encrypted with the old subkey. Please '''only''' delete expired or revoked keys from other users to clean your keyring.}}<br />
<br />
Alternatively, if you prefer to stop using subkeys entirely once they have expired, you can create new ones. Do this a few weeks in advance to allow others to update their keyring.<br />
<br />
{{Tip|You do not need to create a new key simply because it is expired. You can extend the expiration date, see the section [[#Extending expiration date]].}}<br />
<br />
Create new subkey (repeat for both signing and encrypting key)<br />
<br />
$ gpg --edit-key ''user-id''<br />
> addkey<br />
<br />
And answer the following questions it asks (see [[#Create a key pair]] for suggested settings).<br />
<br />
Save changes<br />
<br />
> save<br />
<br />
Publish the updated key to a keyserver:<br />
<br />
 $ gpg --keyserver pgp.mit.edu --send-keys ''key-id''<br />
<br />
You will also need to export a fresh copy of your secret keys for backup purposes. See the section [[#Backup your private key]] for details on how to do this.<br />
<br />
{{Tip|Revoking expired subkeys is unnecessary and arguably bad form. If you are constantly revoking keys, it may cause others to lack confidence in you.}}<br />
<br />
=== Revoke a key ===<br />
Key revocation should be performed if the key is compromised, superseded, no longer used, or you forget your passphrase. This is done by merging the key with the revocation certificate of the key.<br />
<br />
If you no longer have access to your key pair, first [[#Import a public key]] to import your own key.<br />
Then, to revoke the key, import the file saved in [[#Backup your revocation certificate]]:<br />
<br />
$ gpg --import ''revcert.asc''<br />
<br />
Now the revocation needs to be made public. [[#Use a keyserver]] to send the revoked key to a public PGP server if you used one in the past, otherwise, export the revoked key to a file and distribute it to your communication partners.<br />
<br />
== Signatures ==<br />
<br />
Signatures certify and timestamp documents. If the document is modified, verification of the signature will fail. Unlike encryption which uses public keys to encrypt a document, signatures are created with the user's private key. The recipient of a signed document then verifies the signature using the sender's public key.<br />
<br />
=== Create a signature ===<br />
<br />
==== Sign a file ====<br />
<br />
To sign a file use the {{ic|-s}}/{{ic|--sign}} flag:<br />
<br />
$ gpg --output ''doc''.sig --sign ''doc''<br />
<br />
{{ic|''doc''.sig}} contains both the compressed content of the original file {{ic|''doc''}} and the signature in a binary format, but the file is not encrypted. However, you can combine signing with [[#Encrypt and decrypt|encrypting]].<br />
<br />
==== Clearsign a file or message ====<br />
<br />
To sign a file without compressing it into binary format use:<br />
<br />
$ gpg --output ''doc''.sig --clearsign ''doc''<br />
<br />
Here both the content of the original file {{ic|''doc''}} and the signature are stored in human-readable form in {{ic|''doc''.sig}}.<br />
<br />
==== Make a detached signature ====<br />
<br />
To create a separate signature file to be distributed separately from the document or file itself, use the {{ic|--detach-sig}} flag:<br />
<br />
$ gpg --output ''doc''.sig --detach-sig ''doc''<br />
<br />
Here the signature is stored in {{ic|''doc''.sig}}, but the contents of {{ic|''doc''}} are not stored in it. This method is often used in distributing software projects to allow users to verify that the program has not been modified by a third party.<br />
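As an end-to-end illustration, the following sketch signs a file with a throwaway key and verifies the detached signature; the identity and file names are placeholders:<br />
<br />
```shell
# Sketch: create and verify a detached signature with a throwaway key.
export GNUPGHOME="$(mktemp -d)"; chmod 700 "$GNUPGHOME"
work="$(mktemp -d)"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Test <test@example.com>' default default never
printf 'release contents\n' > "$work/doc"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --output "$work/doc.sig" --detach-sig "$work/doc"
gpg --verify "$work/doc.sig" "$work/doc"
```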
<br />
=== Verify a signature ===<br />
<br />
To verify a signature use the {{ic|--verify}} flag:<br />
<br />
$ gpg --verify ''doc''.sig<br />
<br />
where {{ic|''doc''.sig}} is the signed file containing the signature you wish to verify.<br />
<br />
If you are verifying a detached signature, both the signed data file and the signature file must be present when verifying. For example, to verify Arch Linux's latest iso you would do:<br />
<br />
$ gpg --verify archlinux-''version''.iso.sig<br />
<br />
where {{ic|archlinux-''version''.iso}} must be located in the same directory.<br />
<br />
You can also specify the signed data file with a second argument:<br />
<br />
$ gpg --verify archlinux-''version''.iso.sig ''/path/to/''archlinux-''version''.iso<br />
<br />
If a file has been encrypted in addition to being signed, simply [[#Encrypt and decrypt|decrypt]] the file and its signature will also be verified.<br />
<br />
== gpg-agent ==<br />
<br />
''gpg-agent'' is mostly used as daemon to request and cache the password for the keychain. This is useful if GnuPG is used from an external program like a mail client. {{Pkg|gnupg}} comes with [[systemd/User|systemd user]] sockets which are enabled by default. These sockets are {{ic|gpg-agent.socket}}, {{ic|gpg-agent-extra.socket}}, {{ic|gpg-agent-browser.socket}}, {{ic|gpg-agent-ssh.socket}}, and {{ic|dirmngr.socket}}.<br />
<br />
* The main {{ic|gpg-agent.socket}} is used by ''gpg'' to connect to the ''gpg-agent'' daemon.<br />
* The intended use for the {{ic|gpg-agent-extra.socket}} on a local system is to set up a Unix domain socket forwarding from a remote system. This enables to use ''gpg'' on the remote system without exposing the private keys to the remote system. See {{man|1|gpg-agent}} for details.<br />
* The {{ic|gpg-agent-browser.socket}} allows web browsers to access the ''gpg-agent'' daemon.<br />
* The {{ic|gpg-agent-ssh.socket}} can be used by [[SSH]] to cache [[SSH keys]] added by the ''ssh-add'' program. See [[#SSH agent]] for the necessary configuration.<br />
* The {{ic|dirmngr.socket}} starts a GnuPG daemon handling connections to keyservers.<br />
<br />
{{Note|If you use non-default GnuPG [[#Directory location]], you will need to [[edit]] all socket files to use the values of {{ic|gpgconf --list-dirs}}.}}<br />
<br />
=== Configuration ===<br />
<br />
gpg-agent can be configured via the {{ic|~/.gnupg/gpg-agent.conf}} file. The configuration options are listed in {{man|1|gpg-agent}}. For example, you can change the cache TTL for unused keys:<br />
<br />
{{hc|~/.gnupg/gpg-agent.conf|<br />
default-cache-ttl 3600<br />
}}<br />
<br />
{{Tip|To cache your passphrase for the whole session, please run the following command:<br />
$ /usr/lib/gnupg/gpg-preset-passphrase --preset XXXXX<br />
<br />
where XXXXX is the keygrip. You can get its value by running {{ic|gpg --with-keygrip -K}}. The passphrase will be cached until {{ic|gpg-agent}} is restarted. If you have set a {{ic|default-cache-ttl}} value, it takes precedence.<br />
}}<br />
<br />
=== Reload the agent ===<br />
<br />
After changing the configuration, reload the agent using ''gpg-connect-agent'':<br />
<br />
$ gpg-connect-agent reloadagent /bye<br />
<br />
The command should print {{ic|OK}}.<br />
<br />
However, in some cases reloading alone may not be sufficient, for example when {{ic|keep-screen}} has been added to the agent configuration.<br />
In that case, first kill the running gpg-agent process and then restart it as explained above.<br />
<br />
=== pinentry ===<br />
<br />
{{ic|gpg-agent}} can be configured via the {{ic|pinentry-program}} stanza to use a particular {{Pkg|pinentry}} user interface when prompting the user for a passphrase. For example:<br />
{{hc|~/.gnupg/gpg-agent.conf|<br />
pinentry-program /usr/bin/pinentry-curses<br />
}}<br />
<br />
There are other pinentry programs that you can choose from - see {{ic|pacman -Ql pinentry {{!}} grep /usr/bin/}}.<br />
<br />
{{Tip|In order to use {{ic|/usr/bin/pinentry-kwallet}} you have to install the {{AUR|kwalletcli}} package.}}<br />
<br />
Remember to [[#Reload the agent|reload the agent]] after making changes to the configuration.<br />
<br />
=== Cache passwords ===<br />
<br />
{{ic|max-cache-ttl}} and {{ic|default-cache-ttl}} define for how many seconds ''gpg-agent'' should cache passwords. To enter a password only once per session, set them to something very high, for instance:<br />
<br />
{{hc|gpg-agent.conf|<br />
max-cache-ttl 60480000<br />
default-cache-ttl 60480000<br />
}}<br />
<br />
For password caching in SSH emulation mode, set {{ic|default-cache-ttl-ssh}} and {{ic|max-cache-ttl-ssh}} instead, for example:<br />
<br />
{{hc|gpg-agent.conf|<br />
default-cache-ttl-ssh 60480000<br />
max-cache-ttl-ssh 60480000<br />
}}<br />
<br />
=== Unattended passphrase ===<br />
<br />
Starting with GnuPG 2.1.0 the use of gpg-agent and pinentry is required, which may break backwards compatibility for passphrases piped in from STDIN using the {{ic|--passphrase-fd 0}} commandline option. In order to have the same type of functionality as the older releases two things must be done:<br />
<br />
First, edit the gpg-agent configuration to allow ''loopback'' pinentry mode:<br />
<br />
{{hc|~/.gnupg/gpg-agent.conf|<br />
allow-loopback-pinentry<br />
}}<br />
<br />
[[#Reload the agent|Reload the agent]] if it is running to let the change take effect.<br />
<br />
Second, either the application needs to be updated to include a commandline parameter to use loopback mode like so:<br />
<br />
$ gpg --pinentry-mode loopback ...<br />
<br />
...or if this is not possible, add the option to the configuration:<br />
<br />
{{hc|~/.gnupg/gpg.conf|<br />
pinentry-mode loopback<br />
}}<br />
<br />
{{Note|The upstream author indicates setting {{ic|pinentry-mode loopback}} in {{ic|gpg.conf}} may break other usage, using the commandline option should be preferred if at all possible. [https://bugs.g10code.com/gnupg/issue1772]}}<br />
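Put together, a fully unattended run might look like the sketch below. Symmetric encryption is used so the example is self-contained; the passphrase file and message are placeholders:<br />
<br />
```shell
# Sketch: unattended operation with loopback pinentry and
# --passphrase-fd 0 (passphrase read from STDIN).
export GNUPGHOME="$(mktemp -d)"; chmod 700 "$GNUPGHOME"
work="$(mktemp -d)"
printf 'secret message\n' > "$work/doc"
printf 'example passphrase' > "$work/pass"   # first line is the passphrase
gpg --batch --pinentry-mode loopback --passphrase-fd 0 \
    --symmetric --output "$work/doc.gpg" "$work/doc" < "$work/pass"
gpg --batch --pinentry-mode loopback --passphrase-fd 0 \
    --output "$work/doc.out" --decrypt "$work/doc.gpg" < "$work/pass"
```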
<br />
=== SSH agent ===<br />
<br />
''gpg-agent'' has OpenSSH agent emulation. If you already use the GnuPG suite, you might consider using its agent to also cache your [[SSH keys]]. Additionally, some users may prefer the PIN entry dialog GnuPG agent provides as part of its passphrase management.<br />
<br />
==== Set SSH_AUTH_SOCK ====<br />
<br />
You have to set {{ic|SSH_AUTH_SOCK}} so that SSH will use ''gpg-agent'' instead of ''ssh-agent''. To make sure each process can find your ''gpg-agent'' instance, regardless of, for example, the type of shell it is a child of, use [[Environment_variables#Using_pam_env|pam_env]].<br />
<br />
{{hc|~/.pam_environment|2=<br />
SSH_AGENT_PID DEFAULT=<br />
SSH_AUTH_SOCK DEFAULT="${XDG_RUNTIME_DIR}/gnupg/S.gpg-agent.ssh"<br />
}}<br />
<br />
{{Note|<br />
* If you set your {{ic|SSH_AUTH_SOCK}} manually (such as in this pam_env example), keep in mind that your socket location may be different if you are using a custom {{ic|GNUPGHOME}}. You can use the following bash example, or change {{ic|SSH_AUTH_SOCK}} to the value of {{ic|gpgconf --list-dirs agent-ssh-socket}}.<br />
* If GNOME Keyring is installed, it is necessary to [[GNOME/Keyring#Disable keyring daemon components|deactivate]] its ssh component. Otherwise, it will overwrite {{ic|SSH_AUTH_SOCK}}.<br />
}}<br />
<br />
Alternatively, depend on Bash. This works for non-standard socket locations as well:<br />
<br />
{{hc|~/.bashrc|2=<br />
unset SSH_AGENT_PID<br />
if [ "${gnupg_SSH_AUTH_SOCK_by:-0}" -ne $$ ]; then<br />
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"<br />
fi<br />
}}<br />
<br />
{{Note|1=The test involving the {{ic|gnupg_SSH_AUTH_SOCK_by}} variable is for the case where the agent is started as {{ic|gpg-agent --daemon /bin/sh}}, in which case the shell inherits the {{ic|SSH_AUTH_SOCK}} variable from the parent, ''gpg-agent'' [http://git.gnupg.org/cgi-bin/gitweb.cgi?p=gnupg.git;a=blob;f=agent/gpg-agent.c;hb=7bca3be65e510eda40572327b87922834ebe07eb#l1307].}}<br />
<br />
==== Configure pinentry to use the correct TTY ====<br />
<br />
Also set {{ic|GPG_TTY}} and refresh the TTY in case the user has switched into an X session, as stated in {{man|1|gpg-agent}}. For example:<br />
<br />
{{hc|~/.bashrc|2=<br />
export GPG_TTY=$(tty)<br />
gpg-connect-agent updatestartuptty /bye >/dev/null<br />
}}<br />
<br />
==== Add SSH keys ====<br />
<br />
Once ''gpg-agent'' is running you can use ''ssh-add'' to approve keys, following the same steps as for [[SSH keys#ssh-agent|ssh-agent]]. The list of approved keys is stored in the {{ic|~/.gnupg/sshcontrol}} file. <br />
<br />
Once your key is approved, you will get a ''pinentry'' dialog every time your passphrase is needed. For password caching see [[#Cache passwords]].<br />
<br />
==== Using a PGP key for SSH authentication ====<br />
<br />
You can also use your PGP key as an SSH key. This requires a key with the {{ic|Authentication}} capability (see [[#Custom capabilities]]). There are various benefits gained by using a PGP key for SSH authentication, including:<br />
<br />
* Reduced key maintenance, as you will no longer need to maintain an SSH key.<br />
* The ability to store the authentication key on a smartcard. GnuPG will automatically detect the key when the card is available, and add it to the agent (check with {{ic|ssh-add -l}} or {{ic|ssh-add -L}}). The comment for the key should be something like: {{ic|openpgp:''key-id''}} or {{ic|cardno:''card-id''}}. <br />
<br />
To retrieve the public key part of your GPG/SSH key, run {{ic|gpg --export-ssh-key ''gpg-key''}}. If your key is authentication-capable but this command still fails with "Unusable public key", add a {{ic|!}} suffix ([https://dev.gnupg.org/T2957]). <br />
<br />
Unless you have your GPG key on a keycard, you need to add your key to {{ic|$GNUPGHOME/sshcontrol}} to be recognized as a SSH key. If your key is on a keycard, its keygrip is added to {{ic|sshcontrol}} implicitly. If not, get the keygrip of your key this way:<br />
<br />
{{hc|$ gpg --list-keys --with-keygrip|2=<br />
sub rsa4096 2018-07-25 [A]<br />
Keygrip = ''1531C8084D16DC4C36911F1585AF0ACE7AAFD7E7''<br />
}}<br />
<br />
Then edit {{ic|sshcontrol}} like this. Adding the keygrip is a one-time action; you will not need to edit the file again, unless you are adding additional keys.<br />
<br />
{{hc|$GNUPGHOME/sshcontrol|<br />
''1531C8084D16DC4C36911F1585AF0ACE7AAFD7E7''<br />
}}<br />
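The keygrip extraction can also be scripted; a minimal sketch, where the sample text stands in for your actual {{ic|gpg --list-keys --with-keygrip}} output:<br />

```shell
# Sample output standing in for: gpg --list-keys --with-keygrip
sample='sub   rsa4096 2018-07-25 [A]
      Keygrip = 1531C8084D16DC4C36911F1585AF0ACE7AAFD7E7'

# Print the keygrip of the authentication ([A]) subkey; this value is
# what belongs in $GNUPGHOME/sshcontrol
keygrip=$(printf '%s\n' "$sample" | awk '/\[A\]/ { getline; print $3 }')
echo "$keygrip"
```

With real output, append the result to the file with {{ic|>> $GNUPGHOME/sshcontrol}}.<br />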
<br />
== Smartcards ==<br />
<br />
GnuPG uses ''scdaemon'' as an interface to your smartcard reader; refer to the [[man page]] {{man|1|scdaemon}} for details.<br />
<br />
=== GnuPG only setups ===<br />
<br />
{{Note|To allow ''scdaemon'' direct access to USB smartcard readers, the optional dependency {{Pkg|libusb-compat}} must be installed.}}<br />
<br />
If you do not plan to use cards other than those based on GnuPG, you should check the {{Ic|reader-port}} parameter in {{ic|~/.gnupg/scdaemon.conf}}. The value '0' refers to the first available serial port reader and the value '32768' (default) refers to the first USB reader.<br />
<br />
=== GnuPG with pcscd (PCSC Lite) ===<br />
<br />
{{man|8|pcscd}} is a daemon which handles access to smartcards (SCard API). If GnuPG's ''scdaemon'' fails to connect to the smartcard directly (e.g. by using its integrated CCID support), it will fall back to finding a smartcard using the PCSC Lite driver.<br />
<br />
To use pcscd, [[install]] {{Pkg|pcsclite}} and {{Pkg|ccid}}. Then [[start]] and/or [[enable]] {{ic|pcscd.service}}. Alternatively, start and/or enable {{ic|pcscd.socket}} to activate the daemon only when needed.<br />
<br />
==== Always use pcscd ====<br />
<br />
If you are using any smartcard with an OpenSC driver (e.g. ID cards from some countries), you should pay some attention to GnuPG's configuration. Out of the box you might receive a message like this when using {{Ic|gpg --card-status}}:<br />
<br />
gpg: selecting openpgp failed: ec=6.108<br />
<br />
By default, ''scdaemon'' will try to connect directly to the device. This connection will fail if the reader is being used by another process, for example the pcscd daemon used by OpenSC. To cope with this situation, use the same underlying driver as OpenSC so that the two can work together. To point ''scdaemon'' at pcscd, remove {{Ic|reader-port}} from {{ic|~/.gnupg/scdaemon.conf}}, specify the location of the {{ic|libpcsclite.so}} library, and disable the integrated CCID support to make sure pcscd is used:<br />
<br />
{{hc|~/.gnupg/scdaemon.conf|<nowiki><br />
pcsc-driver /usr/lib/libpcsclite.so<br />
card-timeout 5<br />
disable-ccid<br />
</nowiki>}}<br />
<br />
Please check {{man|1|scdaemon}} if you do not use OpenSC.<br />
<br />
==== Shared access with pcscd ====<br />
<br />
GnuPG's {{ic|scdaemon}} is the only popular {{ic|pcscd}} client that uses the {{ic|PCSC_SHARE_EXCLUSIVE}} flag when connecting to {{ic|pcscd}}. Other clients, like the OpenSC PKCS#11 module used by browsers and the programs listed in [[Electronic identification]], use {{ic|PCSC_SHARE_SHARED}}, which allows simultaneous access to a single smartcard. {{ic|pcscd}} will not give exclusive access to a smartcard while other clients are connected. This means that to use GnuPG's smartcard features you must first close all your open browser windows or perform other similarly inconvenient operations. There is an out-of-tree patch in the [https://github.com/GPGTools/MacGPG2/blob/dev/patches/gnupg/scdaemon_shared-access.patch GPGTools/MacGPG2] git repository that enables {{ic|scdaemon}} to use shared access, but the GnuPG developers are against allowing this: after one {{ic|pcscd}} client authenticates the smartcard, another malicious {{ic|pcscd}} client could perform authenticated operations with the card without you knowing. You can read the full mailing list thread [https://lists.gnupg.org/pipermail/gnupg-devel/2015-September/030247.html here].<br />
<br />
If you accept the security risk, you can use the patch from the [https://github.com/GPGTools/MacGPG2/blob/dev/patches/gnupg/scdaemon_shared-access.patch GPGTools/MacGPG2] git repository or use the {{AUR|gnupg-scdaemon-shared-access}} package. After patching your {{ic|scdaemon}}, enable shared access by adding a {{ic|shared-access}} line at the end of your {{ic|scdaemon.conf}} file.<br />
<br />
===== Multi applet smart cards =====<br />
When using [[YubiKey]]s or other multi-applet USB dongles with OpenSC PKCS#11, you may run into problems where OpenSC switches your YubiKey from the OpenPGP applet to the PIV applet, breaking {{ic|scdaemon}}. <br />
<br />
You can work around the problem by forcing OpenSC to also use the OpenPGP applet. Open the {{ic|/etc/opensc.conf}} file, search for Yubikey and change the {{ic|1=driver = "PIV-II";}} line to {{ic|1=driver = "openpgp";}}. If there is no such entry, use {{ic|pcsc_scan}} to find the card's Answer to Reset, e.g. {{ic|ATR: 12 34 56 78 90 AB CD ...}}, then create a new entry.<br />
<br />
{{hc|/etc/opensc.conf|2=<br />
...<br />
card_atr 12:34:56:78:90:ab:cd:... {<br />
name = "YubiKey Neo";<br />
driver = "openpgp"<br />
}<br />
...<br />
}}<br />
<br />
After that you can test with {{ic|pkcs11-tool -O --login}} that the OpenPGP applet is selected by default. Other PKCS#11 clients like browsers may need to be restarted for that change to be applied.<br />
<br />
== Tips and tricks ==<br />
<br />
=== Different algorithm ===<br />
<br />
You may want to use stronger algorithms:<br />
<br />
{{hc|~/.gnupg/gpg.conf|<br />
...<br />
<br />
personal-digest-preferences SHA512<br />
cert-digest-algo SHA512<br />
default-preference-list SHA512 SHA384 SHA256 SHA224 AES256 AES192 AES CAST5 ZLIB BZIP2 ZIP Uncompressed<br />
personal-cipher-preferences TWOFISH CAMELLIA256 AES 3DES<br />
}}<br />
<br />
In the latest version of GnuPG, the default algorithms used are SHA256 and AES, both of which are secure enough for most people. However, if you are using a version of GnuPG older than 2.1, or if you want an even higher level of security, then you should follow the above step.<br />
<br />
=== Encrypt a password ===<br />
<br />
It can be useful to encrypt a password so that it is not stored in clear text in a configuration file. A good example is your email password.<br />
<br />
First create a file with your password. You '''need''' to leave '''one''' empty line after the password, otherwise gpg will return an error message when evaluating the file.<br />
<br />
Then run:<br />
<br />
$ gpg -e -a -r ''user-id'' ''your_password_file''<br />
<br />
{{ic|-e}} is for encrypt, {{ic|-a}} for armor (ASCII output), {{ic|-r}} for recipient user ID.<br />
<br />
You will be left with a new {{ic|''your_password_file''.asc}} file.<br />
<br />
{{Tip|[[pass]] automates this process.}}<br />
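The encrypted file can then be decrypted on demand by the consuming program instead of storing the password in clear text. For example, a hypothetical muttrc line (the file path is an assumption; use wherever you stored the {{ic|.asc}} file):<br />

```
set imap_pass = `gpg --decrypt --quiet ~/.mail-password.asc`
```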
<br />
=== Change trust model ===<br />
<br />
By default GnuPG uses the [[Wikipedia::Web of Trust|Web of Trust]] as the trust model. You can change this to [[Wikipedia::Trust on first use|Trust on first use]] by adding {{ic|1=--trust-model=tofu}} when adding a key or adding this option to your GnuPG configuration file. More details are in [https://lists.gnupg.org/pipermail/gnupg-devel/2015-October/030341.html this email to the GnuPG list].<br />
<br />
=== Hide all recipient IDs ===<br />
<br />
By default the recipient's key ID is in the encrypted message. This can be removed at encryption time for a recipient by using {{ic|hidden-recipient ''user-id''}}. To remove it for all recipients add {{ic|throw-keyids}} to your configuration file. This helps to hide the receivers of the message and is a limited countermeasure against traffic analysis. (Using a little social engineering anyone who is able to decrypt the message can check whether one of the other recipients is the one he suspects.) On the receiving side, it may slow down the decryption process because all available secret keys must be tried (''e.g.'' with {{ic|--try-secret-key ''user-id''}}).<br />
<br />
=== Using caff for keysigning parties ===<br />
<br />
To allow users to validate keys on the keyservers and in their keyrings (i.e. make sure they are from whom they claim to be), PGP/GPG uses the [[Wikipedia::Web of Trust|Web of Trust]]. Keysigning parties allow users to get together at a physical location to validate keys. The [[Wikipedia:Zimmermann–Sassaman key-signing protocol|Zimmermann-Sassaman]] key-signing protocol is a way of making these very effective. [http://www.cryptnet.net/fdp/crypto/keysigning_party/en/keysigning_party.html Here] you will find a how-to article.<br />
<br />
For an easier process of signing keys and sending signatures to the owners after a keysigning party, you can use the tool ''caff''. It can be installed from the AUR with the package {{AUR|caff-git}}.<br />
<br />
To send the signatures to their owners you need a working [[Wikipedia:Message transfer agent|MTA]]. If you do not already have one, install [[msmtp]].<br />
<br />
=== Always show long IDs and fingerprints ===<br />
<br />
To always show long key IDs, add {{ic|keyid-format 0xlong}} to your configuration file. To always show full fingerprints of keys, add {{ic|with-fingerprint}} to your configuration file.<br />
<br />
=== Custom capabilities ===<br />
<br />
For further customization, it is also possible to set custom capabilities for your keys. The following capabilities are available:<br />
<br />
* Certify (only for master keys) - allows the key to create subkeys, mandatory for master keys.<br />
* Sign - allows the key to create cryptographic signatures that others can verify with the public key.<br />
* Encrypt - allows anyone to encrypt data with the public key, that only the private key can decrypt.<br />
* Authenticate - allows the key to authenticate with various non-GnuPG programs. The key can be used as e.g. an SSH key. <br />
<br />
It's possible to specify the capabilities of the master key by running:<br />
<br />
$ gpg --full-generate-key --expert<br />
<br />
And select an option that allows you to set your own capabilities.<br />
<br />
Comparably, to specify custom capabilities for subkeys, add the {{ic|--expert}} flag to {{ic|gpg --edit-key}}, see [[#Edit your key]] for more information.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Not enough random bytes available ===<br />
<br />
When generating a key, gpg can run into this error:<br />
<br />
Not enough random bytes available. Please do some other work to give the OS a chance to collect more entropy!<br />
<br />
To check the available entropy, check the kernel parameters:<br />
<br />
$ cat /proc/sys/kernel/random/entropy_avail<br />
<br />
A healthy Linux system with a lot of entropy available will return a value close to the full 4,096 bits of entropy. If the value returned is less than 200, the system is running low on entropy.<br />
<br />
To solve it, remember that you do not often need to create keys; it is best to just do what the message suggests (e.g. create disk activity, move the mouse, edit the wiki - all of these create entropy). If that does not help, check which service is using up the entropy and consider stopping it for the time being. If that is not an option, see [[Random number generation#Alternatives]].<br />
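The check and threshold described above can be wrapped in a small script; a minimal sketch:<br />

```shell
# Read the kernel's available-entropy counter and warn below the
# ~200-bit threshold mentioned above
entropy=$(cat /proc/sys/kernel/random/entropy_avail)
if [ "$entropy" -lt 200 ]; then
    echo "low entropy ($entropy), generate some disk or mouse activity"
else
    echo "entropy ok ($entropy)"
fi
```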
<br />
=== su ===<br />
<br />
When using {{Ic|pinentry}}, you must have the proper permissions for the terminal device (e.g. {{Ic|/dev/tty1}}) in use. However, with ''su'' (or ''sudo''), the ownership stays with the original user, not the new one. This means that pinentry will fail with a {{ic|Permission denied}} error, even as root. If this happens when attempting to use SSH, an error like {{ic|sign_and_send_pubkey: signing failed: agent refused operation}} will be returned. The fix is to change the permissions of the device at some point before the use of pinentry (i.e. before using gpg with an agent). If using gpg as root, simply change the ownership to root right before using gpg:<br />
<br />
# chown root /dev/ttyN # where N is the current tty<br />
<br />
and then change it back after using gpg the first time. The equivalent is true with {{Ic|/dev/pts/}}.<br />
<br />
{{Note|The owner of tty ''must'' match with the user for which pinentry is running. Being part of the group {{Ic|tty}} '''is not''' enough.}}<br />
<br />
{{Tip|If you run gpg with {{ic|script}} it will use a new tty with the correct ownership:<br />
<br />
# script -q -c "gpg --gen-key" /dev/null<br />
}}<br />
<br />
=== Agent complains end of file ===<br />
<br />
If the pinentry program is {{ic|/usr/bin/pinentry-gnome3}}, it needs a DBus session bus to run properly. See [[General troubleshooting#Session permissions]] for details.<br />
<br />
Alternatively, you can use a variety of different options described in [[#pinentry]].<br />
<br />
=== KGpg configuration permissions ===<br />
<br />
There have been issues with {{Pkg|kgpg}} being able to access the {{ic|~/.gnupg/}} options. One issue might be a result of a deprecated ''options'' file, see the [https://bugs.kde.org/show_bug.cgi?id=290221 bug] report.<br />
<br />
=== GNOME on Wayland overrides SSH agent socket ===<br />
<br />
For Wayland sessions, {{Ic|gnome-session}} sets {{Ic|SSH_AUTH_SOCK}} to the standard gnome-keyring socket, {{Ic|$XDG_RUNTIME_DIR/keyring/ssh}}. This overrides any value set in {{Ic|~/.pam_environment}} or systemd unit files.<br />
<br />
See [[GNOME/Keyring#Disable keyring daemon components]] on how to disable this behavior.<br />
<br />
=== mutt ===<br />
<br />
Mutt might not use ''gpg-agent'' correctly; you need to set the [[environment variable]] {{ic|GPG_AGENT_INFO}} (the content does not matter) when running mutt. Also be sure to enable password caching correctly, see [[#Cache passwords]].<br />
<br />
See [https://bbs.archlinux.org/viewtopic.php?pid=1490821#p1490821 this forum thread].<br />
<br />
=== "Lost" keys, upgrading to gnupg version 2.1 ===<br />
<br />
When {{ic|gpg --list-keys}} fails to show keys that used to be there, and applications complain about missing or invalid keys, some keys may not have been migrated to the new format.<br />
<br />
Please read [https://web.archive.org/web/20160502052025/http://jo-ke.name/wp/?p=111 GnuPG invalid packet workaround]. Basically, it says that there is a bug with keys in the old {{ic|pubring.gpg}} and {{ic|secring.gpg}} files, which have now been superseded by the new {{ic|pubring.kbx}} file and the {{ic|private-keys-v1.d/}} subdirectory and files. Your missing keys can be recovered with the following commands:<br />
<br />
$ cd<br />
$ cp -r .gnupg gnupgOLD<br />
$ gpg --export-ownertrust > otrust.txt<br />
$ gpg --import .gnupg/pubring.gpg<br />
$ gpg --import-ownertrust otrust.txt<br />
$ gpg --list-keys<br />
<br />
=== gpg hangs for all keyservers (when trying to receive keys) ===<br />
<br />
If gpg hangs with a certain keyserver when trying to receive keys, you might need to kill ''dirmngr'' (e.g. with {{ic|gpgconf --kill dirmngr}}) in order to get access to other keyservers which are actually working; otherwise it might keep hanging for all of them.<br />
<br />
=== Smartcard not detected ===<br />
<br />
Your user might not have permission to access the smartcard, which results in a {{ic|card error}} being thrown even though the card is correctly set up and inserted.<br />
<br />
One possible solution is to add a new group {{ic|scard}} including the users who need access to the smartcard.<br />
<br />
Then use [[udev rules]], similar to the following:<br />
<br />
{{hc|/etc/udev/rules.d/71-gnupg-ccid.rules|<nowiki><br />
ACTION=="add", SUBSYSTEM=="usb", ENV{ID_VENDOR_ID}=="1050", ENV{ID_MODEL_ID}=="0116|0111", MODE="660", GROUP="scard"<br />
</nowiki>}}<br />
<br />
One needs to adapt VENDOR and MODEL according to the {{ic|lsusb}} output; the above example is for a YubiKey NEO.<br />
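The two IDs can be pulled out of the {{ic|lsusb}} output with a little text processing; a sketch, where the sample line stands in for real {{ic|lsusb}} output:<br />

```shell
# Sample line standing in for real `lsusb` output
line='Bus 001 Device 007: ID 1050:0116 Yubico.com Yubikey NEO(-N) OTP+U2F+CCID'

# Extract the vendor and product IDs for use in the udev rule above
ids=$(printf '%s\n' "$line" | sed -n 's/.* ID \([0-9a-f]*\):\([0-9a-f]*\) .*/\1 \2/p')
echo "$ids"
```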
<br />
=== server 'gpg-agent' is older than us (x < y) ===<br />
<br />
This warning appears if {{ic|gnupg}} is upgraded and the old gpg-agent is still running. [[Restart]] the ''user'''s {{ic|gpg-agent.socket}} (i.e., use the {{ic|--user}} flag when restarting).<br />
<br />
=== IPC connect call failed ===<br />
<br />
{{Accuracy|The {{ic|gpg-agent*.socket}} systemd sockets provided by the {{Pkg|gnupg}} package create the sockets in {{ic|/run/user/$UID/gnupg/}} which is guaranteed to be an appropriate file system.}}<br />
<br />
Make sure {{ic|gpg-agent}} and {{ic|dirmngr}} are not running with {{ic|killall gpg-agent dirmngr}} and the {{ic|$GNUPGHOME/crls.d/}} folder has permission set to {{ic|700}}.<br />
<br />
If your keyring is stored on a VFAT file system (e.g. a USB drive), {{ic|gpg-agent}} will fail to create the required sockets (VFAT does not support sockets). You can create redirects to a location that supports sockets, e.g. {{ic|/dev/shm}}:<br />
<br />
# export GNUPGHOME=/custom/gpg/home<br />
# printf '%%Assuan%%\nsocket=/dev/shm/S.gpg-agent\n' > $GNUPGHOME/S.gpg-agent<br />
# printf '%%Assuan%%\nsocket=/dev/shm/S.gpg-agent.browser\n' > $GNUPGHOME/S.gpg-agent.browser<br />
# printf '%%Assuan%%\nsocket=/dev/shm/S.gpg-agent.extra\n' > $GNUPGHOME/S.gpg-agent.extra<br />
# printf '%%Assuan%%\nsocket=/dev/shm/S.gpg-agent.ssh\n' > $GNUPGHOME/S.gpg-agent.ssh<br />
<br />
Test that gpg-agent starts successfully with {{ic|gpg-agent --daemon}}.<br />
<br />
=== Mitigating Poisoned PGP Certificates ===<br />
<br />
In June 2019, an unknown attacker spammed several high-profile PGP certificates with tens of thousands (or hundreds of thousands) of signatures (CVE-2019-13050) and uploaded these signatures to the SKS keyservers.<br />
The existence of these poisoned certificates in a keyring causes gpg to hang with the following message:<br />
<br />
gpg: removing stale lockfile (created by 7055)<br />
<br />
Possible mitigation involves removing the poisoned certificate as per this [https://tech.michaelaltfield.net/2019/07/14/mitigating-poisoned-pgp-certificates/ blog post].<br />
<br />
=== Invalid IPC response and Inappropriate ioctl for device ===<br />
<br />
The default pinentry program is {{ic|/usr/bin/pinentry-gtk-2}}. If {{Pkg|gtk2}} is unavailable, pinentry falls back to {{ic|/usr/bin/pinentry-curses}} and causes signing to fail:<br />
<br />
gpg: signing failed: Inappropriate ioctl for device<br />
gpg: [stdin]: clear-sign failed: Inappropriate ioctl for device<br />
<br />
You need to set the {{ic|GPG_TTY}} environment variable for the pinentry programs {{ic|/usr/bin/pinentry-tty}} and {{ic|/usr/bin/pinentry-curses}}.<br />
<br />
$ export GPG_TTY=$(tty)<br />
<br />
== See also ==<br />
<br />
* [https://gnupg.org/ GNU Privacy Guard Homepage]<br />
* [https://futureboy.us/pgp.html Alan Eliasen's GPG Tutorial]<br />
* [https://tools.ietf.org/html/rfc4880 RFC4880 "OpenPGP Message Format"]<br />
* [https://help.riseup.net/en/security/message-security/openpgp/gpg-best-practices gpg.conf recommendations and best practices]<br />
* [https://fedoraproject.org/wiki/Creating_GPG_Keys Creating GPG Keys (Fedora)]<br />
* [https://wiki.debian.org/Subkeys OpenPGP subkeys in Debian]<br />
* [https://github.com/lfit/itpol/blob/master/protecting-code-integrity.md Protecting code integrity with PGP]<br />
* [https://sanctum.geek.nz/arabesque/series/gnu-linux-crypto/ A more comprehensive gpg Tutorial]<br />
* [https://www.reddit.com/r/GPGpractice/ /r/GPGpractice - a subreddit to practice using GnuPG.]</div>Recolichttps://wiki.archlinux.org/index.php?title=CUPS/Troubleshooting&diff=653176CUPS/Troubleshooting2021-02-22T17:39:17Z<p>Recolic: HP1020 printer fails after upgrading from 2.3.3-3 to 2.3.3+106+ga72b0140e-1, adding this failure into CUPS knowledge base.</p>
<hr />
<div>[[Category:Printers]]<br />
[[ja:CUPS/トラブルシューティング]]<br />
[[ru:CUPS (Русский)/Troubleshooting]]<br />
{{Related articles start}}<br />
{{Related|CUPS}}<br />
{{Related|CUPS/Printer-specific problems}}<br />
{{Related articles end}}<br />
<br />
This article covers all non-specific (i.e. not related to any one printer) troubleshooting of CUPS and printing drivers (but not problems related to printer sharing), including methods of determining the exact nature of the problem and of solving the identified problem.<br />
<br />
== Debug log ==<br />
<br />
The best way to get printing working is to set 'LogLevel' in {{ic|/etc/cups/cupsd.conf}} to:<br />
LogLevel debug<br />
<br />
and then view the output from {{ic|/var/log/cups/error_log}} like this:<br />
# tail -n 100 -f /var/log/cups/error_log<br />
<br />
The characters at the left of the output stand for:<br />
*D=Debug<br />
*E=Error<br />
*I=Information<br />
*And so on<br />
<br />
These files may also prove useful:<br />
*{{ic|/var/log/cups/page_log}} - Echoes a new entry each time a print is successful<br />
*{{ic|/var/log/cups/access_log}} - Lists all cupsd http1.1 server activity<br />
<br />
Print a document and watch {{ic|error_log}} to get a more detailed and correct image of the printing process.<br />
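The single-letter severity column can be used to filter the log while you watch it; a minimal sketch, with sample lines standing in for the real {{ic|error_log}}:<br />

```shell
# Sample lines standing in for /var/log/cups/error_log
log='D [17/Mar/2024:10:00:00 +0000] cupsdAcceptClient...
E [17/Mar/2024:10:00:01 +0000] Filter failed
I [17/Mar/2024:10:00:02 +0000] Job 42 queued'

# Keep only Error (E) lines
printf '%s\n' "$log" | grep '^E '
```

On a live system, {{ic|tail -f /var/log/cups/error_log {{!}} grep '^E '}} achieves the same filtering.<br />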
<br />
== Problems resulting from upgrades ==<br />
<br />
''Issues that appeared after CUPS and related program packages underwent a version increment''<br />
<br />
=== CUPS stops working ===<br />
<br />
The chances are that a new configuration file is needed for the new version to work properly. Messages such as "404 - page not found" may result from trying to manage CUPS via localhost:631, for example.<br />
<br />
To use the new configuration, copy {{ic|/etc/cups/cupsd.conf.default}} to {{ic|/etc/cups/cupsd.conf}} (backup the old configuration if needed) and restart CUPS to employ the new settings.<br />
<br />
=== All jobs are "stopped" ===<br />
<br />
{{Accuracy|This seems a rather brute-force way of fixing this; maybe the printer is simply disabled?}}<br />
<br />
If all jobs sent to the printer become "stopped", delete the printer and add it again.<br />
Using the [http://localhost:631 CUPS web interface], go to Printers > Delete Printer.<br />
<br />
To check the printer's settings go to ''Printers'', then ''Modify Printer''. Copy down the information displayed, click 'Modify Printer' to proceed to the next page(s), and so on.<br />
<br />
=== All jobs are "The printer is not responding" ===<br />
<br />
On networked printers, you should check that the hostname in the printer's URI resolves to the printer's IP address via DNS, e.g. if your printer's connection looks like this:<br />
<br />
lpd://BRN_020554/BINARY_P1<br />
<br />
then the hostname 'BRN_020554' needs to resolve to the printer's IP from the server running CUPS. If [[Avahi]] is being used, ensure that [[Avahi#Hostname_resolution|Avahi's hostname resolution]] is working.<br />
<br />
Alternatively, replace the hostname used in the URI with the printer's IP address.<br />
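You can verify the resolution step from the command line; a minimal sketch (localhost is used as a runnable placeholder; substitute the host name from your printer URI, e.g. BRN_020554):<br />

```shell
# Check whether a host name resolves, as CUPS needs it to
check_resolves() {
    if getent hosts "$1" > /dev/null; then
        echo "$1 resolves"
    else
        echo "$1 does not resolve"
    fi
}

check_resolves localhost
```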
<br />
=== Not recognizing USB printer ===<br />
<br />
You may notice that all jobs are "Waiting for printer to become available" after the upgrade, that your USB printer disappears from the "Add New Printer" page, and that {{ic|lpinfo -v}} shows "usb://unknown/printer". Some printer models fail this way after upgrading from 2.3.3-3 to 2.3.3+106+ga72b0140e-1. <br />
<br />
Downgrade cups/libcups to 2.3.3-3 and hold these packages in {{ic|/etc/pacman.conf}}. You can get these packages from the [[Arch Linux Archive]] (snapshot of 2020-10-10). <br />
<br />
=== The PPD version is not compatible with gutenprint ===<br />
<br />
Run:<br />
# /usr/bin/cups-genppdupdate<br />
<br />
And restart CUPS (as pointed out in gutenprint's post-install message).<br />
<br />
=== Issues Relating to Upgrade 2.3.3-3 -> 2.3.3+106+ga72b0140e-1 ===<br />
<br />
As a side-effect of switching Arch's CUPS upstream from Apple's senescent original to the actively-developed OpenPrinting fork in November 2020, the names of the CUPS systemd services were changed. The changes map as follows:<br />
<br />
*org.cups.cups-lpd.socket→cups-lpd.socket<br />
*org.cups.cups-lpd@.service→cups-lpd@.service<br />
*org.cups.cupsd.socket→cups.socket<br />
*org.cups.cupsd.service→cups.service<br />
*org.cups.cupsd.path→cups.path<br />
<br />
The CUPS install file for that upgrade recommends:<br />
<br />
Cups systemd socket and service files have been<br />
renamed by upstream decision. Please make sure<br />
to disable/reenable the services to your need.<br />
hint: "pacman -Ql cups | grep systemd" and<br />
"ls -lR /etc/systemd/ | grep cups"<br />
<br />
So, if one had org.cups.cupsd.service enabled, one would disable it with immediate effect ({{ic|# systemctl disable --now org.cups.cupsd.service}}) and enable its successor, also with immediate effect ({{ic|# systemctl enable --now cups.service}}).<br />
<br />
In addition to disabling the services under their own name and re-enabling them under the new, if you have made any non-standard modifications such as dropin files (e.g., {{ic|/etc/systemd/system/org.cups.cupsd.service.d}}) or adding the services as "Wants=" to target or other custom services, those will need to be moved over as well.<br />
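The disable/re-enable step can be sketched as a loop over the old→new mapping above (run as root; the template unit {{ic|org.cups.cups-lpd@.service}} is omitted since its instances are enabled by instance name):<br />

```shell
# For each renamed unit: if the old name is enabled, switch to the new one
for pair in \
    org.cups.cups-lpd.socket:cups-lpd.socket \
    org.cups.cupsd.socket:cups.socket \
    org.cups.cupsd.path:cups.path \
    org.cups.cupsd.service:cups.service
do
    old=${pair%%:*}
    new=${pair##*:}
    if systemctl is-enabled --quiet "$old" 2>/dev/null; then
        systemctl disable --now "$old"
        systemctl enable --now "$new"
    fi
done
```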
<br />
== Networking issues ==<br />
<br />
=== Unable to locate printer ===<br />
<br />
Even if CUPS can detect networked printers, you may still end up with an "Unable to locate printer" error when trying to print something. The solution to this problem is to enable Avahi's [[Avahi#Hostname_resolution|.local hostname resolution]]. See [[CUPS#Network]] for details.<br />
<br />
This problem may also arise when you have a firewall. You may need to disable your firewall or set the right rules. Using system-config-printer to detect network printers will do that automatically.<br />
<br />
Similarly, being connected to a VPN may also cause CUPS to be unable to locate the printer. Temporarily disabling any VPN connections for printing can help fix this.<br />
<br />
=== Old CUPS server ===<br />
<br />
As of CUPS version 1.6, the client defaults to IPP 2.0. If the server uses CUPS <= 1.5 / IPP <= 1.1, the client does not downgrade the protocol automatically and thus cannot communicate with the server. A workaround is to append the {{ic|1=version=1.1}} option documented at [https://www.cups.org/doc/network.html#TABLE2] to the URI.<br />
<br />
=== Unable to locate PPD file ===<br />
<br />
{{hc|/var/log/cups/error_log|Cannot connect to remote printer ipp://HP079676.local<br />
copy_model: empty PPD file}}<br />
<br />
Make sure [[Avahi]] is set up correctly. In particular, make sure {{Pkg|nss-mdns}} is installed and set up in {{ic|/etc/nsswitch.conf}}.<br />
<br />
=== Finding URIs for Windows print servers ===<br />
<br />
Sometimes Windows is a little less than forthcoming about exact device URIs (device locations). If having trouble specifying the correct device location in CUPS, run the following command to list all shares available to a certain windows username:<br />
<br />
$ smbtree -U ''windowsusername''<br />
<br />
This will list every share available to a certain Windows username on the local area network subnet, as long as Samba is set up and running properly. It should return something like this:<br />
<br />
{{bc| WORKGROUP<br />
\\REGULATOR-PC <br />
\\REGULATOR-PC\Z <br />
\\REGULATOR-PC\Public <br />
\\REGULATOR-PC\print$ Printer Drivers<br />
\\REGULATOR-PC\G <br />
\\REGULATOR-PC\EPSON Stylus CX8400 Series EPSON Stylus CX8400 Series<br />
}}<br />
<br />
What is needed here is first part of the last line, the resource matching the printer description. So to print to the EPSON Stylus printer, one would enter:<br />
<br />
smb://username:password@REGULATOR-PC/EPSON%20Stylus%20CX8400%20Series<br />
<br />
as the URI into CUPS.<br />
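Spaces in the share name must be percent-encoded when building the URI; a minimal sketch using the values from the smbtree output above:<br />

```shell
# Build the CUPS device URI from the server and share names
host='REGULATOR-PC'
share='EPSON Stylus CX8400 Series'
uri="smb://$host/$(printf '%s' "$share" | sed 's/ /%20/g')"
echo "$uri"
```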
<br />
== USB printers ==<br />
<br />
=== Conflict with SANE ===<br />
<br />
If you are also running [[SANE]], it's possible that it is conflicting with CUPS. To fix this create a [[Udev]] rule marking the device as matched by libsane:<br />
{{hc|/etc/udev/rules.d/99-printer.rules|output=<br />
ATTRS{idVendor}=="''vendor id''", ATTRS{idProduct}=="''product id''", MODE="0664", GROUP="lp", ENV{libsane_matched}="yes"}}<br />
<br />
=== Conflict with usblp ===<br />
<br />
USB printers can be accessed using two methods: The usblp kernel module and libusb. The former is the classic way. It is simple: data is sent to the printer by writing it to a device file as a simple serial data stream. Reading the same device file allows bi-di access, at least for things like reading out ink levels, status, or printer capability information (PJL). It works very well for simple printers, but for multi-function devices (printer/scanner) it is not suitable and manufacturers like HP supply their own backends. Source: [http://lists.linuxfoundation.org/pipermail/printing-architecture/2012/002412.html here].<br />
<br />
{{Warning|As of {{Pkg|cups}} version 1.6.0, it should no longer be necessary to blacklist the {{ic|usblp}} kernel module.<br />
<br />
If you find out this is the only way to fix a remaining issue please report this upstream to the CUPS bug tracker and maybe also get in contact with Till Kamppeter (Debian CUPS maintainer). See [https://github.com/apple/cups/issues/4128 upstream bug] for more info.}}<br />
<br />
If you have problems getting your USB printer to work, you can try [[blacklisting]] the {{ic|usblp}} [[kernel module]]:<br />
<br />
{{hc|/etc/modprobe.d/blacklistusblp.conf|<br />
blacklist usblp<br />
}}<br />
<br />
Custom kernel users may need to manually load the {{ic|usbcore}} [[kernel module]] before proceeding.<br />
<br />
Once the modules are installed, plug in the printer and check if the kernel detected it by running the following:<br />
# journalctl -e<br />
or<br />
# dmesg<br />
<br />
If you are using {{ic|usblp}}, the output should indicate that the printer has been detected like so:<br />
Feb 19 20:17:11 kernel: printer.c: usblp0: USB Bidirectional<br />
printer dev 2 if 0 alt 0 proto 2 vid 0x04E8 pid 0x300E<br />
Feb 19 20:17:11 kernel: usb.c: usblp driver claimed interface cfef3920<br />
Feb 19 20:17:11 kernel: printer.c: v0.13: USB Printer Device Class driver<br />
<br />
If you blacklisted {{ic|usblp}}, you will see something like:<br />
usb 3-2: new full speed USB device using uhci_hcd and address 3<br />
usb 3-2: configuration #1 chosen from 1 choice<br />
<br />
=== USB autosuspend ===<br />
<br />
The Linux kernel automatically suspends USB devices when there is driver support and the devices are not in use. This can save power, but some USB printers think that they are disconnected when the kernel suspends the USB port, preventing printing. This can be fixed by deactivating autosuspend for the specific device, see [[Power management#USB autosuspend]].<br />
<br />
=== Bad permissions ===<br />
<br />
Check the permissions of the printer USB device. Get the bus and device number from {{ic|lsusb}}:<br />
<br />
{{hc| lsusb |<br />
Bus <BUSID> Device <DEVID>: ID <VENDOR>:<PRINTERID> Hewlett-Packard DeskJet D1360}}<br />
<br />
Check the ownership by looking in devfs:<br />
<br />
# ls -l /dev/bus/usb/<BUSID>/<DEVID><br />
<br />
The cups daemon runs as user "cups" and belongs to group "lp", so either this user or group needs read & write access to the USB device. If you think the permissions look wrong, you can change the group and permission temporarily:<br />
<br />
# chgrp lp /dev/bus/usb/<BUSID>/<DEVID><br />
# chmod 664 /dev/bus/usb/<BUSID>/<DEVID><br />
<br />
Then check if cups can now see the USB device correctly.<br />
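Whether the "cups" user or "lp" group actually has read-write access can be checked with {{ic|stat}}; a sketch, where {{ic|/dev/null}} stands in for the real {{ic|/dev/bus/usb/<BUSID>/<DEVID>}} node:<br />

```shell
# Print owner, group and octal mode of a device node; for the printer
# you want the group to be "lp" and the mode to grant group write (66x)
dev=/dev/null
stat -c '%U %G %a' "$dev"
```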
<br />
To make a persistent permission change that will be triggered automatically each time the USB device is attached, add the following line:<br />
<br />
{{hc|/etc/udev/rules.d/10-local.rules|2=<br />
SUBSYSTEM=="usb", ATTRS{idVendor}=="<VENDOR>", ATTRS{idProduct}=="<PRINTERID>", GROUP:="lp", MODE:="0664"<br />
}}<br />
<br />
After editing, reload the udev rules with this command:<br />
<br />
# udevadm control --reload-rules<br />
<br />
Each system may vary, so consult [[udev#List the attributes of a device]] wiki page.<br />
<br />
== HP issues ==<br />
<br />
See also [[CUPS/Printer-specific problems#HP]].<br />
<br />
=== CUPS: "/usr/lib/cups/backend/hp failed" ===<br />
<br />
Try adding the printer as a Network Printer using the http:// protocol.<br />
<br />
{{Note|You might also need to set permissions correctly.}}<br />
<br />
=== CUPS: Job is shown as complete but the printer does nothing ===<br />
<br />
This happens on HP printers when you select the (old) hpijs driver (e.g. the Deskjet D1600 series). Use the hpcups driver instead.<br />
<br />
Some HP printers require their firmware to be downloaded from the computer every time the printer is switched on. If there is an issue with udev (or equivalent) and the firmware download rule is never fired, you may experience this issue.<br />
As a workaround, you can manually download the firmware to the printer. Ensure the printer is plugged in and switched on, then run<br />
hp-firmware -n<br />
<br />
=== CUPS: '"foomatic-rip" not available/stopped with status 3' ===<br />
<br />
If you receive any of the following error messages in {{ic|/var/log/cups/error_log}} while using an HP printer, with jobs appearing to be processed but never completing and their status set to 'stopped':<br />
 Filter "foomatic-rip" for printer ''printer_name'' not available: No such file or directory<br />
or:<br />
PID ''pid'' (/usr/lib/cups/filter/foomatic-rip) stopped with status 3!<br />
make sure {{pkg|hplip}} has been [[install]]ed.<br />
<br />
=== CUPS: "Filter failed" ===<br />
<br />
A "filter failed" error can be caused by any number of issues. The CUPS error log (by default {{ic|/var/log/cups/error_log}}) should record which filter failed and why.<br />
<br />
==== Missing ghostscript ====<br />
<br />
Install {{pkg|ghostscript}} ({{ic|/usr/lib/cups/filter/gstoraster}} needs it to run).<br />
<br />
==== Missing foomatic-db ====<br />
<br />
Install {{pkg|foomatic-db}} and {{pkg|foomatic-db-ppds}}. This fixes it in some cases.<br />
<br />
==== Avahi not enabled ====<br />
<br />
[[Start]], and [[enable]] the {{ic|avahi-daemon}} service.<br />
<br />
==== Out-of-date plugin ====<br />
<br />
This error can also indicate that the plugin is out of date (version is mismatched) and may occur after a system upgrade, possibly showing up as a {{ic|Plugin error}} message in the logs.<br />
If you have installed {{AUR|hplip-plugin}} you will need to update the package, otherwise re-run {{ic|hp-setup -i}} to install the latest version of the plugin.<br />
<br />
==== Outdated printer configuration ====<br />
<br />
As of {{AUR|hplip-plugin}} v3.17.11, hpijs is no longer available. If you have printers using hpijs, they will fail to print. You must modify them to use the new hpcups driver instead.<br />
<br />
You can check whether this is the case by looking at the CUPS error_log:<br />
<br />
{{hc| $ grep hpijs /var/log/cups/error_log |<br />
...<br />
D [09/Jan/2018:14:32:58 +0000] [Job 97] '''sh: hpijs: command not found'''<br />
...}}<br />
<br />
==== Client and host both run CUPS with hpcups ====<br />
<br />
{{Note|The following issue has been described on FreeBSD forum. [https://forums.freebsd.org/threads/filter-failed-cups-hp-psc-2350-series.60222/ Read more here].}}<br />
<br />
A bug seems to affect CUPS when a host shares a physically connected HP printer using the hpcups driver from {{Pkg|hplip}}, and a client adds the shared printer to its own CUPS server through [[Wikipedia:Internet_Printing_Protocol|IPP]], also using the hpcups driver. On every attempt to print a page from the client, the jobs page on the client shows ''"Sending data to printer"'' indefinitely, while the same page on the host shows ''"Filter failed"''. It appears that the job runs through the CUPS filter twice: first on the client side, then again on the host side, which makes it fail on the host side. The same bug should not be observed when printing from a Windows client, or when printing directly on the host. There are some workarounds (use only one method):<br />
* Use '''Generic IPP Everywhere Printer''' driver on the client. When selecting the driver in the CUPS Web Interface, you should find it in the ''Generic'' manufacturer.<br />
* Modify the '''PPD used on the client side''' so the job does not go through the filter on the client. Find the right PPD in {{ic|/usr/share/ppd/HP}} and copy it to your home directory. Edit the copy: replace the line {{ic|*cupsFilter: "application/vnd.cups-raster 0 hpcups"}} with {{ic|*cupsFilter: "*/* 0 -"}}. Now add the printer on the client CUPS, selecting your custom PPD located in your home directory.<br />
* Create a '''raw queue''' on the host: when you add the printer in the CUPS interface of the host, do not select the specific PPD of your printer, but choose ''Raw queue'' from the ''Raw'' manufacturer. You should then be able to add this shared printer on the client, this time using the specific PPD of the printer. With this method, the host cannot print a document directly because it does not run the filter. However, if the host is a small headless embedded device such as a Raspberry Pi, you might notice a significant decrease in response time with this method compared to the two previous ones, especially with large documents, because it saves a lot of CPU usage.<br />
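The client-side PPD edit from the second workaround can be scripted. The sketch below only demonstrates the substitution on a sample line; apply the same {{ic|sed}} expression (with {{ic|-i}}) to a copy of your real PPD:

```shell
# Sample *cupsFilter line as found in an hpcups PPD; the sed expression
# replaces it with the pass-through filter used for the client-side copy
printf '%s\n' '*cupsFilter: "application/vnd.cups-raster 0 hpcups"' \
  | sed 's|^\*cupsFilter:.*|*cupsFilter: "*/* 0 -"|'
```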
<br />
=== CUPS: prints only an empty and an error-message page on HP LaserJet ===<br />
<br />
{{Out of date|The bug was reported in 2012; is this still an issue?}}<br />
<br />
There is a bug that causes CUPS to fail when printing images on some HP LaserJet printers (e.g. the 3380). The bug has been reported and fixed by [https://bugs.launchpad.net/ubuntu/+source/cups-filters/+bug/998087 Ubuntu].<br />
The first page is empty, the second page contains the following error message:<br />
ERROR:<br />
invalidaccess<br />
OFFENDING COMMAND:<br />
filter<br />
STACK:<br />
/SubFileDecode<br />
endstream<br />
...<br />
<br />
In order to fix the issue, run the following command as root:<br />
# lpadmin -p ''printer'' -o pdftops-renderer-default=pdftops<br />
<br />
=== CUPS: File "/usr/lib/cups/filter/rastertospl" not available ===<br />
<br />
Once the printer is connected to the network by other means, the HP 107w Laser printer can be set up through the CUPS web interface, but this error prevents printing.<br />
<br />
It seems that support for this printer is not provided by hplip. However, drivers can be installed using HP's install scripts and PPD file found at the HP [https://support.hp.com/us-en/drivers/selfservice/hp-laser-100-printer-series/24494339/model/24494342 downloads] page.<br />
<br />
Extract the .zip and read this [https://gist.github.com/taniwallach/f1f6c81ce19b7d68f74d4b71d1db57a2 gist] for further details and instructions.<br />
<br />
=== HPLIP 3.13: Plugin is installed, but HP Device Manager complains it is not ===<br />
<br />
The issue might have to do with the file permission change that had been made to {{ic|/var/lib/hp/hplip.state}}. To correct the issue, a simple {{ic|chmod 644 /var/lib/hp/hplip.state}} and {{ic|chmod 755 /var/lib/hp}} should be sufficient. For further information, please read this [https://bugs.launchpad.net/hplip/+bug/1131596 link].<br />
<br />
=== hp-toolbox: "Unable to communicate with device" ===<br />
<br />
# hp-toolbox<br />
# error: Unable to communicate with device (code=12): hp:/usb/''printer id''<br />
<br />
==== Virtual CDROM printers ====<br />
<br />
This can also be caused by printers such as the P1102 that provide a virtual CD-ROM drive with MS Windows drivers. The lp device appears and then disappears. In that case, try the '''usb-modeswitch''' and '''usb-modeswitch-data''' packages, which let one switch off the "Smart Drive" (udev rules are included in said packages).<br />
<br />
==== Networked printers ====<br />
<br />
This can also occur with network-attached printers using dynamic hostnames if the [[Avahi|avahi-daemon]] is not running. Another possibility is that ''hp-setup'' failed to locate the printer because the IP address of the printer changed due to DHCP. If this is the case, consider adding a DHCP reservation for the printer in the DHCP server's configuration.<br />
<br />
=== hp-setup asks to specify the PPD file for the discovered printer ===<br />
<br />
Furthermore, when selecting a PPD file in hp-setup's graphical mode, the field does not update and no error message is shown.<br />
<br />
Or, if in interactive (console) mode, you may encounter something similar to the following even when providing a correct path to a valid PPD file:<br />
<br />
Please enter the full filesystem path to the PPD file to use (q=quit) :/usr/share/ppd/HP/hp-deskjet_2050_j510_series.ppd.gz<br />
Traceback (most recent call last):<br />
File "/usr/bin/hp-setup", line 536, in <module><br />
desc = nickname_pat.search(nickname).group(1)<br />
TypeError: cannot use a string pattern on a bytes-like object<br />
<br />
The solution is to install and start {{pkg|cups}} before running {{ic|hp-setup}}.<br />
<br />
=== hp-setup: "Qt/PyQt 4 initialization failed" ===<br />
<br />
[[Install]] {{AUR|python-pyqt4}}, which is an optdepend of {{Pkg|hplip}}. Alternatively, to run hp-setup with the command line interface, use the {{ic|-i}} flag.<br />
<br />
=== hp-setup: finds the printer automatically but reports "Unable to communicate with device" when printing test page immediately afterwards ===<br />
<br />
This at least happens with hplip 3.13.5-2 for an HP Officejet 6500A over a local network connection. To solve the problem, specify the IP address of the HP printer so that hp-setup can locate it.<br />
<br />
=== hp-setup: "KeyError: 'family-class'" ===<br />
<br />
If adding a printer fails silently in the UI or you receive a {{ic|KeyError: 'family-class'}} traceback from {{ic|hp-setup}}, the {{ic|/usr/share/hplip/data/models/models.dat}} may need to be manually updated.<br />
Check if {{ic|1=family-class=Undefined}} is defined in the section for your printer; if not, add it:<br />
{{hc|head=/usr/share/hplip/data/models/models.dat|output=<br />
[hp_laserjet_pro_mfp_m225dw]<br />
...<br />
family-class=Undefined<br />
}}<br />
<br />
== Other ==<br />
<br />
=== Printer "Paused" or "Stopped" with Status "Rendering completed" ===<br />
<br />
==== Low ink ====<br />
<br />
When low on ink, some printers will get stuck in "Rendering completed" status and, if it is a network printer, the printer may even become unreachable from CUPS' perspective despite being properly connected to the network. Replacing the low/depleted ink cartridge(s) in this setting will return the printer to "Ready" status and, if it is a network printer, will make the printer available to CUPS again.<br />
<br />
{{Note|If you use third-party ink cartridges, the ink levels reported by the printer may be inaccurate. If you use third-party ink and your printer used to work fine but is now getting stuck on "Rendering completed" status, replace the ink cartridges regardless of the reported ink levels before trying other fixes.}}<br />
<br />
=== Printing fails with unauthorised error ===<br />
<br />
If a remote printer requests authentication CUPS will automatically add an {{ic|AuthInfoRequired}} directive to the printer in {{ic|/etc/cups/printers.conf}}. However, some graphical applications (for instance, some versions of [[LibreOffice]] [https://bugs.documentfoundation.org/show_bug.cgi?id=53029]) have no way to prompt for credentials, so printing fails.<br />
To fix this include the required username and password in the URI.<br />
See [https://bugs.launchpad.net/ubuntu/+source/cups/+bug/283811], [https://bbs.archlinux.org/viewtopic.php?id=61826].<br />
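As a sketch (hostname, queue name, and credentials below are all placeholders), the credential-carrying URI looks like this in {{ic|/etc/cups/printers.conf}}; it can also be set with ''lpadmin -p printer -v URI'':

```
DeviceURI ipp://username:password@printserver.example.com/printers/queue_name
```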
<br />
=== Unknown supported format: application/postscript ===<br />
<br />
Comment the lines:<br />
application/octet-stream application/vnd.cups-raw 0 -<br />
from {{ic|/etc/cups/mime.convs}}, and:<br />
application/octet-stream<br />
in {{ic|/etc/cups/mime.types}}.<br />
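After the edit, the relevant lines should look like the following (fragments only; the rest of each file is unchanged):

```
# /etc/cups/mime.convs
#application/octet-stream application/vnd.cups-raw 0 -

# /etc/cups/mime.types
#application/octet-stream
```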
<br />
=== Print-Job client-error-document-format-not-supported ===<br />
<br />
Try installing the foomatic packages and use a foomatic driver.<br />
<br />
=== Unable to get list of printer drivers ===<br />
(Also applicable to error "-1 not supported!")<br />
<br />
Try to remove Foomatic drivers or refer to [[CUPS/Printer-specific problems#HPLIP]] for a workaround.<br />
<br />
=== lp: Error - Scheduler Not Responding ===<br />
<br />
If you get this error, ensure [[CUPS]] is running, the environmental variable {{ic|CUPS_SERVER}} is unset, and that {{ic|/etc/cups/client.conf}} is correct.<br />
<br />
=== "Using invalid Host" error message ===<br />
<br />
Try adding {{ic|ServerAlias *}} into {{ic|/etc/cups/cupsd.conf}}.<br />
<br />
=== Cannot print from LibreOffice ===<br />
<br />
If you can print a test page from the [[CUPS]] web interface, but not from [[LibreOffice]], try to [[install]] the {{Pkg|a2ps}} package.<br />
<br />
=== Printer output shifted ===<br />
<br />
This seems to be caused by the wrong page size being set in [[CUPS]].<br />
<br />
=== Printer becomes stuck after a problem ===<br />
<br />
When an issue arises during printing, the printer in CUPS may become unresponsive. {{ic|lpq}} reports that the printer {{ic|is not ready}}, and it can be reactivated using {{ic|cupsenable}}. In the CUPS web interface, the printer is shown as ''Paused'', and can be reactivated by ''resuming'' the printer.<br />
<br />
To automatically have CUPS reactivate the printer, change [https://www.cups.org/doc/man-cupsd.conf.html?TOPIC=Man+Pages#ErrorPolicy ErrorPolicy] from the default {{ic|stop-printer}} to {{ic|retry-current-job}}.<br />
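For example, assuming the stock configuration layout, the server-wide default goes in {{ic|cupsd.conf}} (restart the {{ic|cups}} service afterwards); {{ic|lpadmin}} also accepts a per-queue {{ic|printer-error-policy}} option:

```
# /etc/cups/cupsd.conf (server-wide default)
ErrorPolicy retry-current-job

# Per-queue alternative, where "printer" is a placeholder queue name:
#   lpadmin -p printer -o printer-error-policy=retry-current-job
```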
<br />
=== Samsung: URF ERROR - Incomplete Session by time out ===<br />
<br />
This error is usually encountered when printing files over the network through IPP to a Samsung printer, and is solved by using the {{aur|samsung-unified-driver}} package.<br />
<br />
{{Note|The associated error code 11-1112 refers to an internal wiring problem with the printer, so contacting Samsung's tech support is futile.}}<br />
<br />
=== Brother: Printer prints multiple copies ===<br />
<br />
Sometimes the printer will print multiple copies of a document (for instance a MFC-9330CDW printed 10 copies). The solution is to [[CUPS/Printer-specific problems#Updating the firmware|update the printer firmware]].<br />
<br />
=== Regular user cannot change properties of the printer or remove certain jobs ===<br />
<br />
If a regular user needs to be able to change the printer's properties or manage the printer queue, the user may need to be added to the {{ic|sys}} group.<br />
<br />
=== Cannot login into web interface ===<br />
<br />
Check if there is more than one {{ic|cupsd}} process running. If this is the case then stop {{ic|cups}} service, kill all processes named {{ic|cupsd}} and start {{ic|cups}} service again.</div>Recolichttps://wiki.archlinux.org/index.php?title=NetworkManager&diff=500269NetworkManager2017-12-01T15:19:00Z<p>Recolic: It took me 2 hours to check what happened to my NetworkManager(It doesn't scan/list wifi). I've read all possible solutions from wiki/google/manual but didn't help. Before I posting a new question, I glanced at my NetworkManager.conf(I've never edited it</p>
<hr />
<div>[[Category:Network configuration]]<br />
[[cs:NetworkManager]]<br />
[[de:Networkmanager]]<br />
[[es:NetworkManager]]<br />
[[fr:NetworkManager]]<br />
[[it:NetworkManager]]<br />
[[ja:NetworkManager]]<br />
[[pt:NetworkManager]]<br />
[[ru:NetworkManager]]<br />
[[tr:NetworkManager]]<br />
[[zh-hans:NetworkManager]]<br />
{{Related articles start}}<br />
{{Related|Network configuration}}<br />
{{Related|Wireless network configuration}}<br />
{{Related|:Category:Network configuration}}<br />
{{Related articles end}}<br />
[http://projects.gnome.org/NetworkManager/ NetworkManager] is a program for providing detection and configuration for systems to automatically connect to networks. NetworkManager's functionality can be useful for both wireless and wired networks. For wireless networks, NetworkManager prefers known wireless networks and has the ability to switch to the most reliable network. NetworkManager-aware applications can switch between online and offline modes. NetworkManager also prefers wired connections over wireless ones, and has support for modem connections and certain types of VPN. NetworkManager was originally developed by Red Hat and is now hosted by the [[GNOME]] project.<br />
<br />
{{Warning|By default, Wi-Fi passwords are stored in clear text. See section [[#Encrypted Wi-Fi passwords]]}}<br />
<br />
== Installation ==<br />
<br />
NetworkManager can be [[install]]ed with the package {{Pkg|networkmanager}}. The package does not include the tray applet ''nm-applet'', which is part of {{Pkg|network-manager-applet}}. NetworkManager includes basic DHCP support; for full-featured DHCP, or if you require IPv6 support, install {{Pkg|dhclient}}, which NetworkManager integrates. <br />
<br />
{{Note|You must ensure that no other service that wants to configure the network is running; in fact, multiple networking services will conflict. You can find a list of the currently running services with {{ic|1=systemctl --type=service}} and then [[stop]] them. See [[#Configuration]] to enable the NetworkManager service.}}<br />
<br />
=== VPN support ===<br />
<br />
NetworkManager VPN support is based on a plug-in system. If you need VPN support via NetworkManager, you have to install one of the following packages:<br />
<br />
* {{App|NetworkManager-openconnect|Connect to Cisco AnyConnect, Juniper VPNs.|https://git.gnome.org/browse/network-manager-openconnect|{{Pkg|networkmanager-openconnect}}}}<br />
* {{App|NetworkManager-openvpn|Connect to OpenVPN VPNs.|https://git.gnome.org/browse/network-manager-openvpn|{{Pkg|networkmanager-openvpn}}}}<br />
* {{App|NetworkManager-pptp|Connect to PPTP VPNs, Microsoft compatible.|https://git.gnome.org/browse/network-manager-pptp|{{Pkg|networkmanager-pptp}}}}<br />
* {{App|NetworkManager-vpnc|Connect to IPsec VPNs, Cisco compatible.|https://git.gnome.org/browse/network-manager-vpnc|{{Pkg|networkmanager-vpnc}}}}<br />
* {{App|NetworkManager-strongswan|Connect to IKEv2 IPsec VPNs with support for EAP, PSK and certificate authentication.|https://wiki.strongswan.org/projects/strongswan/wiki/NetworkManager|{{Pkg|networkmanager-strongswan}}}}<br />
* {{App|NetworkManager-fortisslvpn|Connect to Fortinet SSLVPN VPNs.|https://git.gnome.org/browse/network-manager-fortisslvpn|{{AUR|networkmanager-fortisslvpn-git}}}}<br />
* {{App|NetworkManager-iodine|Tunnel IP traffic via DNS using Iodine.|https://honk.sigxcpu.org/piki/projects/network-manager-iodine/|{{AUR|networkmanager-iodine-git}}}}<br />
* {{App|NetworkManager-libreswan|Connect to IPsec IKEv1 VPNs, Cisco compatible.|https://git.gnome.org/browse/network-manager-libreswan|{{AUR|networkmanager-libreswan}}}}<br />
* {{App|NetworkManager-l2tp|L2TP compatible VPN plugin .|https://github.com/nm-l2tp/network-manager-l2tp|{{AUR|networkmanager-l2tp}}}}<br />
* {{App|NetworkManager-ssh|Connect using OpenSSH's Tunnel capability.|https://github.com/danfruehauf/NetworkManager-ssh|{{AUR|networkmanager-ssh-git}}}}<br />
* {{App|NetworkManager-sstp|SSTP compatible VPN plugin.|http://sstp-client.sourceforge.net/#Network_Manager_Plugin|{{AUR|networkmanager-sstp}}}}<br />
<br />
{{Warning|1=VPN support is [https://bugzilla.gnome.org/buglist.cgi?quicksearch=networkmanager%20vpn unstable], check the daemon processes options set via the GUI correctly and double-check with each package release.[https://bugzilla.gnome.org/show_bug.cgi?id=755350]}}<br />
<br />
=== PPPoE / DSL support ===<br />
<br />
[[Install]] {{pkg|rp-pppoe}} for PPPoE / DSL connection support.<br />
<br />
== Front-ends ==<br />
<br />
To configure and have easy access to NetworkManager, most users will want to install an applet. This GUI front-end usually resides in the system tray (or notification area) and allows network selection and configuration of NetworkManager. Various desktop environments have their own applet. Otherwise you can use [[#nm-applet]].<br />
<br />
=== GNOME ===<br />
<br />
[[GNOME]] has a built-in tool, accessible from the Network settings.<br />
<br />
=== KDE Plasma ===<br />
<br />
[[Install]] the {{Pkg|plasma-nm}} package.<br />
<br />
=== nm-applet ===<br />
<br />
{{Pkg|network-manager-applet}} is a GTK+ 3 front-end which works under Xorg environments with a systray.<br />
<br />
To store connection secrets install and configure [[GNOME/Keyring]].<br />
<br />
Be aware that after enabling the tick-box option {{ic|Make available to other users}} for a connection, NetworkManager stores the password in plain-text, though the respective file is accessible only to root (or other users via {{ic|nm-applet}}). See [[#Encrypted Wi-Fi passwords]].<br />
<br />
In order to run {{ic|nm-applet}} without a systray, you can use {{Pkg|trayer}} or {{Pkg|stalonetray}}. For example, you can add a script like this one in your path:<br />
<br />
{{hc|nmgui|<nowiki><br />
#!/bin/sh<br />
nm-applet 2>&1 > /dev/null &<br />
stalonetray 2>&1 > /dev/null<br />
killall nm-applet<br />
</nowiki>}}<br />
<br />
When you close the ''stalonetray'' window, it closes {{ic|nm-applet}} too, so no extra memory is used once you are done with network settings.<br />
<br />
The applet can show notifications for events such as connecting to or disconnecting from a WiFi network. For these notifications to display, ensure that you have a notification server installed - see [[Desktop notifications]]. If you use the applet without a notification server, you might see some messages in stdout/stderr, and the app might hang. See [https://bugzilla.gnome.org/show_bug.cgi?id=788313].<br />
<br />
In order to run {{ic|nm-applet}} with such notifications disabled, start the applet with the following command:<br />
$ nm-applet --no-agent<br />
<br />
{{Tip|{{ic|nm-applet}} might be started automatically with an [[Desktop_entries#Autostart|autostart desktop file]]; to add the {{ic|--no-agent}} option, modify the Exec line there, i.e.<br />
<nowiki>Exec=nm-applet --no-agent</nowiki><br />
}}<br />
<br />
==== Appindicator ====<br />
<br />
Appindicator support is available in ''nm-applet'' however it is not compiled into the official package, see {{Bug|51740}}. To use nm-applet in an Appindicator environment, replace {{Pkg|network-manager-applet}} with {{AUR|network-manager-applet-indicator}} and then start the applet with the following command:<br />
$ nm-applet --indicator<br />
<br />
=== Command line ===<br />
<br />
The following applications can be useful for configuring and managing networks without X.<br />
<br />
==== nmcli ====<br />
<br />
A command line frontend, ''nmcli'', is included with {{Pkg|networkmanager}}.<br />
<br />
For usage information, see {{man|1|nmcli}}. Examples:<br />
<br />
* To connect to a wifi network: {{bc|nmcli dev wifi connect <SSID> password <password>}}<br />
* To connect to a hidden network: {{bc|nmcli dev wifi connect <SSID> password <password> hidden yes}}<br />
* To connect to a wifi on the {{ic|wlan1}} wifi interface: {{bc|nmcli dev wifi connect <SSID> password <password> iface wlan1 [profile name]}}<br />
* To disconnect an interface: {{bc|nmcli dev disconnect iface eth0}}<br />
* To reconnect an interface marked as disconnected: {{bc|nmcli con up uuid <uuid>}}<br />
* To get a list of UUIDs: {{bc|nmcli con show}}<br />
* To see a list of network devices and their state: {{bc|nmcli dev}}<br />
* To turn off wifi: {{bc|nmcli r wifi off}}<br />
<br />
==== nmtui ====<br />
<br />
A curses based graphical frontend, ''nmtui'', is included with {{Pkg|networkmanager}}.<br />
<br />
For usage information, see {{man|1|nmtui}}.<br />
<br />
==== nmcli-dmenu ====<br />
<br />
Alternatively, there is {{AUR|networkmanager-dmenu-git}}, a small script to manage NetworkManager connections with ''dmenu'' instead of {{ic|nm-applet}}. It provides all essential features, such as connecting to existing NetworkManager wifi or wired connections, connecting to new wifi connections, requesting a passphrase if required, connecting to existing VPN connections, enabling/disabling networking, and launching the ''nm-connection-editor'' GUI.<br />
<br />
== Configuration ==<br />
<br />
NetworkManager requires some additional steps to run properly. Make sure you have configured {{ic|/etc/hosts}} as described in the [[Network configuration#Set the hostname]] section.<br />
<br />
=== Enable NetworkManager ===<br />
<br />
NetworkManager is [[systemd#Using units|controlled]] with the {{ic|NetworkManager.service}} [[systemd]] unit. Once the NetworkManager daemon is started, it will automatically connect to any available "system connections" that have already been configured. Any "user connections" or unconfigured connections will need ''nmcli'' or an applet to configure and connect.<br />
<br />
NetworkManager has a global configuration file at {{ic|/etc/NetworkManager/NetworkManager.conf}}. Usually no changes to the global defaults are needed. Check that the default {{ic|NetworkManager.conf}} does not set an interface you need (for example, {{ic|wlp4s0}}) as unmanaged.<br />
<br />
=== Enable NetworkManager Wait Online ===<br />
<br />
If you have services which fail if they are started before the network is up, you may use {{ic|NetworkManager-wait-online.service}} in addition to {{ic|NetworkManager.service}}. This is, however, rarely necessary because most networked daemons start up okay, even if the network has not been configured yet.<br />
<br />
In some cases, the service will still fail to start successfully on boot due to the timeout setting in {{ic|/usr/lib/systemd/system/NetworkManager-wait-online.service}} being too short. Change the default timeout from 30 to a higher value.<br />
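One way to raise the timeout without editing the packaged unit is a drop-in override. This is a sketch: it assumes the stock unit runs {{ic|nm-online}} with a 30-second {{ic|--timeout}}, and the value 120 below is just an example:

```
# /etc/systemd/system/NetworkManager-wait-online.service.d/timeout.conf
# (drop-in override sketch; run `systemctl daemon-reload` afterwards)
[Service]
ExecStart=
ExecStart=/usr/bin/nm-online -s -q --timeout=120
```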
<br />
=== Set up PolicyKit permissions ===<br />
<br />
See [[General troubleshooting#Session permissions]] for setting up a working session.<br />
<br />
With a working session, you have several options for granting the necessary privileges to NetworkManager:<br />
<br />
* ''Option 1.'' Run a [[Polkit]] authentication agent when you log in, such as {{ic|/usr/lib/polkit-gnome/polkit-gnome-authentication-agent-1}} (part of {{Pkg|polkit-gnome}}). You will be prompted for your password whenever you add or remove a network connection.<br />
* ''Option 2.'' [[Users and groups#Group management|Add]] yourself to the {{ic|wheel}} group. You will not have to enter your password, but your user account may be granted other permissions as well, such as the ability to use [[sudo]] without entering the root password.<br />
* ''Option 3.'' [[Users and groups#Group management|Add]] yourself to the {{ic|network}} group and create the following file:<br />
<br />
{{hc|/etc/polkit-1/rules.d/50-org.freedesktop.NetworkManager.rules|<nowiki><br />
polkit.addRule(function(action, subject) {<br />
if (action.id.indexOf("org.freedesktop.NetworkManager.") == 0 && subject.isInGroup("network")) {<br />
return polkit.Result.YES;<br />
}<br />
});<br />
</nowiki>}}<br />
<br />
: All users in the {{ic|network}} group will be able to add and remove networks without a password. This will not work under [[systemd]] if you do not have an active session with ''systemd-logind''.<br />
<br />
=== Network services with NetworkManager dispatcher ===<br />
<br />
There are quite a few network services that you will not want running until NetworkManager brings up an interface. Good examples are [[NTPd]] and network filesystem mounts of various types (e.g. '''netfs'''). NetworkManager has the ability to start these services when you connect to a network and stop them when you disconnect. To activate the feature you need to [[start]] the {{ic|NetworkManager-dispatcher.service}}.<br />
<br />
Once the feature is active, scripts can be added to the {{ic|/etc/NetworkManager/dispatcher.d}} directory. These scripts must be '''owned by root''', otherwise the dispatcher will not execute them. For added security, set group ownership to root as well:<br />
<br />
# chown root:root ''scriptname''<br />
<br />
Also, the script must have '''write permission for owner only''', otherwise the dispatcher will not execute them:<br />
<br />
# chmod 755 ''scriptname''<br />
<br />
The scripts will be run in alphabetical order at connection time, and in reverse alphabetical order at disconnect time. They receive two arguments: the name of the interface (e.g. {{ic|eth0}}) and the status (''up'' or ''down'' for interfaces, ''vpn-up'' or ''vpn-down'' for VPN connections). To control the order in which they run, it is common to prefix the script names with numbers (e.g. {{ic|10_portmap}} or {{ic|30_netfs}}), which ensures that the ''portmapper'' is up before NFS mounts are attempted.<br />
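The argument handling described above can be sketched as a minimal dispatcher script. The logic is wrapped in a function here only so it can be exercised directly; real scripts simply read {{ic|$1}} and {{ic|$2}}:

```shell
#!/bin/sh
# Minimal sketch of a dispatcher script: react to the interface/status pair
handle_event() {
    interface=$1 status=$2
    case $status in
        up)     echo "starting services for $interface" ;;
        down)   echo "stopping services for $interface" ;;
        vpn-up) echo "VPN came up on $interface" ;;
    esac
}
# NetworkManager invokes the script with these two positional arguments
handle_event "$1" "$2"
```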
<br />
{{Warning|If you connect to foreign or public networks, be aware of what services you are starting and what servers you expect to be available for them to connect to. You could create a security hole by starting the wrong services while connected to a public network.}}<br />
<br />
==== Avoiding the dispatcher timeout ====<br />
<br />
If the above is working, then this section is not relevant. However, there is a general problem related to running dispatcher scripts which take longer to execute. Initially, an internal timeout of only three seconds was used. If the called script did not complete in time, it was killed. Later the timeout was extended to about 20 seconds (see the [https://bugzilla.redhat.com/show_bug.cgi?id=982734 Bugtracker] for more information). If the timeout still causes problems, a workaround may be to modify the dispatcher service file {{ic|/usr/lib/systemd/system/NetworkManager-dispatcher.service}} to remain active after exit: <br />
<br />
{{hc|/etc/systemd/system/NetworkManager-dispatcher.service|2=<br />
.include /usr/lib/systemd/system/NetworkManager-dispatcher.service<br />
[Service]<br />
RemainAfterExit=yes}}<br />
<br />
Now start and enable the modified {{ic|NetworkManager-dispatcher}} service.<br />
<br />
{{Warning|Adding the {{ic|RemainAfterExit}} line to it will prevent the dispatcher from closing. Unfortunately, the dispatcher '''has''' to close before it can run your scripts again. With it the dispatcher will not time out but it also will not close, which means that the scripts will only run once per boot. Therefore, do not add the line unless the timeout is definitely causing a problem.}}<br />
<br />
==== Start OpenNTPD ====<br />
<br />
Install the {{Pkg|networkmanager-dispatcher-openntpd}} package.<br />
<br />
==== Mount remote folder with sshfs ====<br />
<br />
As the script is run in a very restrictive environment, you have to export {{ic|SSH_AUTH_SOCK}} in order to connect to your SSH agent. There are different ways to accomplish this, see [https://bbs.archlinux.org/viewtopic.php?pid=1042030#p1042030 this message] for more information. The example below works with [[GNOME Keyring]], and will ask you for the password if not unlocked already. In case NetworkManager connects automatically on login, it is likely ''gnome-keyring'' has not yet started and the export will fail (hence the sleep). The {{ic|UUID}} to match can be found with the command {{ic|nmcli con status}} or {{ic|nmcli con list}}. <br />
<br />
{{bc|<nowiki><br />
#!/bin/sh<br />
USER='username'<br />
REMOTE='user@host:/remote/path'<br />
LOCAL='/local/path'<br />
<br />
interface=$1 status=$2<br />
if [ "$CONNECTION_UUID" = "</nowiki>''uuid''<nowiki>" ]; then<br />
case $status in<br />
up)<br />
export SSH_AUTH_SOCK=$(find /tmp -maxdepth 1 -type s -user "$USER" -name 'ssh')<br />
su "$USER" -c "sshfs $REMOTE $LOCAL"<br />
;;<br />
down)<br />
fusermount -u "$LOCAL"<br />
;;<br />
esac<br />
fi<br />
</nowiki>}}<br />
<br />
==== Use dispatcher to automatically toggle Wi-Fi depending on LAN cable being plugged in ====<br />
<br />
The idea is to only turn Wi-Fi on when the LAN cable is unplugged (for example when detaching from a laptop dock), and for Wi-Fi to be automatically disabled, once a LAN cable is plugged in again. <br />
<br />
Create the following dispatcher script ([http://superuser.com/questions/233448/disable-wlan-if-wired-cable-network-is-available Source]), replacing {{ic|1=LAN_interface}} with yours.<br />
{{hc|/etc/NetworkManager/dispatcher.d/wlan_auto_toggle.sh|<nowiki><br />
#!/bin/sh<br />
<br />
if [ "$1" = "LAN_interface" ]; then<br />
case "$2" in<br />
up)<br />
nmcli radio wifi off<br />
;;<br />
down)<br />
nmcli radio wifi on<br />
;;<br />
esac<br />
fi<br />
</nowiki>}}<br />
{{Note|You can get a list of interfaces using [[#nmcli|nmcli]]. The ethernet (LAN) interfaces start with {{ic|en}}, e.g. {{ic|1=enp0s5}}}}<br />
<br />
==== Use dispatcher to connect to a VPN after a network connection is established ====<br />
<br />
In this example we want to connect automatically to a previously defined VPN connection after connecting to a specific Wi-Fi network. First thing to do is to create the dispatcher script that defines what to do after we are connected to the network.<br />
<br />
===== Create the dispatcher script =====<br />
<br />
{{hc|/etc/NetworkManager/dispatcher.d/vpn-up|<nowiki><br />
#!/bin/sh<br />
VPN_NAME="name of VPN connection defined in NetworkManager"<br />
ESSID="Wi-Fi network ESSID (not connection name)"<br />
<br />
interface=$1 status=$2<br />
case $status in<br />
up|vpn-down)<br />
if iwgetid | grep -qs ":\"$ESSID\""; then<br />
nmcli con up id "$VPN_NAME"<br />
fi<br />
;;<br />
down)<br />
if iwgetid | grep -qs ":\"$ESSID\""; then<br />
if nmcli con show --active | grep "$VPN_NAME"; then<br />
nmcli con down id "$VPN_NAME"<br />
fi<br />
fi<br />
;;<br />
esac<br />
</nowiki>}}<br />
<br />
If you would like to attempt to automatically connect to VPN for all Wi-Fi networks, you can use the following definition of the ESSID: {{ic|1=ESSID=$(iwgetid -r)}}. Remember to set the script's permissions [[#Network services with NetworkManager dispatcher|accordingly]]. <br />
<br />
===== Give the script access to VPN password =====<br />
<br />
Trying to connect with the above script may still fail with {{ic|NetworkManager-dispatcher.service}} complaining about 'no valid VPN secrets', because of [http://developer.gnome.org/NetworkManager/0.9/secrets-flags.html the way VPN secrets are stored]. Fortunately, there are different options to give the above script access to your VPN password.<br />
<br />
1: One of them requires editing the VPN connection configuration file to make NetworkManager store the secrets by itself rather than inside a keyring [https://bugzilla.redhat.com/show_bug.cgi?id=710552 that will be inaccessible for root]: open up {{ic|/etc/NetworkManager/system-connections/''name of your VPN connection''}} and change the {{ic|password-flags}} and {{ic|secret-flags}} from {{ic|1}} to {{ic|0}}.<br />
<br />
If that alone does not work, you may have to create a {{ic|passwd-file}} in a safe location with the same permissions and ownership as the dispatcher script, containing the following:<br />
{{hc|/path/to/passwd-file|<nowiki><br />
vpn.secrets.password:YOUR_PASSWORD<br />
</nowiki>}}<br />
<br />
The script must be changed accordingly, so that it gets the password from the file:<br />
<br />
{{hc|/etc/NetworkManager/dispatcher.d/vpn-up|<nowiki><br />
#!/bin/sh<br />
VPN_NAME="name of VPN connection defined in NetworkManager"<br />
ESSID="Wi-Fi network ESSID (not connection name)"<br />
<br />
interface=$1 status=$2<br />
case $status in<br />
up|vpn-down)<br />
if iwgetid | grep -qs ":\"$ESSID\""; then<br />
nmcli con up id "$VPN_NAME" passwd-file /path/to/passwd-file<br />
fi<br />
;;<br />
down)<br />
if iwgetid | grep -qs ":\"$ESSID\""; then<br />
if nmcli con show --active | grep "$VPN_NAME"; then<br />
nmcli con down id "$VPN_NAME"<br />
fi<br />
fi<br />
;;<br />
esac<br />
</nowiki>}}<br />
<br />
2: Alternatively, change the {{ic|password-flags}} and put the password directly in the configuration file by adding the section {{ic|vpn-secrets}}:<br />
<br />
[vpn]<br />
....<br />
password-flags=0<br />
<br />
[vpn-secrets]<br />
password=''your_password''<br />
<br />
{{Note|It may now be necessary to re-open the NetworkManager connection editor and save the VPN passwords/secrets again.}}<br />
<br />
==== Use dispatcher to handle mounting of CIFS shares ====<br />
<br />
Some CIFS shares are only available on certain networks or locations (e.g. at home). You can use the dispatcher to only mount CIFS shares that are present at your current location.<br />
<br />
The following script will check if we connected to a specific network and mount shares accordingly:<br />
{{hc|/etc/NetworkManager/dispatcher.d/mount_cifs|<nowiki><br />
#!/bin/bash<br />
if [ "$2" = "up" ]; then<br />
if [ "$CONNECTION_UUID" = "uuid" ]; then<br />
mount /your/mount/point & <br />
# add more shares as needed<br />
fi<br />
fi<br />
</nowiki>}}<br />
{{Note|You can get a list of uuids using [[#nmcli|nmcli]].}}<br />
<br />
The following script will unmount all CIFS before a disconnect from a specific network:<br />
{{hc|/etc/NetworkManager/dispatcher.d/pre-down.d/mount_cifs|<nowiki><br />
#!/bin/bash<br />
umount -a -l -t cifs<br />
</nowiki>}}<br />
{{Note|Make sure this script is located in the pre-down.d subdirectory as shown above, otherwise it will unmount all shares on any connection state change.}}<br />
{{Note|Ever since NetworkManager 0.9.8, the 'pre-down' and 'down' actions are not executed on shutdown or restart, so the above script will only work if you manually disconnect from the network. See [https://bugzilla.gnome.org/show_bug.cgi?id&#61;701242 this bug report] for more info.}}<br />
<br />
As before, do not forget to set the script permissions [[#Network services with NetworkManager dispatcher|accordingly]].<br />
<br />
See also [[NFS#NetworkManager dispatcher]] for another example script that parses {{ic|/etc/fstab}} mounts during dispatcher actions.<br />
<br />
=== Proxy settings ===<br />
<br />
NetworkManager does not directly handle proxy settings, but if you are using GNOME or KDE, you can use [http://marin.jb.free.fr/proxydriver/ proxydriver], which handles proxy settings using NetworkManager's information. It is available as the {{AUR|proxydriver}} package.<br />
<br />
In order for ''proxydriver'' to be able to change the proxy settings, you would need to execute this command, as part of the GNOME startup process (System -> Preferences -> Startup Applications):<br />
<br />
xhost +si:localuser:''your_username''<br />
<br />
See: [[Proxy settings]].<br />
<br />
=== Disable NetworkManager ===<br />
<br />
It might not be obvious, but the service automatically starts through ''dbus''. To completely disable it you can [[mask]] the services {{ic|NetworkManager}} and {{ic|NetworkManager-dispatcher}}.<br />
<br />
=== Checking connectivity ===<br />
<br />
{{Accuracy|"the desktop manager" might handle captive portals, but this is mostly done through {{aur|capnet-assist}}}}<br />
<br />
NetworkManager can try to reach a page on the Internet when connecting to a network. {{Pkg|networkmanager}} is configured by default in {{ic|/usr/lib/NetworkManager/conf.d/20-connectivity.conf}} to check connectivity to archlinux.org. To use a different web server or to disable connectivity checking, edit {{ic|/etc/NetworkManager/NetworkManager.conf}}; see the "connectivity section" in {{man|5|NetworkManager.conf}}.<br />
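For example, a drop-in that disables the periodic check might look like this (the file name is an arbitrary choice; per {{man|5|NetworkManager.conf}}, a blank or missing {{ic|uri}} disables connectivity checking):<br />

```ini
# /etc/NetworkManager/conf.d/20-connectivity.conf  (file name is arbitrary)
[connectivity]
# A blank or missing uri disables connectivity checking;
# point it at another web server to change the endpoint.
uri=
```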
<br />
For those behind a captive portal, the desktop manager can automatically open a window asking for credentials.<br />
<br />
== Testing ==<br />
<br />
NetworkManager applets are designed to load upon login so no further configuration should be necessary for most users. If you have already disabled your previous network settings and disconnected from your network, you can now test if NetworkManager will work. The first step is to [[start]] {{ic|NetworkManager.service}}.<br />
<br />
Some applets will provide you with a {{ic|.desktop}} file so that the NetworkManager applet can be loaded through the application menu. If it does not, you will either have to discover the command to use, or log out and log in again to start the applet. Once the applet is started, it will likely begin polling network connections for auto-configuration with a DHCP server.<br />
<br />
To start the GNOME applet in non-xdg-compliant window managers like [[awesome]]:<br />
<br />
nm-applet --sm-disable &<br />
<br />
For static IP addresses, you will have to configure NetworkManager to understand them. The process usually involves right-clicking the applet and selecting something like 'Edit Connections'.<br />
<br />
== Troubleshooting ==<br />
<br />
=== No prompt for password of secured Wi-Fi networks ===<br />
<br />
When trying to connect to a secured Wi-Fi network, no prompt for a password is shown and no connection is established. This happens when no keyring package is installed. An easy solution is to install {{Pkg|gnome-keyring}}. If you want the passwords to be stored in encrypted form, follow [[GNOME Keyring]] to set up the ''gnome-keyring-daemon''.<br />
<br />
=== No traffic via PPTP tunnel ===<br />
<br />
The PPTP connection logs in successfully; you see a ppp0 interface with the correct VPN IP address, but you cannot even ping the remote IP address. This is due to the lack of MPPE (Microsoft Point-to-Point Encryption) support in the stock Arch pppd. It is recommended to first try with the stock Arch {{Pkg|ppp}}, as it may work as intended.<br />
<br />
To solve the problem it should be sufficient to install the {{AUR|ppp-mppe}}{{Broken package link|{{aur-mirror|ppp-mppe}}}} package.<br />
<br />
See also [[WPA2 Enterprise#MS-CHAPv2]].<br />
<br />
=== Network management disabled ===<br />
<br />
When NetworkManager shuts down but the pid (state) file is not removed, you will see a {{ic|Network management disabled}} message. If this happens, remove the file manually:<br />
<br />
# rm /var/lib/NetworkManager/NetworkManager.state<br />
<br />
=== Problems with internal DHCP client ===<br />
<br />
If you have problems getting an IP address using the internal DHCP client, consider using {{Pkg|dhclient}} as the DHCP client.<br />
<br />
After installation, update the NetworkManager config file:<br />
<br />
{{hc|1=/etc/NetworkManager/NetworkManager.conf|2=<br />
[main]<br />
# ...<br />
dhcp=dhclient<br />
# ...<br />
}}<br />
<br />
This workaround might solve problems in big wireless networks like eduroam.<br />
<br />
=== Customizing resolv.conf ===<br />
<br />
See the main page: [[resolv.conf]]. If you use {{Pkg|dhclient}}, you may try the {{AUR|networkmanager-dispatch-resolv}}{{Broken package link|{{aur-mirror|networkmanager-dispatch-resolv}}}} package.<br />
<br />
=== DHCP problems with dhclient ===<br />
<br />
If you have problems with getting an IP address via DHCP, try to add the following to your {{ic|/etc/dhclient.conf}}:<br />
<br />
interface "eth0" {<br />
send dhcp-client-identifier 01:aa:bb:cc:dd:ee:ff;<br />
}<br />
<br />
Where {{ic|aa:bb:cc:dd:ee:ff}} is the MAC address of this NIC. The MAC address can be found using the {{ic|ip link show ''interface''}} command from the {{Pkg|iproute2}} package.<br />
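The identifier is simply the hardware type octet {{ic|01}} (Ethernet) prepended to the MAC address. A small sketch that builds the configuration line (the MAC shown is a placeholder):<br />

```shell
#!/bin/sh
# Build the dhclient client-identifier: hardware type 01 (Ethernet)
# followed by the interface's MAC address.
# In practice the MAC could come from:  ip link show eth0 | awk '/ether/ {print $2}'
mac="aa:bb:cc:dd:ee:ff"   # placeholder
printf 'send dhcp-client-identifier 01:%s;\n' "$mac"
# → send dhcp-client-identifier 01:aa:bb:cc:dd:ee:ff;
```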
<br />
=== Hostname problems ===<br />
<br />
NetworkManager uses {{Pkg|dhclient}} by default and falls back to its internal DHCP functionality if the former is not installed. Making ''dhclient'' forward the hostname requires setting a non-default option, whereas ''dhcpcd'' forwards the hostname by default.<br />
<br />
First, check which DHCP client is used (''dhclient'' in this example):<br />
<br />
{{hc|<nowiki># journalctl -b | egrep "dhc"</nowiki>|<br />
...<br />
Nov 17 21:03:20 zenbook dhclient[2949]: Bound to *:546<br />
Nov 17 21:03:20 zenbook dhclient[2949]: Listening on Socket/wlan0<br />
Nov 17 21:03:20 zenbook dhclient[2949]: Sending on Socket/wlan0<br />
Nov 17 21:03:20 zenbook dhclient[2949]: XMT: Info-Request on wlan0, interval 1020ms.<br />
Nov 17 21:03:20 zenbook dhclient[2949]: RCV: Reply message on wlan0 from fe80::126f:3fff:fe0c:2dc.<br />
}}<br />
<br />
==== Configure dhclient to push the hostname to the DHCP server ====<br />
<br />
Copy the example configuration file:<br />
<br />
# cp /usr/share/dhclient/dhclient.conf.example /etc/dhclient.conf<br />
<br />
Take a look at the file - there is really only one line we want to keep, and ''dhclient'' will use its defaults (as it did when the file was absent) for the other options. This is the important line:<br />
<br />
{{hc|/etc/dhclient.conf|2=send host-name = pick-first-value(gethostname(), "ISC-dhclient");}}<br />
<br />
Force an IP address renewal by your favorite means, and you should now see your hostname on your DHCP server.<br />
<br />
To push the hostname over IPv6 as well:<br />
<br />
# cp /usr/share/dhclient/dhclient.conf.example /etc/dhclient6.conf<br />
<br />
{{hc|/etc/dhclient6.conf|2=send fqdn.fqdn = pick-first-value(gethostname(), "ISC-dhclient");}}<br />
<br />
==== Configure NetworkManager to use a specific DHCP client ====<br />
<br />
If you want to explicitly set the DHCP client used by NetworkManager, it can be set in the global configuration: <br />
<br />
{{hc|1=/etc/NetworkManager/NetworkManager.conf|2=dhcp=internal}}<br />
<br />
The alternative {{ic|1=dhcp=dhclient}} is used by default if this option is not set.<br />
<br />
Then [[restart]] {{ic|NetworkManager.service}}.<br />
<br />
{{Note|1=Support for {{Pkg|dhcpcd}} has been [https://projects.archlinux.org/svntogit/packages.git/commit/trunk?h=packages/networkmanager&id=a1df79cbcebaec0c043789eb31965e57d17b6cdb disabled] in {{Pkg|networkmanager}}-1.0.0-2 (2015-02-14).}}<br />
<br />
=== Missing default route ===<br />
<br />
On at least one KDE4 system, no default route was created when establishing wireless connections with NetworkManager. Changing the route settings of the wireless connection to remove the default selection "Use only for resources on this connection" solved the issue.<br />
<br />
=== 3G modem not detected ===<br />
<br />
See [[USB 3G Modem#Network Manager]].<br />
<br />
=== Switching off WLAN on laptops ===<br />
<br />
Sometimes NetworkManager will not work when you disable your Wi-Fi adapter with a switch on your laptop and try to enable it again afterwards. This is often a problem with ''rfkill''. To check if the driver notifies ''rfkill'' about the wireless adapter's status, use:<br />
<br />
$ watch -n1 rfkill list all<br />
<br />
If one identifier stays blocked after you switch on the adapter you could try to manually unblock it with (where X is the number of the identifier provided by the above output):<br />
<br />
# rfkill event unblock X<br />
<br />
=== Static IP address settings revert to DHCP ===<br />
<br />
Due to an unresolved bug, when changing default connections to a static IP address, {{ic|nm-applet}} may not properly store the configuration change, and will revert to automatic DHCP.<br />
<br />
To work around this issue you have to edit the default connection (e.g. "Auto eth0") in {{ic|nm-applet}}, change the connection name (e.g. "my eth0"), uncheck the "Available to all users" checkbox, change your static IP address settings as desired, and click '''Apply'''. This will save a new connection with the given name.<br />
<br />
Next, you will want to make the default connection not connect automatically. To do so, run {{ic|nm-connection-editor}} ('''not''' as root). In the connection editor, edit the default connection (e.g. "Auto eth0") and uncheck "Connect automatically". Click '''Apply''' and close the connection editor.<br />
<br />
=== Cannot edit connections as normal user ===<br />
<br />
See [[#Set up PolicyKit permissions]].<br />
<br />
=== Forget hidden wireless network ===<br />
<br />
Since hidden networks are not displayed in the selection list of the Wireless view, they cannot be forgotten (removed) with the GUI. You can delete one with the following command:<br />
<br />
# rm /etc/NetworkManager/system-connections/''SSID''<br />
<br />
This works for any other connection.<br />
<br />
=== VPN not working in GNOME ===<br />
<br />
When setting up OpenConnect or vpnc connections in NetworkManager while using GNOME, you will sometimes never see the dialog box pop up and the following error appears in {{ic|/var/log/errors.log}}:<br />
<br />
localhost NetworkManager[399]: <error> [1361719690.10506] [nm-vpn-connection.c:1405] get_secrets_cb(): Failed to request VPN secrets #3: (6) No agents were available for this request.<br />
<br />
This is caused by the GNOME NM Applet expecting dialog scripts to be at {{ic|/usr/lib/gnome-shell}}, when NetworkManager's packages put them in {{ic|/usr/lib/networkmanager}}.<br />
As a "temporary" fix (this bug has been around for a while now), make the following symlink(s):<br />
<br />
* For OpenConnect: {{ic|ln -s /usr/lib/networkmanager/nm-openconnect-auth-dialog /usr/lib/gnome-shell/}}<br />
* For VPNC (i.e. Cisco VPN): {{ic|ln -s /usr/lib/networkmanager/nm-vpnc-auth-dialog /usr/lib/gnome-shell/}}<br />
<br />
This may need to be done for any other NM VPN plugins as well, but these are the two most common.<br />
<br />
=== Unable to connect to visible European wireless networks ===<br />
<br />
WLAN chips are shipped with a default [[Wireless network configuration#Respecting the regulatory domain|regulatory domain]]. If your access point does not operate within these limitations, you will not be able to connect to the network. Fixing this is easy:<br />
<br />
# [[Install]] {{Pkg|crda}}<br />
# Uncomment the correct Country Code in {{ic|/etc/conf.d/wireless-regdom}}<br />
# Reboot the system, because the setting is only read on boot<br />
<br />
=== Automatic connect to VPN on boot is not working ===<br />
<br />
The problem occurs when the system (i.e. NetworkManager running as the root user) tries to establish a VPN connection, but the password is not accessible because it is stored in the Gnome keyring of a particular user. <br />
<br />
A solution is to keep the password to your VPN in plaintext, as described in step (2.) of [[#Use dispatcher to connect to a VPN after a network connection is established]]. <br />
<br />
You do not need to use the dispatcher described in step (1.) to auto-connect anymore, if you use the new "auto-connect VPN" option from the {{ic|nm-applet}} GUI.<br />
<br />
=== Systemd Bottleneck ===<br />
<br />
Over time the log files ({{ic|/var/log/journal}}) can become very large. This can have a big impact on boot performance when using NetworkManager, see: [[Systemd#Boot time increasing over time]].<br />
<br />
=== Regular network disconnects, latency and lost packets (WiFi) ===<br />
<br />
NetworkManager does a scan every 2 minutes.<br />
<br />
Some WiFi drivers have issues when scanning for base stations whilst connected/associated. Symptoms include VPN disconnects/reconnects, lost packets, and web pages failing to load but then refreshing fine.<br />
<br />
Running {{ic|journalctl -f}} will indicate that this is taking place; messages like the following will appear in the logs at regular intervals:<br />
<br />
NetworkManager[410]: <info> (wlp3s0): roamed from BSSID 00:14:48:11:20:CF (my-wifi-name) to (none) ((none))<br />
<br />
There is a patched version of NetworkManager which should prevent this type of scanning: {{AUR|networkmanager-noscan}}.<br />
<br />
Alternatively, if roaming is not important, the periodic scanning behavior can be disabled by locking the BSSID of the access point in the WiFi connection profile.<br />
<br />
=== Unable to turn on wi-fi with Lenovo laptop (IdeaPad, Legion, etc.) ===<br />
<br />
There is an issue with the {{ic|ideapad_laptop}} module on some Lenovo models due to the wi-fi driver incorrectly reporting a soft block. The card can still be manipulated with {{ic|netctl}}, but managers like NetworkManager break. You can verify that this is the problem by checking the output of {{ic|rfkill list}} after toggling your hardware switch and seeing that the soft block persists.<br />
<br />
{{Accuracy|Try to use {{ic|rfkill.default_state}} and {{ic|rfkill.master_switch_mode}} (see [https://github.com/torvalds/linux/blob/master/Documentation/admin-guide/kernel-parameters.txt kernel-parameters.txt]) to fix the rfkill problem.}}<br />
<br />
[[modprobe|Unloading]] the {{ic|ideapad_laptop}} module should fix this ('''warning''': this may also disable the laptop keyboard and touchpad).<br />
<br />
== Tips and tricks ==<br />
<br />
=== Encrypted Wi-Fi passwords ===<br />
<br />
By default, NetworkManager stores passwords in clear text in the connection files at {{ic|/etc/NetworkManager/system-connections/}}. To print the stored passwords, use the following command:<br />
<br />
# grep -H '^psk=' /etc/NetworkManager/system-connections/*<br />
<br />
The passwords are accessible to the root user in the filesystem and to users with access to settings via the GUI (e.g. {{ic|nm-applet}}). <br />
<br />
It is preferable to save the passwords in encrypted form in a keyring instead of clear text. The downside of using a keyring is that the connections have to be set up for each user.<br />
<br />
====Using Gnome-Keyring====<br />
<br />
The keyring daemon has to be started and the keyring needs to be unlocked for the following to work.<br />
<br />
Furthermore, NetworkManager needs to be configured not to store the password for all users. Using GNOME's {{ic|nm-applet}}, run {{ic|nm-connection-editor}} from a terminal, select a network connection, click {{ic|Edit}}, select the {{ic|Wi-Fi Security}} tab, click the icon at the right of the password field and check {{ic|Store the password only for this user}}.<br />
<br />
====Using KDE Wallet====<br />
<br />
{{Out of date|{{Pkg|plasma-nm}} has a different interface.}}<br />
<br />
Using KDE's {{Pkg|kdeplasma-applets-plasma-nm}}{{Broken package link|{{aur-mirror|kdeplasma-applets-plasma-nm}}}}, click the applet, click on the top right {{ic|Settings}} icon, double click on a network connection, in the {{ic|General settings}} tab, untick {{ic|all users may connect to this network}}. If the option is ticked, the passwords will still be stored in clear text, even if a keyring daemon is running.<br />
<br />
If the option was selected previously and you un-tick it, you may have to use the {{ic|reset}} option first to make the password disappear from the file. Alternatively, delete the connection first and set it up again.<br />
<br />
=== Sharing internet connection over Wi-Fi ===<br />
<br />
You can share your internet connection (e.g. 3G or wired) with a few clicks using NetworkManager. You will need a supported Wi-Fi card (cards based on Atheros AR9xx, or at least AR5xx, are probably the best choice). Please note that a [[firewall]] may interfere with internet sharing.<br />
<br />
==== Ad-hoc ====<br />
<br />
* [[Install]] the {{Pkg|dnsmasq}} package to be able to actually share the connection.<br />
* A custom {{ic|dnsmasq.conf}} may interfere with NetworkManager.<br />
* Click on the applet and choose "Create new wireless network".<br />
* Follow the wizard (if using WEP, be sure to use a 5 or 13 character long password; other lengths will fail).<br />
* The settings will remain stored for the next time you need them.<br />
<br />
==== Real AP ====<br />
<br />
Support for infrastructure mode (which is needed by Android phones, as they intentionally do not support ad-hoc networks) was added to NetworkManager in late 2012.<br />
<br />
See [https://fedoraproject.org/wiki/Features/RealHotspot Fedora's wiki].<br />
<br />
=== Sharing internet connection over Ethernet ===<br />
<br />
Scenario: your device has an internet connection over Wi-Fi and you want to share that connection to other devices over ethernet.<br />
<br />
Requirements:<br />
* [[Install]] the {{Pkg|dnsmasq}} package to be able to actually share the connection.<br />
* Your internet connected device and the other devices are connected over a suitable ethernet cable (this usually means a cross over cable or a switch in between).<br />
* Internet sharing is not blocked by a [[firewall]].<br />
<br />
Steps:<br />
* Run {{ic|nm-connection-editor}} from terminal.<br />
* Add a new ethernet connection.<br />
* Give it a sensible name, for example "Shared Internet".<br />
* Go to "IPv4 Settings".<br />
* For "Method:" select "Shared to other computers".<br />
* Save<br />
<br />
Now you should have a new option "Shared Internet" under the Wired connections in NetworkManager.<br />
<br />
=== Checking if networking is up inside a cron job or script ===<br />
<br />
Some ''cron'' jobs require networking to be up to succeed. You may wish to avoid running these jobs when the network is down. To accomplish this, add an '''if''' test for networking that queries NetworkManager's ''nm-tool'' and checks the state of networking. The test shown here succeeds if any interface is up, and fails if they are all down. This is convenient for laptops that might be hardwired, might be on wireless, or might be off the network.<br />
<br />
{{bc|<nowiki><br />
if [ "$(nm-tool | grep State | cut -f2 -d' ')" = "connected" ]; then<br />
    #Whatever you want to do if the network is online<br />
else<br />
    #Whatever you want to do if the network is offline - note, this and the else above are optional<br />
fi<br />
</nowiki>}}<br />
<br />
This is useful for a {{ic|cron.hourly}} script that runs ''fpupdate'' for the F-Prot virus scanner signature update, as an example. Another way it might be useful, with a little modification, is to differentiate between networks using various parts of the output from ''nm-tool''; for example, since the active wireless network is denoted with an asterisk, you could grep for the network name and then grep for a literal asterisk.<br />
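Note that ''nm-tool'' was removed in NetworkManager 1.0; a sketch of the same test using the newer ''nmcli'' interface (assuming a NetworkManager recent enough to support the {{ic|networking connectivity}} subcommand, which prints one of {{ic|none}}, {{ic|portal}}, {{ic|limited}}, {{ic|full}} or {{ic|unknown}}):<br />

```shell
#!/bin/sh
# Succeed only when NetworkManager reports full connectivity.
net_is_up() {
    [ "$(nmcli networking connectivity 2>/dev/null)" = "full" ]
}

if net_is_up; then
    : # network-dependent job goes here
else
    : # skipped while offline
fi
```

The dispatcher scripts described above are an event-driven alternative to polling the state from ''cron''.<br />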
<br />
=== Connect to network with secret on boot ===<br />
<br />
By default, NetworkManager will not automatically connect to networks requiring a secret on boot. This is because it locks such connections to the user who created them, only connecting after that user has logged in. To change this, do the following:<br />
<br />
# Right click on the {{ic|nm-applet}} icon in your panel and select Edit Connections and open the Wireless tab<br />
# Select the connection you want to work with and click the Edit button<br />
# Check the boxes “Connect Automatically” and “Available to all users”<br />
Log out and log back in to complete.<br />
<br />
=== Automatically unlock keyring after login ===<br />
<br />
NetworkManager requires access to the login keyring to connect to networks requiring a secret. Under most circumstances, this keyring is unlocked automatically at login, but if it is not, and NetworkManager is not connecting on login, you can try the following.<br />
<br />
==== GNOME ====<br />
<br />
{{Note|The following method is dated and known not to work on at least one machine!}}<br />
* In {{ic|/etc/pam.d/gdm}} (or your corresponding daemon in {{ic|/etc/pam.d}}), add these lines at the end of the "auth" and "session" blocks if they do not exist already: <br />
auth optional pam_gnome_keyring.so<br />
session optional pam_gnome_keyring.so auto_start<br />
<br />
* In {{ic|/etc/pam.d/passwd}}, use this line for the 'password' block:<br />
password optional pam_gnome_keyring.so<br />
<br />
:Next time you log in, you should be asked if you want the password to be unlocked automatically on login.<br />
<br />
==== SLiM login manager ====<br />
<br />
See [[SLiM#Gnome Keyring]].<br />
<br />
==== Troubleshooting ====<br />
<br />
While you may type both values at connection time, {{Pkg|kdeplasma-applets-plasma-nm}}{{Broken package link|{{aur-mirror|kdeplasma-applets-plasma-nm}}}} 0.9.3.2-1 and above are capable of retrieving OpenConnect username and password directly from KWallet.<br />
<br />
Open "KDE Wallet Manager" and look up your OpenConnect VPN connection under "Network Management|Maps". Click "Show values" and <br />
enter your credentials in key "VpnSecrets" in this form (replace ''username'' and ''password'' accordingly):<br />
<br />
form:main:username%SEP%''username''%SEP%form:main:password%SEP%''password''<br />
<br />
Next time you connect, username and password should appear in the "VPN secrets" dialog box.<br />
<br />
=== Ignore specific devices ===<br />
<br />
Sometimes it may be desired that NetworkManager ignores specific devices and does not try to configure addresses and routes for them. You can quickly and easily ignore devices by MAC or interface-name by using the following in {{ic|/etc/NetworkManager/NetworkManager.conf}}:<br />
[keyfile]<br />
unmanaged-devices=mac:00:22:68:1c:59:b1;mac:00:1E:65:30:D1:C4;interface-name:eth0<br />
After you have put this in, [[Daemon|restart]] NetworkManager, and you should be able to configure interfaces without NetworkManager altering what you have set.<br />
<br />
=== Enable DNS Caching ===<br />
<br />
See [[dnsmasq#NetworkManager]] to enable the plugin that allows DNS caching using [[dnsmasq]].<br />
<br />
=== Configuring MAC Address Randomization ===<br />
<br />
MAC randomization can be used for increased privacy by not disclosing your real MAC address to the network.<br />
<br />
NetworkManager supports two types of MAC address randomization: randomization during scanning, and randomization for network connections. Both modes can be configured by modifying {{ic|/etc/NetworkManager/NetworkManager.conf}} or by creating a separate configuration file in {{ic|/etc/NetworkManager/conf.d}}, which is recommended since the aforementioned config file may be overwritten by NetworkManager.<br />
<br />
Randomization during Wi-Fi scanning is enabled by default, but it may be disabled by adding the following lines to {{ic|/etc/NetworkManager/NetworkManager.conf}} or a dedicated configuration file under {{ic|/etc/NetworkManager/conf.d}}. This results in a randomly generated MAC address being used when probing for wireless networks.<br />
<br />
[device]<br />
wifi.scan-rand-mac-address=no<br />
<br />
{{Tip|1=Disabling MAC address randomization may be needed for stable connection. See [https://bbs.archlinux.org/viewtopic.php?id=220101].}}<br />
<br />
MAC randomization for network connections can be set to different modes for both wireless and ethernet interfaces. See [https://blogs.gnome.org/thaller/2016/08/26/mac-address-spoofing-in-networkmanager-1-4-0/ the Gnome blog post] for more details on the different modes. <br />
<br />
In terms of MAC randomization the most important modes are stable and random. Stable generates a random MAC address when you connect to a new network and associates the two permanently. This means that you will use the same MAC address every time you connect to that network. In contrast, random will generate a new MAC address every time you connect to a network, new or previously known. You can configure the MAC randomization by adding the desired configuration under {{ic|/etc/NetworkManager/conf.d}}.<br />
<br />
[device-mac-randomization]<br />
# "yes" is already the default for scanning<br />
wifi.scan-rand-mac-address=yes<br />
<br />
[connection-mac-randomization]<br />
# Randomize MAC for every ethernet connection<br />
ethernet.cloned-mac-address=random<br />
# Generate a random MAC for each WiFi and associate the two permanently.<br />
wifi.cloned-mac-address=stable<br />
<br />
=== Enable IPv6 Privacy Extensions ===<br />
<br />
See [[IPv6#NetworkManager]].<br />
<br />
=== Working with wired connections ===<br />
<br />
By default, NetworkManager generates a connection profile for each wired ethernet connection it finds. At the point of generating the connection, it does not know whether there will be more ethernet adapters available; hence, it calls the first wired connection "Wired connection 1". You can avoid generating this connection by configuring {{ic|no-auto-default}} (see {{man|5|NetworkManager.conf}}), or by simply deleting it. NetworkManager will then remember not to generate a connection for this interface again.<br />
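For instance, a drop-in along these lines should stop auto-generated profiles for all interfaces (the file name is an arbitrary choice):<br />

```ini
# /etc/NetworkManager/conf.d/10-no-auto-default.conf  (file name is arbitrary)
[main]
# "*" matches every interface; a comma-separated list of MAC addresses
# or interface names can be used instead.
no-auto-default=*
```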
<br />
You can also edit the connection (persisting it to disk) or delete it, and NetworkManager will not re-generate it. Afterwards you can change the name to whatever you want, using a tool like ''nm-connection-editor''.<br />
<br />
== See also ==<br />
<br />
* [http://blogs.gnome.org/dcbw/2015/02/16/networkmanager-for-administrators-part-1/ NetworkManager for Administrators Part 1]<br />
* [[Wikipedia:NetworkManager]]</div>Recolichttps://wiki.archlinux.org/index.php?title=WPS_Office_(%E7%AE%80%E4%BD%93%E4%B8%AD%E6%96%87)&diff=479843WPS Office (简体中文)2017-06-15T14:36:17Z<p>Recolic: The referenced AUR package is no longer available; the required resources are provided by another AUR package.</p>
<hr />
<div>[[Category:Office (简体中文)]]<br />
[http://linux.wps.cn/ WPS Office for Linux] is Kingsoft's full-featured office suite for the Linux platform. It is highly compatible with Microsoft Office, caters to the specific habits of Linux users, and ships with the FounderType (方正) font collection.<br />
<br />
== Installation ==<br />
Since alpha18 WPS has provided an x86_64 build, so all users can simply install {{AUR|wps-office}} from the [[AUR (简体中文)|AUR]].<br />
<br />
Alternatively, 64-bit users can install it with pacman after adding the [https://www.archlinuxcn.org/archlinux-cn-repo-and-mirror/ archlinuxcn repository].<br />
<br />
{{Note|Mind the license terms of the bundled fonts; see clause 14 of the [http://community.wps.cn/wiki/WPS_Office_Linux%E7%89%88%E6%9C%80%E7%BB%88%E7%94%A8%E6%88%B7%E5%8D%8F%E8%AE%AE WPS Office for Linux End User License Agreement].}}<br />
<br />
The [[AUR (简体中文)|AUR]] also contains {{AUR|wpsforlinux}}{{Broken package link|{{aur-mirror|wpsforlinux}}}}, which lets you choose which fonts and templates to install, {{AUR|wps-office-split}}{{Broken package link|{{aur-mirror|wps-office-split}}}}, which ships without fonts and templates, and {{AUR|fcitx-wps}}{{Broken package link|{{aur-mirror|fcitx-wps}}}}, which provides the fcitx immodule, among others.<br />
<br />
== Tips and tricks ==<br />
<br />
=== Changing WPS file icons and file associations ===<br />
After installing WPS, the icons your icon theme uses for DOC, XLS, PPT and similar files are replaced by the WPS Writer, ET Spreadsheets and WPP Presentation icons bundled with WPS Office. If you do not want this, edit the relevant MIME configuration files:<br />
<br />
/usr/share/mime/packages/wps-office-{wpp,wps,et}.xml<br />
/usr/share/mime/packages/freedesktop.org.xml #(part of the shared-mime-info package)<br />
<br />
as well as the desktop files:<br />
<br />
/usr/share/applications/wps-office-{wpp,wps,et}.desktop<br />
<br />
The strategy: let WPS's own formats be defined by {{ic|wps-office-{wpp,wps,et}.xml}} and everything else by {{ic|freedesktop.org.xml}}, then adjust the {{ic|MimeType}} entries of the {{ic|desktop}} files accordingly.<br />
<br />
Add the following to the {{ic|package}} function of the PKGBUILD:<br />
<br />
{{bc|1=<br />
## MIME types supported by et, wpp and wps<br />
_etMT="MimeType=application\/wps-office.et;application\/wps-office.ett;application\/vnd.ms-excel;\<br />
application\/vnd.openxmlformats-officedocument.spreadsheetml.template;\<br />
application\/vnd.openxmlformats-officedocument.spreadsheetml.sheet;"<br />
_wppMT="MimeType=application\/wps-office.dps;application\/wps-office.dpt;application\/vnd.ms-powerpoint;\<br />
application\/vnd.openxmlformats-officedocument.presentationml.presentation;\<br />
application\/vnd.openxmlformats-officedocument.presentationml.slideshow;\<br />
application\/vnd.openxmlformats-officedocument.presentationml.template;"<br />
_wpsMT="MimeType=application\/wps-office.wps;application\/wps-office.wpt;\<br />
application\/msword;application\/rtf;application\/msword-template;\<br />
application\/vnd.openxmlformats-officedocument.wordprocessingml.template;\<br />
application\/vnd.openxmlformats-officedocument.wordprocessingml.document;"<br />
<br />
## MIME definitions: keep only the WPS-specific types<br />
sed -i '3,31d' "$pkgdir"/usr/share/mime/packages/wps-office-et.xml<br />
sed -i '3,36d' "$pkgdir"/usr/share/mime/packages/wps-office-wpp.xml<br />
sed -i '3,30d' "$pkgdir"/usr/share/mime/packages/wps-office-wps.xml<br />
<br />
## desktop files: replace the MimeType line<br />
#_et<br />
sed -i "s/^MimeType.*$/$_etMT/" "$pkgdir"/usr/share/applications/wps-office-et.desktop<br />
#_wpp<br />
sed -i "s/^MimeType.*$/$_wppMT/" "$pkgdir"/usr/share/applications/wps-office-wpp.desktop<br />
#_wps<br />
sed -i "s/^MimeType.*$/$_wpsMT/" "$pkgdir"/usr/share/applications/wps-office-wps.desktop<br />
}}<br />
<br />
=== Using the GTK+ UI ===<br />
<br />
WPS defaults to a Qt UI, but the bundled Qt is version 4.7.4, so themes such as qtcurve fail to load because of the version mismatch. You can switch to GTK+ instead by passing the {{Ic|-style gtk+}} argument.<br />
<br />
To make this permanent, edit {{Ic|/usr/share/applications/wps-office-{wps,wpp,et}.desktop}}:<br />
<br />
Exec=/usr/bin/{wps,wpp,et} '''-style gtk+''' %f<br />
<br />
== Troubleshooting ==<br />
<br />
=== What are the commands to launch WPS Office for Linux ===<br />
<br />
{{ic|wps}}, {{ic|et}} and {{ic|wpp}} launch WPS Writer, WPS Spreadsheets and WPP Presentation, respectively.<br />
<br />
=== Garbled file names in zip template archives ===<br />
Install {{AUR|unzip-iconv}} first, then pass the {{ic|-O gb18030}} option when extracting.<br />
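For example (the archive name below is a placeholder):<br />

```shell
# The unzip binary from unzip-iconv understands -O; plain unzip does not.
# -O selects the encoding used to decode the stored file names.
unzip -O gb18030 template.zip -d template/
```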
<br />
=== Formulas are not displayed correctly ===<br />
Most mathematical formulas need the following fonts to display correctly:<br />
symbol.ttf webdings.ttf wingding.ttf wingdng2.ttf wingdng3.ttf monotypesorts.ttf MTExtra.ttf<br />
{{AUR|ttf-wps-fonts}} from the [[AUR (简体中文)|AUR]] contains all of these except monotypesorts.ttf; simply install it.</div>Recolic