https://wiki.archlinux.org/api.php?action=feedcontributions&user=IronOrion&feedformat=atomArchWiki - User contributions [en]2024-03-28T16:55:47ZUser contributionsMediaWiki 1.41.0https://wiki.archlinux.org/index.php?title=Multiboot_USB_drive&diff=491987Multiboot USB drive2017-09-30T23:51:59Z<p>IronOrion: Added information about multibootusb from multibootusb.org</p>
<hr />
<div>[[Category:Boot process]]<br />
[[Category:Getting and installing Arch]]<br />
[[de:Multiboot USB Stick]]<br />
[[ja:マルチブート USB ドライブ]]<br />
{{Related articles start}}<br />
{{Related|GRUB}}<br />
{{Related|Syslinux}}<br />
{{Related|Archiso}}<br />
{{Related articles end}}<br />
{{Move|Multiboot disk images|See discussion|section=Scope and title}}<br />
A multiboot USB flash drive allows booting multiple ISO files from a single device. The ISO files can be copied to the device and booted directly without unpacking them first. There are multiple methods available, but they may not work for all ISO images.<br />
<br />
== Using GRUB and loopback devices ==<br />
<br />
{{Poor writing|multiple [[Help:Style|style]] issues}}<br />
<br />
Advantages:<br />
* only a single partition required<br />
* all ISO files are found in one directory<br />
* adding and removing ISO files is simple<br />
<br />
Disadvantages:<br />
* not all ISO images are compatible<br />
* the original boot menu for the ISO file is not shown<br />
* it can be difficult to find a working boot entry<br />
<br />
=== Preparation ===<br />
<br />
{{Expansion|How much extra space is needed for the bootloader?}}<br />
<br />
Create at least one partition and a filesystem supported by [[GRUB]] on the USB drive. See [[Partitioning]] and [[File systems#Create a file system]]. Choose the size based on the total size of the ISO files that you want to store on the drive, and plan for extra space for the bootloader.<br />
<br />
=== Installing GRUB ===<br />
<br />
==== Simple installation ====<br />
<br />
Mount the filesystem located on the USB drive:<br />
<br />
# mount /dev/sdXY /mnt<br />
<br />
Create the directory /boot:<br />
<br />
# mkdir /mnt/boot<br />
<br />
Install GRUB on the USB drive:<br />
<br />
# grub-install --target=i386-pc --recheck --boot-directory=/mnt/boot /dev/sdX<br />
<br />
To boot ISOs in UEFI mode, additionally install GRUB for the UEFI target:<br />
<br />
# grub-install --target x86_64-efi --efi-directory /mnt --boot-directory=/mnt/boot --removable<br />
<br />
For UEFI, the partition has to be the first one in an MBR partition table and formatted with FAT32.<br />
<br />
==== Hybrid UEFI GPT + BIOS GPT/MBR boot ====<br />
This configuration is useful for creating a universal USB key, bootable on both BIOS and UEFI systems.<br />
First, create a [[GPT]] partition table on the device. You need at least three partitions:<br />
# A BIOS boot partition (type EF02)<br />
# An EFI System partition (type EF00 with a [[EFI_System_Partition#Format_the_partition|FAT32 filesystem]])<br />
# Your data partition (use a filesystem supported by [[GRUB]])<br />
<br />
The BIOS boot partition only needs to be 1 MiB in size, while the EFI System partition can be as small as 50 MiB. The data partition can take up the rest of the drive.<br />
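As an illustrative sketch (assuming ''sgdisk'' from {{Pkg|gptfdisk}}), the three-partition layout above can be created non-interactively. Practising on a disk image avoids destroying a real device; substitute {{ic|/dev/sdX}} for {{ic|disk.img}} only after reviewing the result with {{ic|sgdisk -p}}:<br />

```shell
# Practice image standing in for the real USB device (/dev/sdX).
truncate -s 100M disk.img

# Create the GPT layout described above: BIOS boot, ESP, data.
if command -v sgdisk >/dev/null; then
    sgdisk --zap-all disk.img
    sgdisk --new=1:0:+1M  --typecode=1:EF02 disk.img   # BIOS boot partition
    sgdisk --new=2:0:+50M --typecode=2:EF00 disk.img   # EFI System partition
    sgdisk --new=3:0:0    --typecode=3:8300 disk.img   # data partition
fi
```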
<br />
Next you must create a hybrid MBR partition table, as setting the boot flag on the protective MBR partition might not be enough.<br />
<br />
Hybrid MBR partition table creation example using gdisk:<br />
<br />
{{bc|<br />
# gdisk /dev/sdX<br />
<br />
Command (? for help): r<br />
Recovery/transformation command (? for help): h<br />
Type from one to three GPT partition numbers, separated by spaces, to be added to the hybrid MBR, in sequence: 1 2 3<br />
Place EFI GPT (0xEE) partition first in MBR (good for GRUB)? (Y/N): N<br />
<br />
Creating entry for GPT partition #1 (MBR partition #2)<br />
Enter an MBR hex code (default EF): <br />
Set the bootable flag? (Y/N): N<br />
<br />
Creating entry for GPT partition #2 (MBR partition #3)<br />
Enter an MBR hex code (default EF): <br />
Set the bootable flag? (Y/N): N<br />
<br />
Creating entry for GPT partition #3 (MBR partition #4)<br />
Enter an MBR hex code (default 83): <br />
Set the bootable flag? (Y/N): Y<br />
<br />
Recovery/transformation command (? for help): x<br />
Expert command (? for help): h<br />
Expert command (? for help): w<br />
<br />
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING<br />
PARTITIONS!!<br />
<br />
Do you want to proceed? (Y/N): Y<br />
}}<br />
<br />
You can now install GRUB to support both EFI + GPT and BIOS + GPT/MBR. The GRUB configuration (--boot-directory) can be kept in the same place.<br />
<br />
First, you need to mount the EFI System partition and the data partition of your USB drive. Then, you can install GRUB for EFI with:<br />
# grub-install --target=x86_64-efi --efi-directory=/EFI_MOUNTPOINT --boot-directory=/DATA_MOUNTPOINT/boot --removable --recheck<br />
<br />
And for BIOS with:<br />
# grub-install --target=i386-pc --boot-directory=/DATA_MOUNTPOINT/boot --recheck /dev/sdX<br />
<br />
As an additional fallback, you can also install GRUB on your MBR-bootable data partition:<br />
# grub-install --target=i386-pc --boot-directory=/DATA_MOUNTPOINT/boot --recheck /dev/sdX3<br />
<br />
=== Configuring GRUB ===<br />
<br />
==== Using a template ====<br />
Several git projects provide ready-made GRUB configuration files, including a generic {{ic|grub.cfg}} which loads the other boot entries on demand, showing them only if the specified ISO files - or the folders containing them - are present on the drive.<br />
<br />
Multiboot USB: https://github.com/aguslr/multibootusb<br />
<br />
GLIM (GRUB2 Live ISO Multiboot): https://github.com/thias/glim<br />
<br />
==== Manual configuration ====<br />
<br />
For the purposes of a multiboot USB drive, it is easier to edit {{ic|grub.cfg}} by hand than to generate it. Alternatively, make the following changes in {{ic|/etc/grub.d/40_custom}} or {{ic|/mnt/boot/grub/custom.cfg}} and generate {{ic|/mnt/boot/grub/grub.cfg}} using [[GRUB#Generate the main configuration file|grub-mkconfig]].<br />
<br />
Since it is recommended to use a [[Persistent block device naming|persistent name]] instead of {{ic|/dev/sd''xY''}} to identify the partition on the USB drive where the image files are located, define a variable for convenience to hold the value. If the ISO images are on the same partition as GRUB, use the following to read the UUID at boot time:<br />
<br />
{{hc|/mnt/boot/grub/grub.cfg|2=<br />
# path to the partition holding ISO images (using UUID)<br />
probe -u $root --set=rootuuid<br />
set imgdevpath="/dev/disk/by-uuid/$rootuuid"<br />
}}<br />
<br />
Or specify the UUID explicitly:<br />
<br />
{{hc|/mnt/boot/grub/grub.cfg|2=<br />
# path to the partition holding ISO images (using UUID)<br />
set imgdevpath="/dev/disk/by-uuid/''UUID_value''"<br />
}}<br />
<br />
Alternatively, use the device label instead of UUID:<br />
<br />
{{hc|/mnt/boot/grub/grub.cfg|2=<br />
# path to the partition holding ISO images (using labels)<br />
set imgdevpath="/dev/disk/by-label/''label_value''"<br />
}}<br />
<br />
The necessary UUID or label can be found using {{ic|lsblk -f}}. Do not use the same label as the Arch ISO for the USB device, otherwise the boot process will fail.<br />
<br />
To complete the configuration, a boot entry for each ISO image has to be added below this header; see the next section for examples.<br />
<br />
=== Boot entries ===<br />
<br />
It is assumed that the ISO images are stored in the {{ic|boot/iso/}} directory on the same filesystem where GRUB is installed. Otherwise it would be necessary to prefix the path to the ISO file with the device identification when using the {{ic|loopback}} command, for example {{ic|loopback loop '''(hd1,2)'''$isofile}}. As this identification of devices is not [[Persistent block device naming|persistent]], it is not used in the examples in this section.<br />
<br />
Alternatively, use persistent block device naming as follows, replacing ''123-456'' with the UUID of the filesystem holding the ISO images.<br />
{{bc|1=<br />
# define globally (i.e outside any menuentry)<br />
insmod search_fs_uuid<br />
search --no-floppy --set='''isopart''' --fs-uuid ''123-456''<br />
# later use inside each menuentry instead<br />
loopback loop '''($isopart)'''$isofile<br />
}}<br />
<br />
{{Tip| For a list of kernel parameters, see [https://www.kernel.org/doc/Documentation/admin-guide/kernel-parameters.rst kernel-parameters.rst] and [https://www.kernel.org/doc/Documentation/admin-guide/kernel-parameters.txt kernel-parameters.txt] (still incomplete). For more examples of boot entries, see the [https://www.gnu.org/software/grub/manual/grub.html#Multi_002dboot-manual-config GRUB upstream documentation] or the documentation for the distribution you wish to boot.}}<br />
<br />
==== Arch Linux monthly release ====<br />
Also see [[archiso]].<br />
<br />
{{bc|1=<br />
menuentry '[loopback]archlinux-2017.04.01-x86_64.iso' {<br />
set isofile='/boot/iso/archlinux-2017.04.01-x86_64.iso'<br />
loopback loop $isofile<br />
linux (loop)/arch/boot/'''x86_64'''/vmlinuz archisodevice=/dev/loop0 img_dev=$imgdevpath img_loop=$isofile earlymodules=loop<br />
initrd (loop)/arch/boot/'''x86_64'''/archiso.img<br />
}<br />
}}<br />
<br />
{{note|As of archiso v23 (monthly release 2015.10.01), the parameter {{ic|1=archisodevice=/dev/loop0}} is no longer necessary when booting with GRUB and loopback devices.}}<br />
<br />
==== archboot ====<br />
Also see [[archboot]].<br />
<br />
{{bc|1=<br />
menuentry '[loopback]archlinux-2014.11-1-archboot' {<br />
set isofile='/boot/iso/archlinux-2014.11-1-archboot.iso'<br />
loopback loop $isofile<br />
linux (loop)/boot/vmlinuz_'''x86_64''' iso_loop_dev=$imgdevpath iso_loop_path=$isofile<br />
initrd (loop)/boot/initramfs_'''x86_64'''.img<br />
}<br />
}}<br />
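==== Ubuntu ====<br />
An illustrative entry for an Ubuntu-style live ISO. The {{ic|casper}} file names and the {{ic|1=iso-scan/filename=}} parameter are typical for these images but vary between releases, so verify them against the contents of the ISO:<br />
<br />
{{bc|1=<br />
menuentry '[loopback]ubuntu-16.04-desktop-amd64.iso' {<br />
set isofile='/boot/iso/ubuntu-16.04-desktop-amd64.iso'<br />
loopback loop $isofile<br />
linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/filename=$isofile quiet splash<br />
initrd (loop)/casper/initrd.lz<br />
}<br />
}}<br />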
<br />
== Using Syslinux and memdisk ==<br />
<br />
Using the [http://www.syslinux.org/wiki/index.php/MEMDISK memdisk] module, the entire ISO image is loaded into memory and its bootloader is executed. Make sure that the system booting this USB drive has a sufficient amount of memory for both the image file and the running operating system.<br />
<br />
=== Preparation ===<br />
<br />
Make sure that the USB drive is properly [[Partitioning|partitioned]] and that there is a partition with a [[file system]] supported by Syslinux, for example FAT32 or ext4. Then install Syslinux to this partition, see [[Syslinux#Installation]]{{Broken section link}}.<br />
<br />
=== Install the memdisk module ===<br />
<br />
The memdisk module is not installed during the Syslinux installation, so it has to be copied manually. Mount the partition where Syslinux is installed to {{ic|/mnt/}} and copy the memdisk module to the same directory where Syslinux is installed:<br />
<br />
# cp /usr/lib/syslinux/bios/memdisk /mnt/boot/syslinux/<br />
<br />
=== Configuration ===<br />
<br />
After copying the ISO files on the USB drive, edit the [[Syslinux#Configuration|Syslinux configuration file]] and create menu entries for the ISO images. The basic entry looks like this:<br />
<br />
{{hc|boot/syslinux/syslinux.cfg|<br />
LABEL ''some_label''<br />
LINUX memdisk<br />
INITRD ''/path/to/image.iso''<br />
APPEND iso<br />
}}<br />
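A filled-in entry for the Arch ISO used in the GRUB examples above might look like this; the label and path are illustrative:<br />
<br />
{{hc|boot/syslinux/syslinux.cfg|<br />
LABEL arch_iso<br />
LINUX memdisk<br />
INITRD /boot/iso/archlinux-2017.04.01-x86_64.iso<br />
APPEND iso<br />
}}<br />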
<br />
See [http://www.syslinux.org/wiki/index.php/MEMDISK memdisk on Syslinux wiki] for more configuration options.<br />
<br />
=== Caveat for 32-bit systems ===<br />
<br />
When booting a 32-bit system from an image larger than 128 MiB, it is necessary to increase the maximum memory usage of vmalloc. This is done by adding {{ic|1=vmalloc=''value''M}} to the kernel parameters, where {{ic|''value''}} is larger than the size of the ISO image in MiB.[http://www.syslinux.org/wiki/index.php/MEMDISK#-_memdiskfind_in_combination_with_phram_and_mtdblock]<br />
<br />
For example when booting the 32-bit system from the [https://www.archlinux.org/download/ Arch installation ISO], press the {{ic|Tab}} key over the {{ic|Boot Arch Linux (i686)}} entry and add {{ic|1=vmalloc=768M}} at the end. Skipping this step will result in the following error during boot:<br />
<br />
modprobe: ERROR: could not insert 'phram': Input/output error<br />
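The sizing rule above (a value larger than the ISO size in MiB) can be sketched as a small helper. The 25% headroom and the 64 MiB minimum are arbitrary assumptions for illustration, not an official formula:<br />

```shell
# Suggest a vmalloc= kernel parameter for a given ISO image.
iso_size_mib() {
    # File size rounded up to whole MiB.
    local bytes
    bytes=$(stat -c %s "$1")
    echo $(( (bytes + 1048575) / 1048576 ))
}

vmalloc_for_iso() {
    # Headroom: 25% of the image size, but at least 64 MiB.
    local size extra
    size=$(iso_size_mib "$1")
    extra=$(( size / 4 ))
    [ "$extra" -lt 64 ] && extra=64
    echo "vmalloc=$(( size + extra ))M"
}
```

For a 512 MiB image this yields {{ic|1=vmalloc=640M}}, in the same ballpark as the {{ic|1=vmalloc=768M}} suggested above for the Arch ISO.<br />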
<br />
== Using MultiBootUSB ==<br />
<br />
MultiBootUSB is cross-platform software written in Python that installs multiple live Linux distributions on a USB disk non-destructively, with an option to uninstall them again later.<br />
<br />
http://multibootusb.org/<br />
<br />
=== Installation ===<br />
<br />
The package {{AUR|multibootusb}} can be installed from the AUR. It includes both graphical and command line interfaces.<br />
<br />
== See also ==<br />
<br />
* GRUB:<br />
** https://help.ubuntu.com/community/Grub2/ISOBoot/Examples<br />
** https://help.ubuntu.com/community/Grub2/ISOBoot<br />
* Syslinux:<br />
** [http://www.syslinux.org/wiki/index.php?title=Boot_an_Iso_image Boot an ISO image]</div>IronOrionhttps://wiki.archlinux.org/index.php?title=UDF&diff=491986UDF2017-09-30T23:42:10Z<p>IronOrion: Created UDF page with minimal information</p>
<hr />
<div>[[Category:File systems]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related articles end}}<br />
<br />
Universal Disk Format (UDF) is a profile of the specification known as ISO/IEC 13346 and ECMA-167, and is an open, vendor-neutral file system for computer data storage on a broad range of media.<br />
<br />
== Installation ==<br />
<br />
The tools to manage UDF partitions are in the {{Pkg|udftools}} package, which can be found in the community repository.<br />
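A hedged usage sketch: ''mkudffs'' also accepts a plain image file, which is a safe way to experiment before pointing it at a real device such as {{ic|/dev/sdX1}}:<br />

```shell
# Create a small image and format it as UDF (skipped if udftools is absent).
truncate -s 64M udf.img
if command -v mkudffs >/dev/null; then
    mkudffs --media-type=hd udf.img
fi
ls -l udf.img   # inspect the result
```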
<br />
<br />
== See also ==<br />
<br />
* [[wikipedia:Universal Disk Format|UDF Wikipedia Entry]]</div>IronOrionhttps://wiki.archlinux.org/index.php?title=File_systems&diff=491985File systems2017-09-30T23:37:29Z<p>IronOrion: /* Types of file systems */ Added UDF to the table of filesystems</p>
<hr />
<div>[[Category:File systems]]<br />
[[es:File systems]]<br />
[[hu:File systems]]<br />
[[it:File systems]]<br />
[[ja:ファイルシステム]]<br />
[[pl:File systems]]<br />
[[ru:File systems]]<br />
[[zh-hans:File systems]]<br />
{{Related articles start}}<br />
{{Related|Core utilities#lsblk}}<br />
{{Related|File permissions and attributes}}<br />
{{Related|fsck}}<br />
{{Related|fstab}}<br />
{{Related|List of applications/Internet#Distributed file systems}}<br />
{{Related|List of applications#Mount tools}}<br />
{{Related|Optical disc drive}}<br />
{{Related|Partitioning}}<br />
{{Related|NFS}}<br />
{{Related|NTFS-3G}}<br />
{{Related|FAT}}<br />
{{Related|QEMU#Mounting a partition inside a raw disk image}}<br />
{{Related|Samba}}<br />
{{Related|tmpfs}}<br />
{{Related|udev}}<br />
{{Related|udisks}}<br />
{{Related|umask}}<br />
{{Related|USB storage devices}}<br />
<br />
{{Related articles end}}<br />
<br />
From [[Wikipedia:File system|Wikipedia]]:<br />
:In computing, a file system (or filesystem) is used to control how data is stored and retrieved. Without a file system, information placed in a storage medium would be one large body of data with no way to tell where one piece of information stops and the next begins. By separating the data into pieces and giving each piece a name, the information is easily isolated and identified.<br />
:Taking its name from the way paper-based information systems are named, each group of data is called a "file". The structure and logic rules used to manage the groups of information and their names is called a "file system".<br />
<br />
Individual drive partitions can be set up using one of the many different available filesystems. Each has its own advantages, disadvantages, and unique idiosyncrasies. A brief overview of supported filesystems follows; the links are to Wikipedia pages that provide much more information.<br />
<br />
== Types of file systems ==<br />
<br />
See {{man|5|filesystems}} for a general overview, and [[Wikipedia:Comparison of file systems]] for a detailed feature comparison. File systems supported by the kernel are listed in {{ic|/proc/filesystems}}.<br />
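For instance, the filesystems in that file which require a backing block device are the lines without the {{ic|nodev}} flag; the helper below is a small illustrative sketch, not a standard tool:<br />

```shell
# Print filesystems from /proc/filesystems that need a block device,
# i.e. lines without the "nodev" flag in the first column.
disk_filesystems() {
    awk 'NF == 1 { print $1 }' "${1:-/proc/filesystems}"
}

disk_filesystems   # e.g. ext4, vfat, ... depending on the kernel
```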
<br />
{| class="wikitable sortable"<br />
! File system<br />
! Creation command<br />
! Userspace utilities<br />
! [[Archiso]] [https://git.archlinux.org/archiso.git/tree/configs/releng/packages.both]<br />
! Kernel documentation [https://www.kernel.org/doc/Documentation/filesystems/]<br />
! Notes<br />
|-<br />
| [[Btrfs]]<br />
| {{man|8|mkfs.btrfs}}<br />
| {{Pkg|btrfs-progs}}<br />
| {{Yes}}<br />
| [https://www.kernel.org/doc/Documentation/filesystems/btrfs.txt btrfs.txt]<br />
| [https://btrfs.wiki.kernel.org/index.php/Status Stability status]<br />
|-<br />
| [[VFAT]]<br />
| {{man|8|mkfs.vfat}}<br />
| {{Pkg|dosfstools}}<br />
| {{Yes}}<br />
| [https://www.kernel.org/doc/Documentation/filesystems/vfat.txt vfat.txt]<br />
|<br />
|-<br />
| [[w:exFAT|exFAT]]<br />
| {{man|8|mkfs.exfat|url=}}<br />
| {{Pkg|exfat-utils}}<br />
| {{Y|Optional}}<br />
| N/A (FUSE-based)<br />
|<br />
|-<br />
| [[F2FS]]<br />
| {{man|8|mkfs.f2fs}}<br />
| {{Pkg|f2fs-tools}}<br />
| {{Yes}}<br />
| [https://www.kernel.org/doc/Documentation/filesystems/f2fs.txt f2fs.txt]<br />
<br />
| Flash-based devices<br />
|-<br />
| [[ext3]]<br />
| {{man|8|mke2fs}}<br />
| {{Pkg|e2fsprogs}}<br />
| {{Yes}} ({{Grp|base}})<br />
| [https://www.kernel.org/doc/Documentation/filesystems/ext3.txt ext3.txt]<br />
|<br />
|-<br />
| [[ext4]]<br />
| {{man|8|mke2fs}}<br />
| {{Pkg|e2fsprogs}}<br />
| {{Yes}} ({{Grp|base}})<br />
| [https://www.kernel.org/doc/Documentation/filesystems/ext4.txt ext4.txt]<br />
|<br />
|-<br />
| [[w:Hierarchical_File_System|HFS]]<br />
| {{man|8|mkfs.hfsplus}}<br />
| {{Pkg|hfsprogs}}<br />
| {{Y|Optional}}<br />
| [https://www.kernel.org/doc/Documentation/filesystems/hfs.txt hfs.txt]<br />
| [[w:macOS|macOS]] file system<br />
|-<br />
| [[JFS]]<br />
| {{man|8|mkfs.jfs}}<br />
| {{Pkg|jfsutils}}<br />
| {{Yes}} ({{Grp|base}})<br />
| [https://www.kernel.org/doc/Documentation/filesystems/jfs.txt jfs.txt]<br />
|<br />
|-<br />
| [[Wikipedia:NILFS|NILFS2]]<br />
| {{man|8|mkfs.nilfs2}}<br />
| {{Pkg|nilfs-utils}}<br />
| {{Yes}}<br />
| [https://www.kernel.org/doc/Documentation/filesystems/nilfs2.txt nilfs2.txt]<br />
|<br />
|-<br />
| [[NTFS]]<br />
| {{man|8|mkfs.ntfs}}<br />
| {{Pkg|ntfs-3g}}<br />
| {{Yes}}<br />
| N/A (FUSE-based)<br />
| [[w:Microsoft_Windows|Windows]] file system<br />
|-<br />
| [[Reiser4]]<br />
| {{man|8|mkfs.reiser4|url=}}<br />
| {{AUR|reiser4progs}}<br />
| {{No}}<br />
|<br />
|<br />
|-<br />
| [[w:ReiserFS|ReiserFS]]<br />
| {{man|8|mkfs.reiserfs}}<br />
| {{Pkg|reiserfsprogs}}<br />
| {{Yes}} ({{Grp|base}})<br />
|<br />
|<br />
|-<br />
| [[XFS]]<br />
| {{man|8|mkfs.xfs}}<br />
| {{Pkg|xfsprogs}}<br />
| {{Yes}} ({{Grp|base}})<br />
|<br />
[https://www.kernel.org/doc/Documentation/filesystems/xfs.txt xfs.txt]<br><br />
[https://www.kernel.org/doc/Documentation/filesystems/xfs-delayed-logging-design.txt xfs-delayed-logging-design.txt]<br><br />
[https://www.kernel.org/doc/Documentation/filesystems/xfs-self-describing-metadata.txt xfs-self-describing-metadata.txt]<br />
|<br />
|-<br />
| [[ZFS]]<br />
| <br />
| {{AUR|zfs-linux}}<br />
| {{No}}<br />
| N/A ([[w:OpenZFS|OpenZFS]] port)<br />
|<br />
|-<br />
| [[UDF]]<br />
| {{man|8|mkfs.udf}}<br />
| {{Pkg|udftools}}<br />
| {{No}}<br />
|<br />
|}<br />
<br />
{{Note|The kernel has its own NTFS driver (see [https://www.kernel.org/doc/Documentation/filesystems/ntfs.txt ntfs.txt]), but it has limited support for writing files.}}<br />
<br />
=== Journaling ===<br />
<br />
All the above filesystems, with the exception of ext2, FAT16/32, Btrfs and ZFS, use [[Wikipedia:Journaling_file_system|journaling]]. Journaling provides fault-resilience by logging changes before they are committed to the filesystem. In the event of a system crash or power failure, such file systems are faster to bring back online and less likely to become corrupted. The logging takes place in a dedicated area of the filesystem.<br />
<br />
Not all journaling techniques are the same. Ext3 and ext4 offer data-mode journaling, which logs both data and meta-data, as well as the possibility to journal only meta-data changes. Data-mode journaling comes with a speed penalty and is not enabled by default. In the same vein, [[Reiser4]] offers so-called [https://reiser4.wiki.kernel.org/index.php/Reiser4_transaction_models "transaction models"], which include pure journaling (equivalent to ext4's data-mode journaling), a pure copy-on-write approach (equivalent to btrfs' default) and a combined approach which heuristically alternates between the former two.<br />
<br />
{{Note|Reiser4 does not provide an equivalent to ext4's default journaling behavior (meta-data only).}}<br />
<br />
The other filesystems provide ordered-mode journaling, which only logs meta-data. While all journaling will return a filesystem to a valid state after a crash, data-mode journaling offers the greatest protection against corruption and data loss. There is a compromise in system performance, however, because data-mode journaling does two write operations: first to the journal and then to the disk. The trade-off between system speed and data safety should be considered when choosing the filesystem type.<br />
<br />
Filesystems based on copy-on-write, such as Btrfs and ZFS, have no need for a traditional journal to protect metadata, because metadata blocks are never updated in place. Although Btrfs still has a journal-like log tree, it is only used to speed up fdatasync/fsync.<br />
<br />
=== FUSE-based file systems ===<br />
<br />
[[Wikipedia:Filesystem in Userspace|Filesystem in Userspace]] (FUSE) is a mechanism for Unix-like operating systems that lets non-privileged users create their own file systems without editing kernel code. This is achieved by running file system code in ''user space'', while the FUSE kernel module provides only a "bridge" to the actual kernel interfaces.<br />
<br />
Some FUSE-based file systems:<br />
<br />
* {{App|adbfs-git|Mount an Android device connected via USB.|http://collectskin.com/adbfs/|{{AUR|adbfs-git}}}}<br />
* {{App|[[EncFS]]|EncFS is a userspace stackable cryptographic file-system.|https://vgough.github.io/encfs/|{{Pkg|encfs}}}}<br />
* {{App|fuseiso|Mount an ISO as a regular user.|http://sourceforge.net/projects/fuseiso/|{{Pkg|fuseiso}}}}<br />
* {{App|[[gitfs]]|gitfs is a FUSE file system that fully integrates with git.|https://www.presslabs.com/gitfs/|{{Aur|gitfs}}}}<br />
* {{App|[[gocryptfs]]|gocryptfs is a userspace stackable cryptographic file-system.|https://nuetzlich.net/gocryptfs/|{{Aur|gocryptfs}}}}<br />
* {{App|xbfuse-git|Mount an Xbox (360) ISO.|http://multimedia.cx/xbfuse/|{{AUR|xbfuse-git}}}}<br />
* {{App|xmlfs|Represent an XML file as a directory structure for easy access.|https://github.com/halhen/xmlfs|{{AUR|xmlfs}}}}<br />
* {{App|vdfuse|Mounting VirtualBox disk images (VDI/VMDK/VHD).|https://github.com/muflone/virtualbox-includes|{{AUR|vdfuse}}}}<br />
<br />
See [[Wikipedia:Filesystem in Userspace#Example uses]] for more.<br />
<br />
=== Stackable file systems ===<br />
<br />
* {{App|aufs|Advanced Multi-layered Unification Filesystem, a complete rewrite of Unionfs; it was rejected from Linux mainline, and OverlayFS was merged into the Linux kernel instead.|http://aufs.sourceforge.net|{{AUR|aufs}}}}<br />
<br />
* {{App|[[eCryptfs]]|The Enterprise Cryptographic Filesystem is a package of disk encryption software for Linux. It is implemented as a POSIX-compliant filesystem-level encryption layer, aiming to offer functionality similar to that of GnuPG at the operating system level.|http://ecryptfs.org|{{Pkg|ecryptfs-utils}}}}<br />
<br />
* {{App|mergerfs|a FUSE based union filesystem.|https://github.com/trapexit/mergerfs|{{AUR|mergerfs}}}}<br />
<br />
* {{App|mhddfs|Multi-HDD FUSE filesystem, a FUSE based union filesystem.|http://mhddfs.uvw.ru|{{AUR|mhddfs}}}}<br />
<br />
* {{App|[[overlayfs]]|OverlayFS is a filesystem service for Linux which implements a union mount for other file systems.|https://www.kernel.org/doc/Documentation/filesystems/overlayfs.txt|{{Pkg|linux}}}}<br />
<br />
* {{App|Unionfs|Unionfs is a filesystem service for Linux, FreeBSD and NetBSD which implements a union mount for other file systems.|http://unionfs.filesystems.org/}}<br />
<br />
* {{App|unionfs-fuse|A user space Unionfs implementation.|https://github.com/rpodgorny/unionfs-fuse|{{Pkg|unionfs-fuse}}}}<br />
<br />
=== Read-only file systems ===<br />
<br />
* {{App|[[Wikipedia: SquashFS|SquashFS]]|SquashFS is a compressed read only filesystem. SquashFS compresses files, inodes and directories, and supports block sizes up to 1 MB for greater compression.|http://squashfs.sourceforge.net/|{{Pkg|squashfs-tools}}}}<br />
<br />
=== Clustered file systems ===<br />
<br />
* {{App|[[Ceph]]|Unified, distributed storage system designed for excellent performance, reliability and scalability.|https://ceph.com/|{{pkg|ceph}}}}<br />
* {{App|[[Glusterfs]]|Cluster file system capable of scaling to several peta-bytes.|https://www.gluster.org/|{{Pkg|glusterfs}}}}<br />
* {{App|[[IPFS]]|A peer-to-peer hypermedia protocol to make the web faster, safer, and more open. IPFS aims to replace HTTP and build a better web for all of us. It stores parts of a file in blocks; each network node stores only the content it is interested in, providing deduplication and distribution in a scalable system limited only by its users (currently in alpha).|https://ipfs.io/|{{Pkg|go-ipfs}}}}<br />
* {{App|[[Wikipedia: MooseFS|MooseFS]]|MooseFS is a fault tolerant, highly available and high performance scale-out network distributed file system.|https://moosefs.com/|{{Pkg|moosefs}}}}<br />
* {{App|[[OpenAFS]]|Open source implementation of the AFS distributed file system|http://www.openafs.org|{{AUR|openafs}}}}<br />
* {{App|[[Wikipedia: OrangeFS|OrangeFS]]|OrangeFS is a scale-out network file system designed for transparently accessing multi-server-based disk storage, in parallel. Has optimized MPI-IO support for parallel and distributed applications. Simplifies the use of parallel storage not only for Linux clients, but also for Windows, Hadoop, and WebDAV. POSIX-compatible. Part of Linux kernel since version 4.6. |http://www.orangefs.org/}}<br />
* {{App|Sheepdog|Distributed object storage system for volume and container services and manages the disks and nodes intelligently.|https://sheepdog.github.io/sheepdog/}}<br />
* {{App|[[Wikipedia:Tahoe-LAFS|Tahoe-LAFS]]|Tahoe Least-Authority Filesystem is a free and open, secure, decentralized, fault-tolerant, peer-to-peer distributed data store and distributed file system.<br />
|https://tahoe-lafs.org/|{{AUR|tahoe-lafs}}}}<br />
<br />
== Identify existing file systems ==<br />
<br />
To identify existing file systems, you can use [[lsblk]]:<br />
<br />
{{hc|1=$ lsblk -f|2=<br />
NAME FSTYPE LABEL UUID MOUNTPOINT<br />
sdb <br />
└─sdb1 vfat Transcend 4A3C-A9E9 <br />
}}<br />
<br />
An existing file system, if present, will be shown in the {{ic|FSTYPE}} column. If [[mount]]ed, it will appear in the {{ic|MOUNTPOINT}} column.<br />
<br />
== Create a file system ==<br />
<br />
File systems are usually created on a [[partition]], inside logical containers such as [[LVM]], [[RAID]] and [[dm-crypt]], or on a regular file (see [[w:Loop device]]). This section describes the partition case.<br />
<br />
{{Note|1=File systems can be written directly to a disk, known as a [https://msdn.microsoft.com/en-us/library/windows/hardware/dn640535(v=vs.85).aspx#gpt_faq_superfloppy superfloppy] or ''partitionless disk''. Certain limitations are involved with this method, particularly if [[Arch boot process|booting]] from such a drive. See [[Btrfs#Partitionless Btrfs disk]] for an example.}}<br />
<br />
{{Warning|<br />
* After creating a new filesystem, data previously stored on the partition is unlikely to be recoverable. '''Create a backup of any data you want to keep'''.<br />
* The purpose of a given partition may restrict the choice of file system. For example, an [[EFI System Partition]] must contain a FAT32 ({{ic|mkfs.vfat}}) file system, and the file system containing the {{ic|/boot}} directory must be supported by the [[boot loader]].<br />
}}<br />
<br />
Before continuing, [[lsblk|identify the device]] where the file system will be created and whether or not it is mounted. For example:<br />
<br />
{{hc|$ lsblk -f|<br />
NAME FSTYPE LABEL UUID MOUNTPOINT<br />
sda<br />
├─sda1 C4DA-2C4D <br />
├─sda2 ext4 5b1564b2-2e2c-452c-bcfa-d1f572ae99f2 /mnt<br />
└─sda3 56adc99b-a61e-46af-aab7-a6d07e504652 <br />
}}<br />
<br />
Mounted file systems '''must''' be [[#Umount a file system|unmounted]] before proceeding. In the above example an existing filesystem is on {{ic|/dev/sda2}} and is mounted at {{ic|/mnt}}. It would be unmounted with:<br />
<br />
# umount /dev/sda2<br />
<br />
To find just mounted file systems, see [[#List mounted file systems]].<br />
<br />
To create a new file system, use {{man|8|mkfs}}. See [[#Types of file systems]] for the exact type, as well as userspace utilities you may wish to install for a particular file system.<br />
<br />
For example, to create a new file system of type [[ext4]] (common for Linux data partitions) on {{ic|/dev/sda1}}, run:<br />
<br />
# mkfs.ext4 /dev/sda1<br />
<br />
{{Tip|<br />
* Use the {{ic|-L}} flag of ''mkfs.ext4'' to specify a [[Persistent_block_device_naming#by-label|file system label]]. ''e2label'' can be used to change the label on an existing file system.<br />
* File systems may be ''resized'' after creation, with certain limitations. For example, an [[XFS]] filesystem's size can be increased, but it cannot reduced. See [[w:Comparison_of_file_systems#Resize_capabilities|Resize capabilities]] and the respective file system documentation for details.}}<br />
<br />
The new file system can now be mounted to a directory of choice.<br />
<br />
== Mount a file system ==<br />
<br />
To manually mount a filesystem located on a device (e.g., a partition) to a directory, use {{man|8|mount}}. This example mounts {{ic|/dev/sda1}} to {{ic|/mnt}}.<br />
<br />
# mount /dev/sda1 /mnt<br />
<br />
This attaches the filesystem on {{ic|/dev/sda1}} at the directory {{ic|/mnt}}, making the contents of the filesystem visible. Any data that existed at {{ic|/mnt}} before this action becomes invisible until the device is unmounted.<br />
<br />
[[fstab]] contains information on how devices should be automatically mounted if present. See the [[fstab]] article for more information on how to modify this behavior.<br />
<br />
If a device is specified in {{ic|/etc/fstab}} and only the device or mount point is given on the command line, that information will be used in mounting. For example, if {{ic|/etc/fstab}} contains a line indicating that {{ic|/dev/sda1}} should be mounted to {{ic|/mnt}}, then the following will automatically mount the device to that location:<br />
<br />
# mount /dev/sda1<br />
<br />
Or<br />
<br />
# mount /mnt<br />
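For the short forms to work, {{ic|/etc/fstab}} needs a matching entry. A hypothetical line using the ext4 UUID from the ''lsblk'' example earlier in this article (the mount options are illustrative):<br />
<br />
{{hc|/etc/fstab|2=<br />
UUID=5b1564b2-2e2c-452c-bcfa-d1f572ae99f2 /mnt ext4 defaults 0 2<br />
}}<br />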
<br />
''mount'' contains several options, many of which depend on the file system specified.<br />
The options can be changed, either by:<br />
* using flags on the command line with ''mount''<br />
* editing [[fstab]]<br />
* creating [[udev]] rules<br />
* [[Arch Build System|compiling the kernel yourself]]<br />
* or using filesystem-specific mount scripts (located at {{ic|/usr/bin/mount.*}}).<br />
<br />
See these related articles and the article of the filesystem of interest for more information.<br />
<br />
=== List mounted file systems ===<br />
<br />
To list all mounted file systems, use {{man|8|findmnt}}:<br />
<br />
$ findmnt<br />
<br />
''findmnt'' takes a variety of arguments which can filter the output and show additional information. For example, it can take a device or mount point as an argument to show only information on what is specified:<br />
<br />
$ findmnt /dev/sda1<br />
<br />
''findmnt'' gathers information from {{ic|/etc/fstab}}, {{ic|/etc/mtab}}, and {{ic|/proc/self/mounts}}.<br />
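For instance, the output can be limited to specific columns with {{ic|-o}} (a sketch; {{ic|/dev/sda1}} is an example device):<br />

```shell
# Print only the mount point and filesystem type, without a heading line
findmnt -n -o TARGET,FSTYPE /dev/sda1
```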
<br />
=== Unmount a file system ===<br />
<br />
To unmount a file system use {{man|8|umount}}. Either the device containing the file system (e.g., {{ic|/dev/sda1}}) or the mount point (e.g., {{ic|/mnt}}) can be specified:<br />
<br />
# umount /dev/sda1<br />
<br />
Or<br />
<br />
# umount /mnt<br />
<br />
== See also ==<br />
<br />
* {{man|5|filesystems}}<br />
* [https://www.kernel.org/doc/Documentation/filesystems/ Documentation of file systems supported by Linux]<br />
* [[Wikipedia:File systems]]<br />
* [[Wikipedia:Mount (Unix)]]</div>IronOrionhttps://wiki.archlinux.org/index.php?title=Laptop&diff=467672Laptop2017-02-04T23:45:47Z<p>IronOrion: /* Touchpad */ Updated the Touchpad Synaptics link to the libinput page</p>
<hr />
<div>[[Category:Laptops]]<br />
[[cs:Laptop]]<br />
[[es:Laptop]]<br />
[[it:Laptop]]<br />
[[ja:ノートパソコン]]<br />
[[ru:Laptop]]<br />
[[zh-hans:Laptop]]<br />
This '''Laptop main page''' contains links to articles (or article sections) needed to configure a laptop for the best experience. Setting up a laptop is in many ways the same as setting up a desktop. However, there are a few key differences. Arch Linux provides all the tools and programs necessary to take complete control of your laptop. These programs and utilities are highlighted below, with appropriate tips and tutorials. <br />
<br />
To gain an overview of the reported/achieved Linux hardware compatibility of a particular laptop model, see the per-vendor results in the subpages below. <br />
{{Laptops navigation}}<br />
If there are instructions specific to a laptop model, the respective article is cross-linked in the first column of the vendor subpages. If the model is not listed in a vendor table, existing instructions for similar models found via the [[:Category:Laptops]] vendor subcategories may help. <br />
<br />
== Power management ==<br />
<br />
{{Note|You should read the main article [[Power management]]. Additional laptop-specific features are described below.}}<br />
<br />
Power management is very important for anyone who wishes to make good use of their battery capacity. The following tools and programs help to increase battery life and keep your laptop cool and quiet.<br />
<br />
=== Battery state ===<br />
<br />
Reading the battery state can be done in multiple ways. The classical method is a daemon that periodically polls the battery level using the ACPI interface. On some systems, the battery sends events to [[udev]] whenever it charges or discharges by 1%; these events can be connected to an action using a udev rule.<br />
<br />
==== ACPI ====<br />
<br />
Battery state can be read using ACPI utilities from the terminal. ACPI command line utilities are provided via the {{Pkg|acpi}} package. See [[ACPI modules]] for more information.<br />
<br />
* {{Pkg|cbatticon}} is a battery icon that sits in the system tray.<br />
* {{AUR|batterymon-clone}} is a battery monitor that sits in the system tray, similar to batti.<br />
* {{AUR|batify}} is a Bash script to set plug and battery level notifications using udev and libnotify (multi-xusers).<br />
<br />
==== Hibernate on low battery level ====<br />
<br />
'''If''' your battery sends events to [[udev]] whenever it charges or discharges by 1%, you can use the following udev rule to automatically hibernate the system when the battery level is critical, thus preventing unsaved work from being lost.<br />
<br />
{{Note|Not all batteries report discharge events. Test by running {{ic|udevadm monitor --property}} while on battery and see if any events are reported; wait for at least a 1% drop. If no events are reported and {{ic|/sys/class/power_supply/BAT0/alarm}} is non-zero, the battery will likely trigger an event when {{ic|BAT0/energy_now}} drops below the alarm value, and the udev rule will work as long as the percentage math works out.}}<br />
<br />
{{Note|This rule will be repeated whenever the condition is set. As such, when resuming from hibernate when the battery is critical, the computer will hibernate directly. Some laptops do not boot beyond a certain battery level, so the rule below could be adjusted accordingly.}}<br />
<br />
{{hc|/etc/udev/rules.d/99-lowbat.rules|<nowiki><br />
# Hibernate the system when battery level drops to 5% or lower<br />
SUBSYSTEM=="power_supply", ATTR{status}=="Discharging", ATTR{capacity}=="[0-5]", RUN+="/usr/bin/systemctl hibernate"<br />
</nowiki>}}<br />
Batteries can jump to a lower value instead of discharging continuously, therefore a udev string matching pattern for all capacities 0 through 5 is used.<br />
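The {{ic|[0-5]}} match is a shell-style glob: it matches a single character 0 through 5, so a capacity of {{ic|5}} matches while {{ic|50}} does not. A small shell sketch of the same pattern:<br />

```shell
# Mimic udev's [0-5] glob match on the capacity attribute
matches_low_battery() {
    case "$1" in
        [0-5]) return 0 ;;
        *) return 1 ;;
    esac
}

matches_low_battery 4 && echo "4 matches"
matches_low_battery 50 || echo "50 does not match"
```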
<br />
Other rules can be added to perform different actions depending on power supply status and/or capacity.<br />
<br />
If your system does not generate the required ACPI events, use [[cron]] with the following script:<br />
<br />
{{bc|<nowiki><br />
#!/bin/sh<br />
acpi -b | awk -F'[,:%]' '{print $2, $3}' | {<br />
    read -r status capacity<br />
<br />
    if [ "$status" = Discharging ] && [ "$capacity" -lt 5 ]; then<br />
        logger "Critical battery threshold"<br />
        systemctl hibernate<br />
    fi<br />
}</nowiki><br />
}}<br />
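Assuming the script above is saved as, for example, {{ic|/usr/local/bin/battery-check}} and made executable, a root crontab entry running it every five minutes might look like the following (the path and interval are illustrative):<br />

```
*/5 * * * *  /usr/local/bin/battery-check
```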
<br />
===== Testing events =====<br />
<br />
One way to test udev rules is to have them create a file when they are run. For example:<br />
<br />
{{hc|/etc/udev/rules.d/98-discharging.rules|<nowiki><br />
SUBSYSTEM=="power_supply", ATTR{status}=="Discharging", RUN+="/usr/bin/touch /home/example/discharging"<br />
</nowiki>}}<br />
<br />
This creates a file at {{ic|/home/example/discharging}} when the laptop charger is unplugged. You can test whether the rule worked by unplugging your laptop and looking for this file. For more advanced udev rule testing, see [[Udev#Testing rules before loading]].<br />
<br />
=== Suspend and Hibernate ===<br />
<br />
Manually suspending the operating system, either to memory (standby) or to disk (hibernate), sometimes provides the most efficient way to optimize battery life, depending on the usage pattern of the laptop.<br />
<br />
See the main article [[Suspend and hibernate]].<br />
<br />
=== Hard drive spin down problem ===<br />
<br />
Documented [https://bugs.launchpad.net/ubuntu/+source/acpi-support/+bug/59695 here].<br />
<br />
To prevent your laptop hard drive from spinning down too often, set less aggressive power management as described in [[hdparm#Power management configuration]]. Even the default values may be too aggressive.<br />
<br />
=== Modify wake events ===<br />
<br />
Events which cause the system to resume from [[w:Advanced_Configuration_and_Power_Interface#Power_states|power states]] can be regulated in {{ic|/proc/acpi/wakeup}}. Writing a name from the ''Device'' column to this file toggles its status between {{ic|enabled}} and {{ic|disabled}}.<br />
<br />
For example, to disable waking from suspend (S3) on opening the lid, run:<br />
<br />
# echo LID > /proc/acpi/wakeup<br />
<br />
{{Accuracy|"Permanent toggling" is not the desired behaviour considering that ''systemd-tmpfiles'' can be run repeatedly.}}<br />
<br />
This change can be made permanent with {{man|5|tmpfiles.d}}:<br />
<br />
{{hc|/etc/tmpfiles.d/disable-lid-wakeup.conf|2=w /proc/acpi/wakeup - - - - LID}}<br />
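To check an entry's current status, the table printed by {{ic|/proc/acpi/wakeup}} can be parsed; the following is a sketch, assuming the typical four-column layout (Device, S-state, Status, Sysfs node) and the {{ic|LID}} device name, both of which vary by model:<br />

```shell
# Print the enabled/disabled status for a given device name,
# reading /proc/acpi/wakeup-style lines on standard input
wakeup_status() {
    awk -v dev="$1" '$1 == dev { sub(/^\*/, "", $3); print $3 }'
}

# Example with one typical line of output:
printf 'LID\t  S4\t*enabled   platform:PNP0C0D:00\n' | wakeup_status LID
```

On a real system this would be run as {{ic|wakeup_status LID < /proc/acpi/wakeup}}.<br />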
<br />
== Hardware support ==<br />
<br />
=== Screen brightness ===<br />
<br />
See [[Backlight]].<br />
<br />
=== Touchpad ===<br />
<br />
To get your touchpad working properly, see the [[libinput]] page. [[Touchpad Synaptics]] is the older input driver, which is currently in maintenance mode and is no longer updated.<br />
<br />
=== Fingerprint reader ===<br />
<br />
See [[Fingerprint-gui]], [[fprint]] and [[ThinkFinger]] (for ThinkPads).<br />
<br />
=== Webcam ===<br />
<br />
See [[Webcam setup]].<br />
<br />
=== Hard disk shock protection ===<br />
<br />
There are several laptops from different vendors featuring shock protection capabilities. As manufacturers have refused to support open source development of the required software components so far, Linux support for shock protection varies considerably between different hardware implementations.<br />
<br />
Currently, two projects, named [[HDAPS]] and [[Hpfall]] (available in the [[AUR]]), support this kind of protection. HDAPS is for IBM/Lenovo Thinkpads and hpfall for HP/Compaq laptops.<br />
<br />
=== Hybrid graphics ===<br />
<br />
Laptop manufacturers have developed technologies that combine two graphics cards in a single computer, enabling both high performance and power saving. These laptops usually use an Intel chip for display by default, so an [[Intel graphics]] driver is needed first. You can then [[Hybrid graphics|choose a method]] to utilize the second graphics chip.<br />
<br />
== Network time syncing ==<br />
<br />
For a laptop, it may be a good idea to use [[Chrony]] as an alternative to [[NTPd]], [[OpenNTPD]] or [[systemd-timesyncd]] to sync your clock over the network. Chrony is designed to work well even on systems with no permanent network connection (such as laptops), and is capable of much faster time synchronisation than standard ntp. Chrony has several advantages when used in systems running on virtual machines, such as a larger range for frequency correction to help correct quickly drifting clocks, and better response to rapid changes in the clock frequency. It also has a smaller memory footprint and no unnecessary process wakeups, improving power efficiency.<br />
<br />
== See also ==<br />
<br />
; General<br />
* [[CPU frequency scaling]] is a technology used primarily by notebooks which enables the OS to scale the CPU frequency up or down, depending on the current system load and/or power scheme.<br />
* [[Display Power Management Signaling]] describes how to automatically turn off the laptop screen after a specified interval of inactivity (not just blanked with a screensaver but completely shut off).<br />
* [[Wireless network configuration]] provides information about setting up wireless connection.<br />
* [[Extra keyboard keys]] describes configuration of Media keys.<br />
* [[acpid]] is a flexible and extensible daemon for delivering ACPI events.<br />
<br />
; Pages specific to certain laptop types<br />
* See [[:Category:Laptops]] and its subcategories for pages dedicated to specific models/vendors.<br />
* Battery tweaks for ThinkPads can be found in [[TLP]] and the [[tp_smapi]] article.<br />
* [[Acer Aspire One#acerhdf|acerhdf]] is a kernel module for controlling fan speed on Acer Aspire One and some Packard Bell Notebooks.<br />
<br />
; External resources<br />
* [http://www.linux-on-laptops.com/ http://www.linux-on-laptops.com/]<br />
* [http://www.linlap.com/ http://www.linlap.com/]</div>IronOrionhttps://wiki.archlinux.org/index.php?title=Touchpad&diff=467671Touchpad2017-02-04T23:42:17Z<p>IronOrion: Changed Touchpad to redirect to Laptop#Touchpad as Synaptics is not the recommended default anymore</p>
<hr />
<div>#REDIRECT [[Laptop#Touchpad]]</div>IronOrionhttps://wiki.archlinux.org/index.php?title=Libinput&diff=467666Libinput2017-02-04T21:06:59Z<p>IronOrion: /* Troubleshooting */ Added i8042 kernel parameters for some laptop models which do not detect a touchpad without</p>
<hr />
<div>{{Lowercase title}}<br />
[[Category:Input devices]]<br />
[[ja:Libinput]]<br />
{{Related articles start}}<br />
{{Related|Xorg}}<br />
{{Related|Touchpad Synaptics}}<br />
{{Related|Wayland}}<br />
{{Related articles end}}<br />
<br />
From the [https://freedesktop.org/wiki/Software/libinput/ libinput] wiki page: <br />
<br />
:libinput is a library to handle input devices in Wayland compositors and to provide a generic X.Org input driver. It provides device detection, device handling, input device event processing and abstraction to minimize the amount of custom input code compositors need to provide the common set of functionality that users expect.<br />
<br />
The X.Org input driver supports most regular [[Xorg#Input devices]]. Particularly notable is the project's goal to provide advanced support for touch (multitouch and gesture) features of touchpads and touchscreens. See the [http://wayland.freedesktop.org/libinput/doc/latest/pages.html project documentation] for more information.<br />
<br />
== Installation ==<br />
<br />
If you wish to use libinput under [[Wayland]], there is nothing to do for installation. The {{pkg|libinput}} package should already be installed as a dependency of any Wayland-capable graphical environment you use, and no additional driver is needed.<br />
<br />
If you wish to use libinput with [[Xorg]], [[install]] the {{Pkg|xf86-input-libinput}} package, which is "a thin wrapper around libinput and allows for libinput to be used for input devices in X. This driver can be used as a drop-in replacement for evdev and synaptics." [https://freedesktop.org/wiki/Software/libinput/] In other words, other packages used for input with X (i.e., those prefixed with {{ic|xf86-input-}}) can be replaced with this driver.<br />
<br />
You may also want to install {{Pkg|xorg-xinput}} to be able to change settings at runtime. <br />
<br />
== Configuration == <br />
<br />
For [[Wayland]], there is no libinput configuration file. The configurable options depend on the progress of your desktop environment's support for them; see [[#Graphical tools]].<br />
<br />
For [[Xorg]], a default configuration file for the wrapper is installed to {{ic|/usr/share/X11/xorg.conf.d/40-libinput.conf}}. No extra configuration is necessary for it to autodetect keyboards, touchpads, trackpointers and supported touchscreens.<br />
<br />
First, execute:<br />
# libinput-list-devices <br />
It will output the devices on the system and their respective features supported by libinput. <br />
<br />
After a [[restart]] of the graphical environment, the devices should be managed by libinput with default configuration, if no other drivers are configured to take precedence. <br />
<br />
See {{man|4|libinput|url=https://www.mankier.com/4/libinput}} for general options to set. The ''xinput'' tool is used to view or change options available for a particular device at runtime. For example: <br />
$ xinput list<br />
to view all devices and determine their numbers<br />
$ xinput list-props ''device-number'' <br />
to view and <br />
$ xinput set-prop ''device-number'' ''option-number'' ''setting'' <br />
to change a setting. <br />
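For example, to enable tap-to-click at runtime (the device number {{ic|12}} is illustrative, and the property name assumes the libinput driver is in use; verify both with the commands above):<br />

```shell
# Enable tapping on the touchpad with id 12 (example id);
# "libinput Tapping Enabled" is the property exposed by xf86-input-libinput,
# confirm its exact name with `xinput list-props 12`
xinput set-prop 12 "libinput Tapping Enabled" 1
```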
<br />
See [[Xorg#Using .conf files]] for permanent option settings. [[Logitech Marble Mouse#Using libinput]] and [[#Button re-mapping]] illustrate examples. <br />
<br />
Alternative drivers for [[Xorg#Input devices]] can generally be installed in parallel. If you intend to switch the driver for a device to libinput, ensure that no legacy configuration files in {{ic|/etc/X11/xorg.conf.d/}} for other drivers take precedence. <br />
{{Tip|If you have libinput and synaptics installed in parallel with default configuration (i.e. no files in {{ic|/etc/X11/xorg.conf.d}} for both), synaptics will take precedence due to its {{ic|70-synaptics.conf}} file name. To avoid this, you can symlink the default libinput configuration: <br />
# ln -s /usr/share/X11/xorg.conf.d/40-libinput.conf /etc/X11/xorg.conf.d/40-libinput.conf<br />
If you ''do'' have {{ic|/etc/X11/xorg.conf.d/}} configuration files for both, the libinput file must be ordered second; see [[Xorg#Using .conf files]].}}<br />
<br />
One way to check which devices are managed by libinput is the [[Xorg#General|xorg logfile]]. For example, the following:<br />
<br />
{{hc|$ grep -e "Using input driver 'libinput'" ''/path/to/Xorg.0.log''|<br />
[ 28.799] (II) Using input driver 'libinput' for 'Power Button'<br />
[ 28.847] (II) Using input driver 'libinput' for 'Video Bus'<br />
[ 28.853] (II) Using input driver 'libinput' for 'Power Button'<br />
[ 28.860] (II) Using input driver 'libinput' for 'Sleep Button'<br />
[ 28.872] (II) Using input driver 'libinput' for 'AT Translated Set 2 keyboard'<br />
[ 28.878] (II) Using input driver 'libinput' for 'SynPS/2 Synaptics TouchPad'<br />
[ 28.886] (II) Using input driver 'libinput' for 'TPPS/2 IBM TrackPoint'<br />
[ 28.895] (II) Using input driver 'libinput' for 'ThinkPad Extra Buttons'}}<br />
<br />
is a notebook without any configuration files in {{ic|/etc/X11/xorg.conf.d/}}, i.e. devices are autodetected. <br />
<br />
Of course you can elect to use an alternative driver for one device and libinput for others. A number of factors may influence which driver to use. For example, in comparison to [[Touchpad Synaptics]] the libinput driver has fewer options to customize touchpad behaviour to one's own taste, but far more programmatic logic to process multitouch events (e.g. palm detection as well). Hence, it makes sense to try the alternative, if you are experiencing problems on your hardware with one driver or the other.<br />
<br />
=== Common options ===<br />
<br />
Custom configuration files should be placed in {{ic|/etc/X11/xorg.conf.d/}}; following a widely used naming scheme, {{ic|30-touchpad.conf}} is often chosen as the filename.<br />
<br />
{{Tip|Have a look at {{ic|/usr/share/X11/xorg.conf.d/40-libinput.conf}} for guidance and refer to the {{man|4|libinput|url=https://www.mankier.com/4/libinput}} manual page for a detailed description of available configuration options.}}<br />
<br />
A basic configuration should have the following structure:<br />
{{hc|/etc/X11/xorg.conf.d/30-touchpad.conf|<br />
Section "InputClass"<br />
Identifier "devname"<br />
Driver "libinput"<br />
Option "Device" "devpath"<br />
...<br />
EndSection<br />
}}<br />
Where {{ic|devpath}} is the path to the device as given by {{ic|libinput-list-devices}}, e.g. {{ic|/dev/input/event16}}. You may define as many sections as you like in a single configuration file.<br />
To configure the device of your choice, specify a filter using {{ic|MatchIsPointer "on"}}, {{ic|MatchIsKeyboard "on"}}, {{ic|MatchIsTouchpad "on"}} or {{ic|MatchIsTouchscreen "on"}} and add your desired options. See {{man|4|libinput|url=https://www.mankier.com/4/libinput}} for more details. Common options include:<br />
* {{ic|"Tapping" "on"}}: tapping a.k.a. tap-to-click<br />
* {{ic|"ClickMethod" "clickfinger"}}: trackpad no longer has middle and right button areas and instead two-finger click is a context click and three-finger click is a middle click, see the [https://wayland.freedesktop.org/libinput/doc/latest/clickpad_softbuttons.html#clickfinger docs].<br />
* {{ic|"NaturalScrolling" "true"}}: natural (reverse) scrolling<br />
* {{ic|"ScrollMethod" "edge"}}: edge (vertical) scrolling<br />
Bear in mind that some of them may only apply to certain devices.<br />
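Putting the pieces together, a touchpad section enabling tapping and natural scrolling could look like this (a sketch; keep or drop options to taste):<br />

```
Section "InputClass"
    Identifier "touchpad"
    Driver "libinput"
    MatchIsTouchpad "on"
    Option "Tapping" "on"
    Option "NaturalScrolling" "true"
EndSection
```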
<br />
=== Graphical tools ===<br />
<br />
There are different GUI tools:<br />
<br />
* [[GNOME]]: <br />
** Control center has a basic UI. See [[GNOME#Mouse and touchpad]].<br />
* [[Cinnamon]]: <br />
** Similar to the GNOME UI, with more options.<br />
* [[KDE Plasma]] 5: <br />
** Basic options within Touchpad section (kcm_touchpad) in System Settings.<br />
** [https://github.com/amezin/pointing-devices-kcm pointing-devices-kcm] ({{AUR|kcm-pointing-devices-git}}) is a new and rewritten KCM for all input devices supported by libinput.<br />
<br />
== Tips and tricks ==<br />
<br />
=== Button re-mapping ===<br />
<br />
Swapping two- and three-finger taps on a touchpad is a straightforward example. Instead of the default three-finger tap for pasting, you can configure two-finger tap pasting by setting the {{ic|TappingButtonMap}} option in your [[Xorg]] configuration file. To set 1/2/3-finger taps to left/right/middle, set {{ic|TappingButtonMap}} to {{ic|lrm}}; for left/middle/right, set it to {{ic|lmr}}.<br />
<br />
{{hc|/etc/X11/xorg.conf.d/30-touchpad.conf|<br />
Section "InputClass"<br />
Identifier "touchpad"<br />
Driver "libinput"<br />
MatchIsTouchpad "on"<br />
Option "Tapping" "on"<br />
Option "TappingButtonMap" "lmr"<br />
EndSection}}<br />
<br />
Remember to remove {{ic|MatchIsTouchpad "on"}} if your device is not a touchpad and adjust the {{ic|Identifier}} accordingly.<br />
<br />
=== Manual button re-mapping ===<br />
<br />
For some devices it is desirable to change the button mapping. A common example is the use of a thumb button instead of the middle button (used in X11 for pasting) on mice where the middle button is part of the mouse wheel. You can query the current button mapping via:<br />
$ xinput get-button-map ''device''<br />
You can freely permute the button numbers and write them back. Example:<br />
$ xinput set-button-map ''device'' 1 6 3 4 5 0 7<br />
In this example, we mapped button 6 to be the middle button and disabled the original middle button by assigning it to button 0. <br />
This may also be used for [[Wayland]], but be aware both the ''device'' number and its button-map will be different. Hence, settings are not directly interchangeable. <br />
<br />
{{Tip|You can use ''xev'' (from the {{Pkg|xorg-xev}} package) to find out which physical button is currently mapped to which ID.}}<br />
<br />
Some devices occur several times under the same device name, with a different number of buttons exposed. The following is an example for reliably changing the button mapping for a Logitech Revolution MX mouse via [[xinitrc]]:<br />
<br />
{{hc|~/.xinitrc|<nowiki><br />
...<br />
for i in $(xinput list | grep "Logitech USB Receiver" | perl -n -e'/id=(\d+)/ && print "$1\n"')<br />
do if xinput get-button-map "$i" 2>/dev/null| grep -q 20; then<br />
xinput set-button-map "$i" 1 17 3 4 5 8 7 6 9 10 11 12 13 14 15 16 2 18 19 20<br />
fi<br />
done<br />
...</nowiki>}}<br />
<br />
=== Gestures ===<br />
<br />
While the libinput driver already contains logic to process advanced multitouch events like swipe and pinch [https://wayland.freedesktop.org/libinput/doc/latest/gestures.html gestures], the [[Desktop environment]] or [[Window manager]] might not have implemented actions for all of them yet. <br />
<br />
For [[w:Extended_Window_Manager_Hints|EWMH]] (see also [https://www.freedesktop.org/wiki/Specifications/wm-spec/ wm-spec]) compliant window managers, the [https://github.com/bulletmark/libinput-gestures libinput-gestures] utility can be used in the meantime. The program reads libinput gestures (through {{ic|libinput-debug-events}}) from the touchpad and maps them to actions according to a configuration file. Hence, it offers some flexibility within the boundaries of libinput's built-in recognition.<br />
<br />
To use [https://github.com/bulletmark/libinput-gestures libinput-gestures], install the {{Aur|libinput-gestures}} package. You can use the default system-wide configured swipe and pinch gestures or define your own in a personal configuration file, see the [https://github.com/bulletmark/libinput-gestures/blob/master/README.md README] for details.<br />
<br />
== Troubleshooting ==<br />
First, see whether the packaged ''libinput-debug-events'' tool can support you in debugging the problem. Executing {{ic|libinput-debug-events --help}} shows options it covers.<br />
<br />
Some inputs require kernel support. The tool ''evemu-describe'' from the {{Pkg|evemu}} package can be used to check: <br />
<br />
Compare the output of a [http://ix.io/m6b software-supported trackpad] with that of [https://github.com/whot/evemu-devices/blob/master/touchpads/SynPS2%20Synaptics%20TouchPad-with-scrollbuttons.events a supported trackpad], i.e. expect a couple of ABS_ axes, a couple of ABS_MT axes and no REL_X/Y axes. For a clickpad, the {{ic|INPUT_PROP_BUTTONPAD}} property should also be set, if it is supported.<br />
<br />
=== Touchpad not working in GNOME ===<br />
<br />
Ensure the touchpad events are being sent to the GNOME desktop by running the following command:<br />
$ gsettings set org.gnome.desktop.peripherals.touchpad send-events enabled<br />
<br />
Additionally, GNOME may override certain behaviors, like turning off ''Tapping'' and forcing ''Natural Scrolling''. In this case, the settings must be adapted using GNOME's {{ic|gsettings}} command line tool or a graphical frontend of your choice. For example, if you wish to enable ''Tapping'' and disable ''Natural Scrolling'' for your user, adjust the touchpad key-values as follows:<br />
$ gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click true<br />
$ gsettings set org.gnome.desktop.peripherals.touchpad natural-scroll false<br />
<br />
=== Touchpad settings not taking effect in KDE's Touchpad KCM ===<br />
<br />
KDE's Touchpad KCM has libinput support for [[Xorg]], but not all GUI settings are available yet. You may find that a setting such as ''Disable touchpad when typing'' has no effect and other options are greyed out. Until the support is extended, a workaround is to set the options manually with {{ic|xinput set-prop}}.<br />
<br />
=== Touchpad not detected at all ===<br />
<br />
If a touchpad device is not detected and shown as a device at all, a possible solution is to use one or more of the following kernel parameters:<br />
<br />
i8042.noloop i8042.nomux i8042.nopnp i8042.reset<br />
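How to apply these depends on the boot loader; with [[GRUB]], for instance, they can be appended to the {{ic|GRUB_CMDLINE_LINUX_DEFAULT}} line in {{ic|/etc/default/grub}} (a sketch; the existing {{ic|quiet}} value is illustrative, and it is worth testing one parameter at a time):<br />

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet i8042.nopnp i8042.reset"
```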
<br />
== See also == <br />
<br />
* [https://wayland.freedesktop.org/libinput/doc/latest/index.html libinput Wayland documentation]<br />
* [https://archive.fosdem.org/2015/schedule/event/libinput/attachments/slides/591/export/events/attachments/libinput/slides/591/libinput_xorg.pdf FOSDEM 2015 - libinput] - Hans de Goede on goals and plans of the project<br />
* [http://who-t.blogspot.com.au/ Peter Hutterer's Blog] - numerous posts on libinput from one of the project's hackers</div>IronOrionhttps://wiki.archlinux.org/index.php?title=Virtual_Machine&diff=428172Virtual Machine2016-03-27T23:19:53Z<p>IronOrion: Redirected page to Libvirt</p>
<hr />
<div>#REDIRECT [[Libvirt]]</div>IronOrionhttps://wiki.archlinux.org/index.php?title=Virtualization&diff=428171Virtualization2016-03-27T23:17:57Z<p>IronOrion: Redirected page to Libvirt</p>
<hr />
<div>#REDIRECT [[Libvirt]]</div>IronOrionhttps://wiki.archlinux.org/index.php?title=Libvirt&diff=428169Libvirt2016-03-27T22:24:16Z<p>IronOrion: /* UEFI Support */ Fixed a mistake in adding qemu to install list</p>
<hr />
<div>{{DISPLAYTITLE:libvirt}}<br />
[[Category:Virtualization]]<br />
[[ja:libvirt]]<br />
[[zh-CN:Libvirt]]<br />
{{Related articles start}}<br />
{{Related|:Category:Hypervisors}}<br />
{{Related|:PCI passthrough via OVMF}}<br />
{{Related articles end}}<br />
<br />
Libvirt is a collection of software that provides a convenient way to manage virtual machines and other virtualization functionality, such as storage and network interface management. These software pieces include a long-term stable C API, a daemon (libvirtd), and a command line utility (virsh). A primary goal of libvirt is to provide a single way to manage multiple different virtualization providers/hypervisors, such as the [[QEMU|KVM/QEMU]], [[Xen]], [[LXC]], [http://openvz.org OpenVZ] or [[VirtualBox]] [[:Category:Hypervisors|hypervisors]] ([http://libvirt.org/drivers.html among others]).<br />
<br />
Some of the major libvirt features are:<br />
*'''VM management''': Various domain lifecycle operations such as start, stop, pause, save, restore, and migrate. Hotplug operations for many device types including disk and network interfaces, memory, and CPUs.<br />
*'''Remote machine support''': All libvirt functionality is accessible on any machine running the libvirt daemon, including remote machines. A variety of network transports are supported for connecting remotely, with the simplest being SSH, which requires no extra explicit configuration.<br />
*'''Storage management''': Any host running the libvirt daemon can be used to manage various types of storage: create file images of various formats (qcow2, vmdk, raw, ...), mount NFS shares, enumerate existing LVM volume groups, create new LVM volume groups and logical volumes, partition raw disk devices, mount iSCSI shares, and much more.<br />
*'''Network interface management''': Any host running the libvirt daemon can be used to manage physical and logical network interfaces. Enumerate existing interfaces, as well as configure (and create) interfaces, bridges, vlans, and bond devices.<br />
*'''Virtual NAT and Route based networking''': Any host running the libvirt daemon can manage and create virtual networks. Libvirt virtual networks use firewall rules to act as a router, providing VMs transparent access to the host machine's network.<br />
<br />
== Installation ==<br />
<br />
Because of its daemon/client architecture, libvirt only needs to be installed on the machine that will host the virtualized system. Note that the server and client can be the same physical machine.<br />
<br />
=== Server ===<br />
<br />
[[Install]] the {{pkg|libvirt}} package, as well as at least one hypervisor:<br />
<br />
* As of 2015-02-01, {{ic|libvirtd}} '''requires''' {{Pkg|qemu}} to be installed on the system to start (see {{Bug|41888}}). Fortunately, the [http://libvirt.org/drvqemu.html libvirt KVM/QEMU driver] is the primary ''libvirt'' driver and if [[QEMU#Enabling_KVM|KVM is enabled]], fully virtualized, hardware-accelerated guests will be available. See the [[QEMU]] article for more information.<br />
<br />
* Other virtualization backends include [[LXC]], [[VirtualBox]] and [[Xen]]. See their respective page for installation instructions.<br />
:{{Note|The [http://libvirt.org/drvlxc.html libvirt LXC driver] has no dependency on the [[LXC]] userspace tools provided by {{Pkg|lxc}}, therefore there is no need to install it if planning on using this driver.}}<br />
:{{Warning|[[Xen]] support is available but not by default. You need to use the [[ABS]] to modify {{Pkg|libvirt}}'s [[PKGBUILD]] and build it without the {{ic|--without-xen}} option.}}<br />
<br />
Other supported hypervisors are listed [http://libvirt.org/drivers.html here].<br />
<br />
For network connectivity, install: <br />
<br />
* {{Pkg|ebtables}} '''and''' {{Pkg|dnsmasq}} for the [http://wiki.libvirt.org/page/VirtualNetworking#The_default_configuration default] NAT/DHCP networking.<br />
* {{Pkg|bridge-utils}} for bridged networking.<br />
* {{Pkg|openbsd-netcat}} for remote management over [[SSH]].<br />
<br />
=== Client ===<br />
<br />
The client is the user interface that will be used to manage and access the virtual machines.<br />
<br />
* ''virsh'' is a command line program for managing and configuring domains; it is included in the {{Pkg|libvirt}} package.<br />
* {{Pkg|virt-manager}} is a graphical user interface for managing virtual machines.<br />
* {{Pkg|virt-viewer}} is a lightweight interface for interacting with the graphical display of a virtualized guest OS.<br />
* {{Pkg|gnome-boxes}} is a simple GNOME 3 application to access remote or virtual systems.<br />
* {{AUR|virt-manager-qt5}}<br />
* {{AUR|libvirt-sandbox}} is an application sandbox toolkit.<br />
<br />
A list of libvirt-compatible software can be found [http://libvirt.org/apps.html here].<br />
<br />
== Configuration ==<br />
<br />
For '''''system'''''-level administration (i.e. global settings and image-''volume'' location), libvirt minimally requires [[#Set up authentication|setting up authorization]], and [[#Daemon|starting the daemon]].<br />
<br />
{{Note|For user-'''''session''''' administration, daemon setup and configuration is ''not'' required; authorization, however, is limited to local abilities; the front-end will launch a local instance of the '''libvirtd''' daemon.}}<br />
<br />
=== Set up authentication ===<br />
<br />
From [http://libvirt.org/auth.html#ACL_server_config libvirt: Connection authentication]:<br />
:The libvirt daemon allows the administrator to choose the authentication mechanisms used for client connections on each network socket independently. This is primarily controlled via the libvirt daemon master config file in {{ic|/etc/libvirt/libvirtd.conf}}. Each of the libvirt sockets can have its authentication mechanism configured independently. There is currently a choice of {{ic|none}}, {{ic|polkit}} and {{ic|sasl}}. <br />
<br />
Because {{Pkg|libvirt}} pulls {{Pkg|polkit}} as a dependency during installation, [[#Using polkit|polkit]] is used as the default value for the {{ic|unix_sock_auth}} parameter ([http://libvirt.org/auth.html#ACL_server_polkit source]). [[#Authenticate with file-based permissions|File-based permissions]] remain nevertheless available.<br />
<br />
==== Using polkit ====<br />
{{Note|A system reboot may be required before authenticating with {{ic|polkit}} works correctly.}}<br />
<br />
The ''libvirt'' daemon provides two [[Polkit#Actions|polkit actions]] in {{ic|/usr/share/polkit-1/actions/org.libvirt.unix.policy}}:<br />
* {{ic|org.libvirt.unix.manage}} for full management access (RW daemon socket), and<br />
* {{ic|org.libvirt.unix.monitor}} for monitoring only access (read-only socket).<br />
<br />
The default policy for the RW daemon socket requires authenticating as an admin. This is akin to [[sudo]] authentication, but it does not require that the client application ultimately run as root. The default policy still allows any application to connect to the RO socket.<br />
<br />
Arch defaults to considering anybody in the {{ic|wheel}} group an administrator: this is defined in {{ic|/etc/polkit-1/rules.d/50-default.rules}} (see [[Polkit#Administrator identities]]). Therefore, there is no need to create a new group and rule file '''if your user is a member of the {{ic|wheel}} group''': upon connection to the RW socket (e.g. via {{Pkg|virt-manager}}) you will be prompted for your user's password.<br />
<br />
{{Note|Prompting for a password relies on the presence of an [[Polkit#Authentication_agents|authentication agent]] on the system. Console users may face an issue with the default {{ic|pkttyagent}} agent which may or may not work properly.}}<br />
<br />
{{Tip|If you want to configure passwordless authentication, see [[Polkit#Bypass password prompt]].}}<br />
<br />
As of libvirt 1.2.16 (commit:[http://libvirt.org/git/?p=libvirt.git;a=commit;h=e94979e901517af9fdde358d7b7c92cc055dd50c]), members of the {{ic|libvirt}} group have passwordless access to the RW daemon socket by default. The easiest way to ensure your user has access is to ensure the libvirt group exists and they are a member of it. If you wish to change the group authorized to access the RW daemon socket to be the kvm group, create the following file:<br />
<br />
{{hc|/etc/polkit-1/rules.d/50-libvirt.rules|<nowiki><br />
/* Allow users in kvm group to manage the libvirt<br />
daemon without authentication */<br />
polkit.addRule(function(action, subject) {<br />
if (action.id == "org.libvirt.unix.manage" &&<br />
subject.isInGroup("kvm")) {<br />
return polkit.Result.YES;<br />
}<br />
});</nowiki><br />
}}<br />
<br />
Then [[Users_and_groups#Other_examples_of_user_management|add yourself]] to the {{ic|kvm}} group and relogin for the group change to take effect. You can replace ''kvm'' with any group of your preference; just make sure it exists and that your user is a member of it (see [[Users and groups]] for more information).<br />
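<br />
For example, assuming your user is named ''foo'' (an illustrative placeholder), group membership can be added with:<br />
<br />
# gpasswd -a foo kvm<br />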
<br />
==== Authenticate with file-based permissions ====<br />
<br />
To define file-based permissions for users in the ''libvirt'' group to manage virtual machines, uncomment and define:<br />
<br />
{{hc|/etc/libvirt/libvirtd.conf|<nowiki><br />
#unix_sock_group = "libvirt"<br />
#unix_sock_ro_perms = "0777" # set to 0770 to deny non-group libvirt users<br />
#unix_sock_rw_perms = "0770"<br />
#auth_unix_ro = "none"<br />
#auth_unix_rw = "none"<br />
</nowiki>}}<br />
<br />
While some guides suggest changing the permissions of certain libvirt directories to ease management, keep in mind that such permissions are lost on package updates. Editing these system directories should be done as the root user.<br />
<br />
=== Daemon ===<br />
<br />
[[Start]] both {{ic|libvirtd.service}} and {{ic|virtlogd.service}}. Optionally [[enable]] {{ic|libvirtd.service}}. There is no need to enable {{ic|virtlogd.service}}, since {{ic|libvirtd.service}}, when enabled, also enables the {{ic|virtlogd.socket}} and {{ic|virtlockd.socket}} [[Systemd#Using_units|units]].<br />
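<br />
For reference, with systemd this amounts to:<br />
<br />
# systemctl start libvirtd.service virtlogd.service<br />
# systemctl enable libvirtd.service<br />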
<br />
=== Unencrypted TCP/IP sockets ===<br />
<br />
{{Warning|This method is meant to improve remote domain connection speed on trusted networks. It is the least secure connection method and should ''only'' be used for testing or over a secure, private, and trusted network. SASL is not enabled here, so all TCP traffic is ''cleartext''. For real-world use, ''always'' enable SASL.}}<br />
<br />
Edit {{ic|/etc/libvirt/libvirtd.conf}}:<br />
{{hc|/etc/libvirt/libvirtd.conf|<nowiki><br />
listen_tls = 0<br />
listen_tcp = 1<br />
auth_tcp = "none"<br />
</nowiki>}}<br />
<br />
It is also necessary to start the server in listening mode by editing {{ic|/etc/conf.d/libvirtd}}:<br />
<br />
{{hc|/etc/conf.d/libvirtd|2=LIBVIRTD_ARGS="--listen"}}<br />
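<br />
A client can then connect over unencrypted TCP, for example (''hostname'' being a placeholder for the server's address):<br />
<br />
$ virsh --connect qemu+tcp://''hostname''/system<br />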
<br />
== Test ==<br />
<br />
To test if libvirt is working properly on a ''system'' level:<br />
<br />
$ virsh -c qemu:///system<br />
<br />
To test if libvirt is working properly for a user-''session'':<br />
<br />
$ virsh -c qemu:///session<br />
<br />
== Management ==<br />
<br />
Libvirt management is done mostly with three tools: {{Pkg|virt-manager}} (GUI), {{ic|virsh}}, and {{ic|guestfish}} (which is part of {{AUR|libguestfs}}).<br />
<br />
=== virsh ===<br />
<br />
The ''virsh'' program is for managing guest ''domains'' (virtual machines) and works well for scripting virtualization administration. Though most ''virsh'' commands require root privileges due to the communication channels used to talk to the hypervisor, typical management, creation, and running of domains (like those done with VirtualBox) can be done as a regular user.<br />
<br />
Virsh includes an interactive terminal that can be entered if no commands are passed (options are allowed though): {{ic|virsh}}. The interactive terminal has support for tab completion.<br />
<br />
From the command line:<br />
<br />
$ virsh [option] <command> [argument]...<br />
<br />
From the interactive terminal:<br />
<br />
virsh # <command> [argument]...<br />
<br />
Help is available:<br />
<br />
$ virsh help [option*] or [group-keyword*]<br />
<br />
=== Storage pools ===<br />
<br />
A pool is a location where storage ''volumes'' can be kept. What libvirt calls ''volumes'' others may call "virtual disks" or "virtual machine images". Pool locations may be a directory, a network filesystem, or a partition (this includes [[LVM]]). Pools can be toggled active or inactive, and space can be allocated for them.<br />
<br />
On the ''system''-level, {{ic|/var/lib/libvirt/images/}} will be activated by default; on a user-''session'', {{ic|virt-manager}} creates {{ic|$HOME/VirtualMachines}}.<br />
<br />
Print active and inactive storage pools:<br />
<br />
$ virsh pool-list --all<br />
<br />
==== Create a new pool using virsh ====<br />
<br />
To ''add'' a storage pool, here are examples of the general command form, adding a directory, and adding an LVM volume:<br />
<br />
$ virsh pool-define-as name type [source-host] [source-path] [source-dev] [source-name] [<target>] [--source-format format]<br />
$ virsh pool-define-as ''poolname'' dir - - - - /home/''username''/.local/libvirt/images<br />
$ virsh pool-define-as ''poolname'' fs - - ''/dev/vg0/images'' - ''mntpoint''<br />
<br />
The above commands define the information for the pool; to build it, start it, and have it start automatically:<br />
<br />
$ virsh pool-build ''poolname''<br />
$ virsh pool-start ''poolname''<br />
$ virsh pool-autostart ''poolname''<br />
<br />
To remove it:<br />
<br />
$ virsh pool-undefine ''poolname''<br />
<br />
{{Tip|For LVM storage pools:<br />
* It is good practice to dedicate a volume group to the storage pool only.<br />
* Choose an LVM volume group name that differs from the pool name; otherwise, when the storage pool is deleted, the LVM group will be deleted too.<br />
}}<br />
<br />
==== Create a new pool using virt-manager ====<br />
<br />
First, connect to a hypervisor (e.g. QEMU/KVM ''system'', or user-''session''). Then, right-click on a connection and select ''Details''; select the ''Storage'' tab, push the ''+'' button on the lower-left, and follow the wizard.<br />
<br />
=== Storage volumes ===<br />
<br />
Once the pool has been created, volumes can be created inside the pool. ''If building a new domain (virtual machine), this step can be skipped as a volume can be created in the domain creation process.''<br />
<br />
==== Create a new volume with virsh ====<br />
<br />
Create volume, list volumes, resize, and delete:<br />
$ virsh vol-create-as ''poolname'' ''volumename'' 10GiB --format bochs|raw|qcow|qcow2|vmdk<br />
$ virsh vol-upload --pool ''poolname'' ''volumename'' ''volumepath''<br />
$ virsh vol-list ''poolname''<br />
$ virsh vol-resize --pool ''poolname'' ''volumename'' 12GiB<br />
$ virsh vol-delete --pool ''poolname'' ''volumename''<br />
$ virsh vol-dumpxml --pool ''poolname'' ''volumename'' # for details.<br />
<br />
==== virt-manager backing store type bug ====<br />
<br />
On newer versions of {{ic|virt-manager}} you can specify a backing store to use when creating a new disk. This is very useful, in that you can base new domains on base images, saving you both time and disk space when provisioning new virtual systems. There is a bug ([https://bugzilla.redhat.com/show_bug.cgi?id=1235406 1235406]) in the current version of {{ic|virt-manager}} which causes it to choose the wrong type for the backing image when the backing image is of type {{ic|qcow2}}: it will errantly pick {{ic|raw}} as the backing type. This causes the new image to be unable to read from the backing store, effectively removing the utility of having a backing store at all.<br />
<br />
There is a workaround for this issue. {{ic|qemu-img}} has long been able to do this operation directly. If you wish to have a backing store for your new domain before this bug is fixed, you may use the following command.<br />
<br />
$ qemu-img create -f qcow2 -o backing_file=<path to backing image>,backing_fmt=qcow2 <disk name> <disk size><br />
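<br />
For example, assuming a hypothetical base image {{ic|/vms/base.qcow2}} and a new 20G disk {{ic|/vms/new.qcow2}}:<br />
<br />
$ qemu-img create -f qcow2 -o backing_file=/vms/base.qcow2,backing_fmt=qcow2 /vms/new.qcow2 20G<br />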
<br />
Then you can use this image as the base for your new domain, and it will use the backing store as a copy-on-write volume, saving you time and disk space.<br />
<br />
=== Domains ===<br />
<br />
Virtual machines are called ''domains''. If working from the command line, use {{ic|virsh}} to list, create, pause, shutdown domains, etc. {{ic|virt-viewer}} can be used to view domains started with {{ic|virsh}}. Creation of domains is typically done either graphically with {{ic|virt-manager}} or with {{ic|virt-install}} (a command line program that is part of the {{pkg|virt-manager}} package).<br />
<br />
Creating a new domain typically involves using some installation media, such as an {{ic|.iso}} from the storage pool or an optical drive.<br />
<br />
Print active and inactive domains:<br />
<br />
# virsh list --all<br />
<br />
{{note|[[SELinux]] has a built-in exemption for libvirt that allows volumes in {{ic|/var/lib/libvirt/images/}} to be accessed. If using SELinux and there are issues with the volumes, ensure that volumes are in that directory, or ensure that other storage pools are correctly labeled.}}<br />
<br />
==== Create a new domain using virt-install ====<br />
<br />
For an extremely detailed domain (virtual machine) setup, it is easier to [[#Create a new domain using virt-manager|use virt-manager]]. However, the basics can easily be done with {{ic|virt-install}} and still run quite well. The minimum specifications are {{ic|--name}}, {{ic|--memory}}, guest storage ({{ic|--disk}}, {{ic|--filesystem}}, or {{ic|--nodisks}}), and an install method (generally an {{ic|.iso}} or CD). See {{ic|man virt-install}} for more details and information about unlisted options.<br />
<br />
Arch Linux install (creating a two GiB qcow2-format volume; user networking):<br />
<br />
$ virt-install \<br />
--name arch-linux_testing \<br />
--memory 1024 \<br />
--vcpus=2,maxvcpus=4 \<br />
--cpu host \<br />
--cdrom $HOME/Downloads/arch-linux_install.iso \<br />
--disk size=2,format=qcow2 \<br />
--network user \<br />
--virt-type kvm<br />
<br />
Fedora testing (Xen hypervisor, non-default pool, no console at install time):<br />
<br />
$ virt-install \<br />
--connect xen:/// \<br />
--name fedora-testing \<br />
--memory 2048 \<br />
--vcpus=2 \<br />
--cpu=host \<br />
--cdrom /tmp/fedora20_x86_64.iso \<br />
--os-type=linux --os-variant=fedora20 \<br />
--disk pool=testing,size=4 \<br />
--network bridge=br0 \<br />
--graphics=vnc \<br />
--noautoconsole<br />
$ virt-viewer --connect xen:/// fedora-testing<br />
<br />
Windows:<br />
<br />
$ virt-install \<br />
--name=windows7 \<br />
--memory 2048 \<br />
--cdrom /dev/sr0 \<br />
--os-variant=win7 \<br />
--disk /mnt/storage/domains/windows7.qcow2,size=20 \<br />
--network network=vm-net \<br />
--graphics spice<br />
<br />
{{Tip|Run {{ic|1=osinfo-query --fields=name,short-id,version os}} to get the argument for {{ic|--os-variant}}; this will help define some specifications for the domain. However, {{ic|--memory}} and {{ic|--disk}} will still need to be entered; one can look within the appropriate {{ic|/usr/share/libosinfo/db/oses/''os''.xml}} if these specifications are needed. After installing, it will likely be preferable to install the [http://www.spice-space.org/download.html Spice Guest Tools], which include the [https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/form-Virtualization_Host_Configuration_and_Guest_Installation_Guide-Para_virtualized_drivers-Mounting_the_image_with_virt_manager.html VirtIO drivers]. For a Windows VirtIO network driver there is also {{Aur|virtio-win}}. These drivers are referenced by a {{ic|1=<model type='virtio' />}} in the guest's {{ic|.xml}} configuration section for the device. A bit more information can also be found in the [[QEMU#Preparing_a_Windows_guest|QEMU article]].}}<br />
<br />
Import existing volume:<br />
<br />
$ virt-install \<br />
--name demo \<br />
--memory 512 \<br />
--disk /home/user/VMs/mydisk.img \<br />
--import<br />
<br />
==== Create a new domain using virt-manager ====<br />
<br />
First, connect to the hypervisor (e.g. QEMU/KVM ''system'' or user ''session''), right click on a connection and select ''New'', and follow the wizard.<br />
<br />
* On the ''fourth step'', de-selecting ''Allocate entire disk now'' will make setup quicker and can save disk space in the interim; ''however'', it may cause volume fragmentation over time.<br />
* On the ''fifth step'', open ''Advanced options'' and make sure that ''Virt Type'' is set to ''kvm'' (this is usually the preferred method). If additional hardware setup is required, select the ''Customize configuration before install'' option.<br />
<br />
==== Manage a domain ====<br />
<br />
Start a domain:<br />
<br />
$ virsh start ''domain''<br />
$ virt-viewer --connect qemu:///session ''domain''<br />
<br />
Gracefully attempt to shutdown a domain; force off a domain:<br />
<br />
$ virsh shutdown ''domain''<br />
$ virsh destroy ''domain''<br />
<br />
Autostart domain on libvirtd start:<br />
<br />
$ virsh autostart ''domain''<br />
$ virsh autostart ''domain'' --disable<br />
<br />
Shutdown domain on host shutdown:<br />
<br />
: Running domains can be automatically suspended or shut down at host shutdown using the {{ic|libvirt-guests.service}} systemd service. The same service will resume or start the suspended/shut-down domains automatically at host startup. See {{ic|/etc/conf.d/libvirt-guests}} for the service options.<br />
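<br />
As a sketch, some of the options that file accepts (the values shown are illustrative, not required defaults):<br />
<br />
{{hc|/etc/conf.d/libvirt-guests|<nowiki><br />
ON_BOOT=start<br />
ON_SHUTDOWN=suspend<br />
SHUTDOWN_TIMEOUT=120<br />
</nowiki>}}<br />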
<br />
Edit a domain's XML configuration:<br />
<br />
$ virsh edit ''domain''<br />
<br />
{{note|Virtual machines started directly by QEMU are not manageable by libvirt tools.}}<br />
<br />
=== Networks ===<br />
<br />
A [https://jamielinux.com/docs/libvirt-networking-handbook/ decent overview of libvirt networking].<br />
<br />
By default, when the {{ic|libvirtd}} systemd service is started, a NAT bridge called ''default'' is created to allow external network connectivity (warning, see: [[#"default" network bug]]). For other network connectivity needs, four network types exist that a domain can be connected to:<br />
<br />
* bridge — a virtual device that shares data directly with a physical interface. Use this if the host has ''static'' networking, it does not need to connect to other domains, the domain requires full inbound and outbound traffic, and the domain is running on a ''system'' level. See [[Network bridge]] on how to add a bridge in addition to the default one. After creation, it needs to be specified in the respective guest's {{ic|.xml}} configuration file.<br />
* network — a virtual network; has ability to share with other domains. Use a virtual network if the host has ''dynamic'' networking (e.g. NetworkManager), or using wireless.<br />
* macvtap — connect directly to a host physical interface.<br />
* user — local networking only. Use this only for a user ''session''.<br />
<br />
{{ic|virsh}} is able to create networks with numerous options; however, for most users it is easier to create network connectivity with a graphical user interface (like {{ic|virt-manager}}), or to do so on [[#Create a new domain using virt-install|creation with virt-install]].<br />
<br />
{{note|libvirt handles DHCP and DNS with {{pkg|dnsmasq}}, launching a separate instance for every virtual network. It also adds iptables rules for proper routing, and enables the {{ic|ip_forward}} kernel parameter.}}<br />
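<br />
As a sketch, a NAT network similar to ''default'' can also be defined from an XML description with {{ic|virsh}} (names and addresses here are illustrative):<br />
<br />
{{hc|virtnet.xml|<nowiki><br />
<network><br />
  <name>virtnet</name><br />
  <forward mode='nat'/><br />
  <bridge name='virbr1'/><br />
  <ip address='192.168.101.1' netmask='255.255.255.0'><br />
    <dhcp><br />
      <range start='192.168.101.2' end='192.168.101.254'/><br />
    </dhcp><br />
  </ip><br />
</network><br />
</nowiki>}}<br />
<br />
$ virsh net-define virtnet.xml<br />
$ virsh net-start virtnet<br />
$ virsh net-autostart virtnet<br />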
<br />
=== Snapshots ===<br />
<br />
Snapshots take the disk, memory, and device state of a domain at a point-of-time, and save it for future use. They have many uses, from saving a "clean" copy of an OS image to saving a domain's state before a potentially destructive operation. Snapshots are identified with a unique name.<br />
<br />
Snapshots are saved within the volume itself, which must be in qcow2 or raw format. Snapshots use deltas, so they may not take up much space.<br />
<br />
==== Create a snapshot ====<br />
<br />
{{Out of date|Some of this data appears to be dated.}}<br />
<br />
Once a snapshot is taken, it is saved as a new block device and the original image is taken offline. Snapshots can be chosen from and also merged into another (even without shutting down the domain).<br />
<br />
Print a running domain's volumes (running domains can be printed with {{ic|virsh list}}):<br />
<br />
{{hc|# virsh domblklist ''domain''|<nowiki><br />
Target Source<br />
------------------------------------------------<br />
vda /vms/domain.img<br />
</nowiki>}}<br />
<br />
To see a volume's physical properties:<br />
<br />
{{hc|# qemu-img info /vms/domain.img|<nowiki><br />
image: /vms/domain.img<br />
file format: qcow2<br />
virtual size: 50G (53687091200 bytes)<br />
disk size: 2.1G<br />
cluster_size: 65536<br />
</nowiki>}}<br />
<br />
Create a disk-only snapshot (the option {{ic|--atomic}} will prevent the volume from being modified if snapshot creation fails):<br />
<br />
# virsh snapshot-create-as ''domain'' snapshot1 --disk-only --atomic<br />
<br />
List snapshots:<br />
<br />
{{hc|# virsh snapshot-list ''domain''|<nowiki><br />
Name Creation Time State<br />
------------------------------------------------------------<br />
snapshot1 2012-10-21 17:12:57 -0700 disk-snapshot<br />
</nowiki>}}<br />
<br />
One can then copy the original image with {{ic|1=cp --sparse=true}} or {{ic|rsync -S}} and then merge the original back into the snapshot:<br />
<br />
# virsh blockpull --domain ''domain'' --path /vms/''domain''.snapshot1<br />
<br />
{{ic|domain.snapshot1}} becomes a new volume. After this is done, the original volume ({{ic|domain.img}}) and the snapshot metadata can be deleted. {{ic|virsh blockcommit}} would work in the opposite direction to {{ic|blockpull}}, but it seems to be currently under development (including the {{ic|snapshot-revert}} feature), scheduled to be released sometime next year.<br />
<br />
=== Other management ===<br />
<br />
Connect to non-default hypervisor:<br />
<br />
$ virsh --connect xen:///<br />
virsh # uri<br />
xen:///<br />
<br />
Connect to the QEMU hypervisor over SSH; and the same with logging:<br />
<br />
$ virsh --connect qemu+ssh://''username''@''host''/system<br />
$ LIBVIRT_DEBUG=1 virsh --connect qemu+ssh://''username''@''host''/system<br />
<br />
Connect a graphic console over SSH:<br />
<br />
$ virt-viewer --connect qemu+ssh://''username''@''host''/system ''domain''<br />
$ virt-manager --connect qemu+ssh://''username''@''host''/system ''domain''<br />
<br />
{{Note|If you are having problems connecting to a remote RHEL server (or anything other than Arch, really), try the two workarounds mentioned in {{bug|30748}} and {{bug|22068}}.}}<br />
<br />
Connect to the VirtualBox hypervisor (''VirtualBox support in libvirt is not stable yet and may cause libvirtd to crash''):<br />
<br />
$ virsh --connect vbox:///system<br />
<br />
Network configurations:<br />
<br />
$ virsh -c qemu:///system net-list --all<br />
$ virsh -c qemu:///system net-dumpxml default<br />
<br />
== Python connectivity code ==<br />
<br />
The {{Pkg|libvirt-python}} package provides a {{Pkg|python2}} API in {{ic|/usr/lib/python2.7/site-packages/libvirt.py}}.<br />
<br />
General examples are given in {{ic|/usr/share/doc/libvirt-python-''your_libvirt_version''/examples/}}<br />
<br />
Unofficial example using {{Pkg|qemu}} and {{Pkg|openssh}}:<br />
<br />
 #! /usr/bin/env python2<br />
 # -*- coding: utf-8 -*-<br />
 import socket<br />
 import sys<br />
 import libvirt<br />
 if (__name__ == "__main__"):<br />
     <nowiki>conn = libvirt.open("qemu+ssh://xxx/system")</nowiki><br />
     print "Trying to find node on xxx"<br />
     domains = conn.listDomainsID()<br />
     for domainID in domains:<br />
         domConnect = conn.lookupByID(domainID)<br />
         if domConnect.name() == 'xxx-node':<br />
             print "Found shared node on xxx with ID " + str(domainID)<br />
             domServ = domConnect<br />
             break<br />
<br />
== UEFI support ==<br />
<br />
For UEFI support you need to install the OVMF firmware from [https://www.kraxel.org/repos/jenkins/edk2/ Gerd Hoffmann's repository].<br />
<br />
[[Install]] {{Pkg|rpmextract}}, then download the edk2.git-ovmf-x64 package and extract it to {{ic|/usr}}:<br />
<br />
{{bc|# rpmextract.sh edk2.git-ovmf-x64-0-20150223.b877.ga8577b3.noarch.rpm<br />
# cp -R ./usr/share/* /usr/share}}<br />
<br />
Then you will have to point libvirt at the OVMF firmware images in {{ic|/etc/libvirt/qemu.conf}} by setting the following details:<br />
<br />
{{hc|/etc/libvirt/qemu.conf|<nowiki><br />
nvram = [<br />
 "/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd:/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd",<br />
]<br />
</nowiki>}}<br />
<br />
Then [[restart]] {{ic|libvirtd.service}}.<br />
<br />
== See also ==<br />
<br />
* [http://libvirt.org/drvqemu.html Official libvirt web site]<br />
* [https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/index.html Red Hat Virtualization Deployment and Administration Guide]<br />
* [https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/index.html Red Hat Virtualization Tuning and Optimization Guide]<br />
* [http://docs.slackware.com/howtos:general_admin:kvm_libvirt Slackware KVM and libvirt]<br />
* [http://www-01.ibm.com/support/knowledgecenter/linuxonibm/liaat/liaatkvm.htm IBM KVM]<br />
* [https://jamielinux.com/docs/libvirt-networking-handbook/ libvirt Networking Handbook]</div>IronOrionhttps://wiki.archlinux.org/index.php?title=Libvirt&diff=428168Libvirt2016-03-27T22:21:56Z<p>IronOrion: Added a guide to setting up UEFI/OVMF taking information from the article PCI passthrough via OVMF for virt-manager</p>
<hr />
<div>{{DISPLAYTITLE:libvirt}}<br />
[[Category:Virtualization]]<br />
[[ja:libvirt]]<br />
[[zh-CN:Libvirt]]<br />
{{Related articles start}}<br />
{{Related|:Category:Hypervisors}}<br />
{{Related|:PCI passthrough via OVMF}}<br />
{{Related articles end}}<br />
<br />
Libvirt is a collection of software that provides a convenient way to manage virtual machines and other virtualization functionality, such as storage and network interface management. These software pieces include a long-term stable C API, a daemon (libvirtd), and a command line utility (virsh). A primary goal of libvirt is to provide a single way to manage multiple different virtualization providers/hypervisors, such as the [[QEMU|KVM/QEMU]], [[Xen]], [[LXC]], [http://openvz.org OpenVZ] or [[VirtualBox]] [[:Category:Hypervisors|hypervisors]] ([http://libvirt.org/drivers.html among others]).<br />
<br />
Some of the major libvirt features are:<br />
*'''VM management''': Various domain lifecycle operations such as start, stop, pause, save, restore, and migrate. Hotplug operations for many device types including disk and network interfaces, memory, and CPUs.<br />
*'''Remote machine support''': All libvirt functionality is accessible on any machine running the libvirt daemon, including remote machines. A variety of network transports are supported for connecting remotely, with the simplest being SSH, which requires no extra explicit configuration.<br />
*'''Storage management''': Any host running the libvirt daemon can be used to manage various types of storage: create file images of various formats (qcow2, vmdk, raw, ...), mount NFS shares, enumerate existing LVM volume groups, create new LVM volume groups and logical volumes, partition raw disk devices, mount iSCSI shares, and much more.<br />
*'''Network interface management''': Any host running the libvirt daemon can be used to manage physical and logical network interfaces. Enumerate existing interfaces, as well as configure (and create) interfaces, bridges, vlans, and bond devices.<br />
*'''Virtual NAT and Route based networking''': Any host running the libvirt daemon can manage and create virtual networks. Libvirt virtual networks use firewall rules to act as a router, providing VMs transparent access to the host machines network.<br />
<br />
== Installation ==<br />
<br />
Because of its daemon/client architecture, libvirt only needs to be installed on the machine which will host the virtualized system. Note that the server and the client can be the same physical machine.<br />
<br />
=== Server ===<br />
<br />
[[Install]] the {{pkg|libvirt}} package, as well as at least one hypervisor:<br />
<br />
* As of 2015-02-01, {{ic|libvirtd}} '''requires''' {{Pkg|qemu}} to be installed on the system to start (see {{Bug|41888}}). Fortunately, the [http://libvirt.org/drvqemu.html libvirt KVM/QEMU driver] is the primary ''libvirt'' driver and if [[QEMU#Enabling_KVM|KVM is enabled]], fully virtualized, hardware accelerated guests will be available. See the [[QEMU]] article for more information.<br />
<br />
* Other virtualization backends include [[LXC]], [[VirtualBox]] and [[Xen]]. See their respective page for installation instructions.<br />
:{{Note|The [http://libvirt.org/drvlxc.html libvirt LXC driver] has no dependency on the [[LXC]] userspace tools provided by {{Pkg|lxc}}, therefore there is no need to install it if planning on using this driver.}}<br />
:{{Warning|[[Xen]] support is available but not by default. You need to use the [[ABS]] to modify {{Pkg|libvirt}}'s [[PKGBUILD]] and build it without the {{ic|--without-xen}} option.}}<br />
<br />
Other supported hypervisors are listed [http://libvirt.org/drivers.html here].<br />
<br />
For network connectivity, install: <br />
<br />
* {{Pkg|ebtables}} '''and''' {{Pkg|dnsmasq}} for the [http://wiki.libvirt.org/page/VirtualNetworking#The_default_configuration default] NAT/DHCP networking.<br />
* {{Pkg|bridge-utils}} for bridged networking.<br />
* {{Pkg|openbsd-netcat}} for remote management over [[SSH]].<br />
<br />
=== Client ===<br />
<br />
The client is the user interface that will be used to manage and access the virtual machines.<br />
<br />
* ''virsh'' is a command line program for managing and configuring domains; it is included in the {{Pkg|libvirt}} package.<br />
* {{Pkg|virt-manager}} is a graphical user interface for managing virtual machines.<br />
* {{Pkg|virtviewer}} is a lightweight interface for interacting with the graphical display of virtualized guest OS.<br />
* {{Pkg|gnome-boxes}} is a simple GNOME 3 application to access remote or virtual systems.<br />
* {{AUR|virt-manager-qt5}}<br />
* {{AUR|libvirt-sandbox}} is an application sandbox toolkit.<br />
<br />
A list of libvirt-compatible software can be found [http://libvirt.org/apps.html here].<br />
<br />
== Configuration ==<br />
<br />
For '''''system'''''-level administration (i.e. global settings and image-''volume'' location), libvirt minimally requires [[#Set up authentication|setting up authorization]], and [[#Daemon|starting the daemon]].<br />
<br />
{{Note|For user-'''''session''''' administration, daemon setup and configuration is ''not'' required; authorization, however, is limited to local abilities; the front-end will launch a local instance of the '''libvirtd''' daemon.}}<br />
<br />
=== Set up authentication ===<br />
<br />
From [http://libvirt.org/auth.html#ACL_server_config libvirt: Connection authentication]:<br />
:The libvirt daemon allows the administrator to choose the authentication mechanisms used for client connections on each network socket independently. This is primarily controlled via the libvirt daemon master config file in {{ic|/etc/libvirt/libvirtd.conf}}. Each of the libvirt sockets can have its authentication mechanism configured independently. There is currently a choice of {{ic|none}}, {{ic|polkit}} and {{ic|sasl}}. <br />
<br />
Because {{Pkg|libvirt}} pulls {{Pkg|polkit}} as a dependency during installation, [[#Using polkit|polkit]] is used as the default value for the {{ic|unix_sock_auth}} parameter ([http://libvirt.org/auth.html#ACL_server_polkit source]). [[#Authenticate with file-based permissions|File-based permissions]] remain nevertheless available.<br />
<br />
==== Using polkit ====<br />
{{Note|A system reboot may be required before authenticating with {{ic|polkit}} works correctly.}}<br />
<br />
The ''libvirt'' daemon provides two [[Polkit#Actions|polkit actions]] in {{ic|/usr/share/polkit-1/actions/org.libvirt.unix.policy}}:<br />
* {{ic|org.libvirt.unix.manage}} for full management access (RW daemon socket), and<br />
* {{ic|org.libvirt.unix.monitor}} for monitoring only access (read-only socket).<br />
<br />
The default policy for the RW daemon socket will require to authenticate as an admin. This is akin to [[sudo]] auth, but does not require that the client application ultimately run as root. Default policy will still allow any application to connect to the RO socket.<br />
<br />
Arch defaults to consider anybody in the {{ic|wheel}} group as an administrator: this is defined in {{ic|/etc/polkit-1/rules.d/50-default.rules}} (see [[Polkit#Administrator identities]]). Therefore there is no need to create a new group and rule file '''if your user is a member of the {{ic|wheel}} group''': upon connection to the RW socket (e.g. via {{Pkg|virt-manager}}) you will be prompted for your user's password.<br />
<br />
{{Note|Prompting for a password relies on the presence of an [[Polkit#Authentication_agents|authentication agent]] on the system. Console users may face an issue with the default {{ic|pkttyagent}} agent which may or may not work properly.}}<br />
<br />
{{Tip|If you want to configure passwordless authentication, see [[Polkit#Bypass password prompt]].}}<br />
<br />
As of libvirt 1.2.16 (commit:[http://libvirt.org/git/?p=libvirt.git;a=commit;h=e94979e901517af9fdde358d7b7c92cc055dd50c]), members of the {{ic|libvirt}} group have passwordless access to the RW daemon socket by default. The easiest way to ensure your user has access is to ensure the libvirt group exists and they are a member of it. If you wish to change the group authorized to access the RW daemon socket to be the kvm group, create the following file:<br />
<br />
{{hc|/etc/polkit-1/rules.d/50-libvirt.rules|<nowiki><br />
/* Allow users in kvm group to manage the libvirt<br />
daemon without authentication */<br />
polkit.addRule(function(action, subject) {<br />
if (action.id == "org.libvirt.unix.manage" &&<br />
subject.isInGroup("kvm")) {<br />
return polkit.Result.YES;<br />
}<br />
});</nowiki><br />
}}<br />
<br />
Then [[Users_and_groups#Other_examples_of_user_management|add yourself]] to the {{ic|kvm}} group. Replace ''kvm'' with any group of your preference; just make sure it exists and that your user is a member of it (see [[Users and groups]] for more information).<br />
<br />
Do not forget to relogin for group changes to take effect.<br />
<br />
==== Authenticate with file-based permissions ====<br />
<br />
To define file-based permissions for users in the ''libvirt'' group to manage virtual machines, uncomment and define:<br />
<br />
{{hc|/etc/libvirt/libvirtd.conf|<nowiki><br />
#unix_sock_group = "libvirt"<br />
#unix_sock_ro_perms = "0777" # set to 0770 to deny non-group libvirt users<br />
#unix_sock_rw_perms = "0770"<br />
#auth_unix_ro = "none"<br />
#auth_unix_rw = "none"<br />
</nowiki>}}<br />
<br />
While some guides mention changing the permissions of certain libvirt directories to ease management, keep in mind that such permissions are lost on package updates. Editing these system directories is expected to be done as root.<br />
<br />
=== Daemon ===<br />
<br />
[[Start]] both {{ic|libvirtd.service}} and {{ic|virtlogd.service}}. Optionally [[enable]] {{ic|libvirtd.service}}. There is no need to enable {{ic|virtlogd.service}}, since {{ic|libvirtd.service}}, when enabled, also enables the {{ic|virtlogd.socket}} and {{ic|virtlockd.socket}} [[Systemd#Using_units|units]].<br />
<br />
=== Unencrypt TCP/IP sockets ===<br />
<br />
{{Warning|This method disables encryption in order to improve connection speed to remote domains on trusted networks. It is the least secure connection method and should ''only'' be used for testing, or over a secure, private, and trusted network. SASL is not enabled here, so all TCP traffic is ''cleartext''. For real-world use, ''always'' enable SASL.}}<br />
<br />
Edit {{ic|/etc/libvirt/libvirtd.conf}}:<br />
{{hc|/etc/libvirt/libvirtd.conf|<nowiki><br />
listen_tls = 0<br />
listen_tcp = 1<br />
auth_tcp = "none"<br />
</nowiki>}}<br />
<br />
It is also necessary to start the server in listening mode by editing {{ic|/etc/conf.d/libvirtd}}:<br />
<br />
{{hc|/etc/conf.d/libvirtd|2=LIBVIRTD_ARGS="--listen"}}<br />
<br />
== Test ==<br />
<br />
To test if libvirt is working properly on a ''system'' level:<br />
<br />
$ virsh -c qemu:///system<br />
<br />
To test if libvirt is working properly for a user-''session'':<br />
<br />
$ virsh -c qemu:///session<br />
<br />
== Management ==<br />
<br />
Libvirt management is done mostly with three tools: {{Pkg|virt-manager}} (GUI), {{ic|virsh}}, and {{ic|guestfish}} (which is part of {{AUR|libguestfs}}).<br />
<br />
=== virsh ===<br />
<br />
The virsh program is for managing guest ''domains'' (virtual machines) and works well for scripting virtualization administration. Though most virsh commands require root privileges to run due to the communication channels used to talk to the hypervisor, typical management, creation, and running of domains (like that done with VirtualBox) can be done as a regular user.<br />
<br />
Virsh includes an interactive terminal that can be entered if no commands are passed (options are allowed though): {{ic|virsh}}. The interactive terminal has support for tab completion.<br />
<br />
From the command line:<br />
<br />
$ virsh [option] <command> [argument]...<br />
<br />
From the interactive terminal:<br />
<br />
virsh # <command> [argument]...<br />
<br />
Help is available:<br />
<br />
$ virsh help [option*] or [group-keyword*]<br />
<br />
=== Storage pools ===<br />
<br />
A pool is a location where storage ''volumes'' can be kept. What libvirt calls ''volumes'', others may call "virtual disks" or "virtual machine images". Pool locations may be a directory, a network filesystem, or a partition (this includes [[LVM]]). Pools can be toggled active or inactive and allocated space.<br />
<br />
On the ''system''-level, {{ic|/var/lib/libvirt/images/}} will be activated by default; on a user-''session'', {{ic|virt-manager}} creates {{ic|$HOME/VirtualMachines}}.<br />
<br />
Print active and inactive storage pools:<br />
<br />
$ virsh pool-list --all<br />
<br />
==== Create a new pool using virsh ====<br />
<br />
To ''add'' a storage pool, here are examples of the general command form, followed by examples adding a directory and adding an LVM volume:<br />
<br />
$ virsh pool-define-as name type [source-host] [source-path] [source-dev] [source-name] [<target>] [--source-format format]<br />
$ virsh pool-define-as ''poolname'' dir - - - - /home/''username''/.local/libvirt/images<br />
$ virsh pool-define-as ''poolname'' fs - - ''/dev/vg0/images'' - ''mntpoint''<br />
<br />
The above command defines the information for the pool; to build, start, and autostart it:<br />
<br />
$ virsh pool-build ''poolname''<br />
$ virsh pool-start ''poolname''<br />
$ virsh pool-autostart ''poolname''<br />
<br />
To remove it:<br />
<br />
$ virsh pool-undefine ''poolname''<br />
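<br />
Equivalently, a pool can be defined from an XML file with {{ic|virsh pool-define ''file.xml''}}. A minimal sketch of the XML behind the directory-pool example above (the pool name and path are placeholders):<br />
<br />
 <pool type='dir'><br />
   <name>poolname</name><br />
   <target><br />
     <path>/home/username/.local/libvirt/images</path><br />
   </target><br />
 </pool><br />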
<br />
{{Tip|For LVM storage pools:<br />
* It is a good practice to dedicate a volume group to the storage pool only. <br />
* Choose a LVM volume group that differs from the pool name, otherwise when the storage pool is deleted the LVM group will be too.<br />
}}<br />
<br />
==== Create a new pool using virt-manager ====<br />
<br />
First, connect to a hypervisor (e.g. QEMU/KVM ''system'', or user-''session''). Then, right-click on a connection and select ''Details''; select the ''Storage'' tab, push the ''+'' button on the lower-left, and follow the wizard.<br />
<br />
=== Storage volumes ===<br />
<br />
Once the pool has been created, volumes can be created inside the pool. ''If building a new domain (virtual machine), this step can be skipped as a volume can be created in the domain creation process.''<br />
<br />
==== Create a new volume with virsh ====<br />
<br />
Create a volume, upload a file into it, list volumes, resize, delete, and dump details:<br />
 $ virsh vol-create-as ''poolname'' ''volumename'' 10GiB --format raw|bochs|qcow|qcow2|vmdk<br />
$ virsh vol-upload --pool ''poolname'' ''volumename'' ''volumepath''<br />
$ virsh vol-list ''poolname''<br />
$ virsh vol-resize --pool ''poolname'' ''volumename'' 12GiB<br />
$ virsh vol-delete --pool ''poolname'' ''volumename''<br />
$ virsh vol-dumpxml --pool ''poolname'' ''volumename'' # for details.<br />
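<br />
Volumes can likewise be created from an XML definition with {{ic|virsh vol-create ''poolname'' ''file.xml''}}. A minimal sketch of such a definition (the name, size, and format are placeholders matching the example above):<br />
<br />
 <volume><br />
   <name>volumename</name><br />
   <capacity unit='GiB'>10</capacity><br />
   <target><br />
     <format type='qcow2'/><br />
   </target><br />
 </volume><br />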
<br />
==== virt-manager backing store type bug ====<br />
<br />
On newer versions of {{ic|virt-manager}} you can specify a backing store to use when creating a new disk. This is very useful, as new domains can be based on a common base image, saving both time and disk space when provisioning new virtual systems. There is a [https://bugzilla.redhat.com/show_bug.cgi?id=1235406 bug] in the current version of {{ic|virt-manager}} which causes it to choose the wrong type for the backing image when the backing image is of type {{ic|qcow2}}: it errantly picks the backing type as {{ic|raw}}. This causes the new image to be unable to read from the backing store, effectively removing the utility of having a backing store at all.<br />
<br />
There is a workaround for this issue: {{ic|qemu-img}} has long been able to do this operation directly. If you wish to have a backing store for your new domain before this bug is fixed, use the following command:<br />
<br />
$ qemu-img create -f qcow2 -o backing_file=<path to backing image>,backing_fmt=qcow2 <disk name> <disk size><br />
<br />
Then you can use this image as the base for your new domain; it will use the backing store as a copy-on-write volume, saving you time and disk space.<br />
<br />
=== Domains ===<br />
<br />
Virtual machines are called ''domains''. If working from the command line, use {{ic|virsh}} to list, create, pause, shutdown domains, etc. {{ic|virt-viewer}} can be used to view domains started with {{ic|virsh}}. Creation of domains is typically done either graphically with {{ic|virt-manager}} or with {{ic|virt-install}} (a command line program that is part of the {{pkg|virt-manager}} package).<br />
<br />
Creating a new domain typically involves using some installation media, such as an {{ic|.iso}} from the storage pool or an optical drive.<br />
<br />
Print active and inactive domains:<br />
<br />
# virsh list --all<br />
<br />
{{note|[[SELinux]] has a built-in exemption for libvirt that allows volumes in {{ic|/var/lib/libvirt/images/}} to be accessed. If using SELinux and there are issues with the volumes, ensure that volumes are in that directory, or ensure that other storage pools are correctly labeled.}}<br />
<br />
==== Create a new domain using virt-install ====<br />
<br />
For an extremely detailed domain (virtual machine) setup, it is easier to [[#Create a new domain using virt-manager|use virt-manager]]. However, the basics can easily be done with {{ic|virt-install}} and still run quite well. Minimum specifications are {{ic|--name}}, {{ic|--memory}}, guest storage ({{ic|--disk}}, {{ic|--filesystem}}, or {{ic|--nodisks}}), and an install method (generally an {{ic|.iso}} or CD). See {{ic|man virt-install}} for more details and information about unlisted options.<br />
<br />
Arch Linux install (creates a 2 GiB qcow2 volume; user networking):<br />
<br />
$ virt-install \<br />
--name arch-linux_testing \<br />
 --memory 1024 \<br />
--vcpus=2,maxvcpus=4 \<br />
--cpu host \<br />
--cdrom $HOME/Downloads/arch-linux_install.iso \<br />
--disk size=2,format=qcow2 \<br />
--network user \<br />
--virt-type kvm<br />
<br />
Fedora testing (Xen hypervisor, non-default storage pool, do not open a console automatically):<br />
<br />
$ virt-install \<br />
--connect xen:/// \<br />
--name fedora-testing \<br />
--memory 2048 \<br />
--vcpus=2 \<br />
--cpu=host \<br />
 --cdrom /tmp/fedora20_x86_64.iso \<br />
--os-type=linux --os-variant=fedora20 \<br />
--disk pool=testing,size=4 \<br />
--network bridge=br0 \<br />
--graphics=vnc \<br />
--noautoconsole<br />
$ virt-viewer --connect xen:/// fedora-testing<br />
<br />
Windows:<br />
<br />
$ virt-install \<br />
--name=windows7 \<br />
--memory 2048 \<br />
--cdrom /dev/sr0 \<br />
--os-variant=win7 \<br />
 --disk /mnt/storage/domains/windows7.qcow2,size=20 \<br />
--network network=vm-net \<br />
--graphics spice<br />
<br />
{{Tip|Run {{ic|1=osinfo-query --fields=name,short-id,version os}} to get the argument for {{ic|--os-variant}}; this will help define some specifications for the domain. However, {{ic|--memory}} and {{ic|--disk}} still need to be entered; one can look within the appropriate {{ic|/usr/share/libosinfo/db/oses/''os''.xml}} if you need these specifications. After installing, it will likely be preferable to install the [http://www.spice-space.org/download.html Spice Guest Tools] that include the [https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/form-Virtualization_Host_Configuration_and_Guest_Installation_Guide-Para_virtualized_drivers-Mounting_the_image_with_virt_manager.html VirtIO drivers]. For a Windows VirtIO network driver there is also {{Aur|virtio-win}}. These drivers are referenced by a {{ic|1=<model type='virtio' />}} in the guest's {{ic|.xml}} configuration section for the device. A bit more information can also be found on the [[QEMU#Preparing_a_Windows_guest|QEMU article]].}}<br />
<br />
Import existing volume:<br />
<br />
$ virt-install \<br />
--name demo \<br />
--memory 512 \<br />
--disk /home/user/VMs/mydisk.img \<br />
--import<br />
<br />
==== Create a new domain using virt-manager ====<br />
<br />
First, connect to the hypervisor (e.g. QEMU/KVM ''system'' or user ''session''), right click on a connection and select ''New'', and follow the wizard.<br />
<br />
* On the ''fourth step'', de-selecting ''Allocate entire disk now'' will make setup quicker and can save disk space in the interim; ''however'', it may cause volume fragmentation over time.<br />
* On the ''fifth step'', open ''Advanced options'' and make sure that ''Virt Type'' is set to ''kvm'' (this is usually the preferred method). If additional hardware setup is required, select the ''Customize configuration before install'' option.<br />
<br />
==== Manage a domain ====<br />
<br />
Start a domain:<br />
<br />
$ virsh start ''domain''<br />
$ virt-viewer --connect qemu:///session ''domain''<br />
<br />
Gracefully attempt to shut down a domain; forcefully power off a domain:<br />
<br />
$ virsh shutdown ''domain''<br />
$ virsh destroy ''domain''<br />
<br />
Enable or disable autostart of a domain on libvirtd start:<br />
<br />
$ virsh autostart ''domain''<br />
$ virsh autostart ''domain'' --disable<br />
<br />
Shutdown domain on host shutdown:<br />
<br />
: Running domains can be automatically suspended/shut down at host shutdown using the {{ic|libvirt-guests.service}} systemd service. The same service will resume/start the suspended/shut down domains automatically at host startup. See {{ic|/etc/conf.d/libvirt-guests}} for the service options.<br />
<br />
Edit a domain's XML configuration:<br />
<br />
$ virsh edit ''domain''<br />
<br />
{{note|Virtual machines started directly by QEMU are not manageable by libvirt tools.}}<br />
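<br />
A domain's XML definition covers every aspect of the machine. A heavily trimmed sketch of the kind of structure {{ic|virsh edit}} presents (the names and paths here are placeholders; real definitions contain many more elements):<br />
<br />
 <domain type='kvm'><br />
   <name>arch-linux_testing</name><br />
   <memory unit='KiB'>1048576</memory><br />
   <vcpu>2</vcpu><br />
   <os><br />
     <type arch='x86_64' machine='pc'>hvm</type><br />
   </os><br />
   <devices><br />
     <disk type='file' device='disk'><br />
       <driver name='qemu' type='qcow2'/><br />
       <source file='/var/lib/libvirt/images/arch-linux_testing.qcow2'/><br />
       <target dev='vda' bus='virtio'/><br />
     </disk><br />
   </devices><br />
 </domain><br />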
<br />
=== Networks ===<br />
<br />
A [https://jamielinux.com/docs/libvirt-networking-handbook/ decent overview of libvirt networking].<br />
<br />
By default, when the {{ic|libvirtd}} systemd service is started, a NAT bridge called ''default'' is created to allow external network connectivity (warning, see: [[#"default" network bug]]). For other network connectivity needs, four network types exist that a domain can be connected to:<br />
<br />
* bridge — a virtual device; shares data directly with a physical interface. Use this if the host has ''static'' networking, it does not need to connect to other domains, the domain requires full inbound and outbound traffic, and the domain is running on a ''system'' level. See [[Network bridge]] on how to add a bridge additional to the default one. After creation, it needs to be specified in the respective guest's {{ic|.xml}} configuration file. <br />
* network — a virtual network; has ability to share with other domains. Use a virtual network if the host has ''dynamic'' networking (e.g. NetworkManager), or using wireless.<br />
* macvtap — connect directly to a host physical interface.<br />
* user — local networking. Use this only for a user ''session''.<br />
<br />
{{ic|virsh}} is able to create networks with numerous options; however, for most users it is easier to create network connectivity with a graphical user interface (like {{ic|virt-manager}}), or to do so on [[#Create a new domain using virt-install|creation with virt-install]].<br />
<br />
{{note|libvirt handles DHCP and DNS with {{pkg|dnsmasq}}, launching a separate instance for every virtual network. It also adds iptables rules for proper routing, and enables the {{ic|ip_forward}} kernel parameter.}}<br />
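<br />
As an illustration, the XML behind the ''default'' NAT network can be printed with {{ic|virsh net-dumpxml default}}; it looks roughly like the following (the {{ic|uuid}} and {{ic|mac}} elements are omitted here, and the bridge name and addresses may differ on your system):<br />
<br />
 <network><br />
   <name>default</name><br />
   <forward mode='nat'/><br />
   <bridge name='virbr0' stp='on' delay='0'/><br />
   <ip address='192.168.122.1' netmask='255.255.255.0'><br />
     <dhcp><br />
       <range start='192.168.122.2' end='192.168.122.254'/><br />
     </dhcp><br />
   </ip><br />
 </network><br />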
<br />
=== Snapshots ===<br />
<br />
Snapshots take the disk, memory, and device state of a domain at a point-of-time, and save it for future use. They have many uses, from saving a "clean" copy of an OS image to saving a domain's state before a potentially destructive operation. Snapshots are identified with a unique name.<br />
<br />
Snapshots are saved within the volume itself, and the volume must be in qcow2 or raw format. Snapshots use deltas, so they have the potential to not take up much space.<br />
<br />
==== Create a snapshot ====<br />
<br />
{{Out of date|Some of this data appears to be dated.}}<br />
<br />
Once a snapshot is taken, it is saved as a new block device and the original image becomes a read-only backing file. Snapshots can be chosen from and also merged into one another (even without shutting down the domain).<br />
<br />
Print a running domain's volumes (running domains can be printed with {{ic|virsh list}}):<br />
<br />
{{hc|# virsh domblklist ''domain''|<nowiki><br />
Target Source<br />
------------------------------------------------<br />
vda /vms/domain.img<br />
</nowiki>}}<br />
<br />
To see a volume's physical properties:<br />
<br />
{{hc|# qemu-img info /vms/domain.img|<nowiki><br />
image: /vms/domain.img<br />
file format: qcow2<br />
virtual size: 50G (53687091200 bytes)<br />
disk size: 2.1G<br />
cluster_size: 65536<br />
</nowiki>}}<br />
<br />
Create a disk-only snapshot (the option {{ic|--atomic}} will prevent the volume from being modified if snapshot creation fails):<br />
<br />
# virsh snapshot-create-as ''domain'' snapshot1 --disk-only --atomic<br />
<br />
List snapshots:<br />
<br />
{{hc|# virsh snapshot-list ''domain''|<nowiki><br />
Name Creation Time State<br />
------------------------------------------------------------<br />
snapshot1 2012-10-21 17:12:57 -0700 disk-snapshot<br />
</nowiki>}}<br />
<br />
One can then copy the original image with {{ic|1=cp --sparse=true}} or {{ic|rsync -S}} and then merge the original back into the snapshot:<br />
<br />
# virsh blockpull --domain ''domain'' --path /vms/''domain''.snapshot1<br />
<br />
{{ic|domain.snapshot1}} becomes a new volume. After this is done, the original volume ({{ic|domain.img}}) and the snapshot metadata can be deleted. {{ic|virsh blockcommit}} would work in the opposite direction of {{ic|blockpull}}, but it seems to be currently under development (including the {{ic|snapshot-revert}} feature), scheduled to be released sometime next year.<br />
<br />
=== Other management ===<br />
<br />
Connect to non-default hypervisor:<br />
<br />
$ virsh --connect xen:///<br />
virsh # uri<br />
xen:///<br />
<br />
Connect to the QEMU hypervisor over SSH, and the same with debug logging enabled:<br />
<br />
$ virsh --connect qemu+ssh://''username''@''host''/system<br />
$ LIBVIRT_DEBUG=1 virsh --connect qemu+ssh://''username''@''host''/system<br />
<br />
Connect a graphic console over SSH:<br />
<br />
$ virt-viewer --connect qemu+ssh://''username''@''host''/system ''domain''<br />
$ virt-manager --connect qemu+ssh://''username''@''host''/system ''domain''<br />
<br />
{{Note|If you are having problems connecting to a remote RHEL server (or anything other than Arch, really), try the two workarounds mentioned in {{bug|30748}} and {{bug|22068}}.}}<br />
<br />
Connect to the VirtualBox hypervisor (''VirtualBox support in libvirt is not stable yet and may cause libvirtd to crash''):<br />
<br />
$ virsh --connect vbox:///system<br />
<br />
Network configurations:<br />
<br />
$ virsh -c qemu:///system net-list --all<br />
$ virsh -c qemu:///system net-dumpxml default<br />
<br />
== Python connectivity code ==<br />
<br />
The {{Pkg|libvirt-python}} package provides a {{Pkg|python2}} API in {{ic|/usr/lib/python2.7/site-packages/libvirt.py}}.<br />
<br />
General examples are given in {{ic|/usr/share/doc/libvirt-python-''your_libvirt_version''/examples/}}<br />
<br />
Unofficial example using {{Pkg|qemu}} and {{Pkg|openssh}}:<br />
<br />
 #! /usr/bin/env python2<br />
 # -*- coding: utf-8 -*-<br />
 import socket<br />
 import sys<br />
 import libvirt<br />
 if (__name__ == "__main__"):<br />
     <nowiki>conn = libvirt.open("qemu+ssh://xxx/system")</nowiki><br />
     print "Trying to find node on xxx"<br />
     domains = conn.listDomainsID()<br />
     for domainID in domains:<br />
         domConnect = conn.lookupByID(domainID)<br />
         if domConnect.name() == 'xxx-node':<br />
             print "Found shared node on xxx with ID " + str(domainID)<br />
             domServ = domConnect<br />
             break<br />
<br />
== UEFI Support ==<br />
<br />
For UEFI support you need to install the OVMF firmware from [https://www.kraxel.org/repos/jenkins/edk2/ Gerd Hoffmann's repository].<br />
<br />
[[Install]] {{Pkg|qemu}} and {{Pkg|rpmextract}}. Then download the {{ic|edk2.git-ovmf-x64}} package and extract it to {{ic|/usr}}:<br />
<br />
{{bc|# rpmextract.sh edk2.git-ovmf-x64-0-20150223.b877.ga8577b3.noarch.rpm<br />
# cp -R ./usr/share/* /usr/share}}<br />
<br />
Then point libvirt at the OVMF files in {{ic|1=/etc/libvirt/qemu.conf}} by setting:<br />
<br />
 nvram = [<br />
   "/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd:/usr/share/edk2.git/ovmf-x64/OVMF_VARS-pure-efi.fd",<br />
 ]<br />
<br />
Then restart libvirtd:<br />
<br />
 # systemctl restart libvirtd.service<br />
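<br />
The configured firmware is then referenced in a UEFI domain's XML via a ''pflash'' loader. A sketch of the relevant {{ic|os}} element (the nvram path is an assumption for illustration; libvirt normally manages a per-domain copy of the VARS file):<br />
<br />
 <os><br />
   <loader readonly='yes' type='pflash'>/usr/share/edk2.git/ovmf-x64/OVMF_CODE-pure-efi.fd</loader><br />
   <nvram>/var/lib/libvirt/qemu/nvram/''domain''_VARS.fd</nvram><br />
 </os><br />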
<br />
== See also ==<br />
<br />
* [http://libvirt.org/drvqemu.html Official libvirt web site]<br />
* [https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/index.html Red Hat Virtualization Deployment and Administration Guide]<br />
* [https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/index.html Red Hat Virtualization Tuning and Optimization Guide]<br />
* [http://docs.slackware.com/howtos:general_admin:kvm_libvirt Slackware KVM and libvirt]<br />
* [http://www-01.ibm.com/support/knowledgecenter/linuxonibm/liaat/liaatkvm.htm IBM KVM]<br />
* [https://jamielinux.com/docs/libvirt-networking-handbook/ libvirt Networking Handbook]</div>IronOrionhttps://wiki.archlinux.org/index.php?title=Libvirt&diff=428167Libvirt2016-03-27T22:16:53Z<p>IronOrion: Added PCI passthrough via OVMF as related article for easier access to people wanting to set up a vm with PCI passthrough via OVMF</p>
<hr />
<div>{{DISPLAYTITLE:libvirt}}<br />
[[Category:Virtualization]]<br />
[[ja:libvirt]]<br />
[[zh-CN:Libvirt]]<br />
{{Related articles start}}<br />
{{Related|:Category:Hypervisors}}<br />
{{Related|:PCI passthrough via OVMF}}<br />
{{Related articles end}}<br />
<br />
Libvirt is a collection of software that provides a convenient way to manage virtual machines and other virtualization functionality, such as storage and network interface management. These software pieces include a long-term stable C API, a daemon (libvirtd), and a command line utility (virsh). A primary goal of libvirt is to provide a single way to manage multiple different virtualization providers/hypervisors, such as the [[QEMU|KVM/QEMU]], [[Xen]], [[LXC]], [http://openvz.org OpenVZ] or [[VirtualBox]] [[:Category:Hypervisors|hypervisors]] ([http://libvirt.org/drivers.html among others]).<br />
<br />
Some of the major libvirt features are:<br />
*'''VM management''': Various domain lifecycle operations such as start, stop, pause, save, restore, and migrate. Hotplug operations for many device types, including disk and network interfaces, memory, and CPUs.<br />
*'''Remote machine support''': All libvirt functionality is accessible on any machine running the libvirt daemon, including remote machines. A variety of network transports are supported for connecting remotely, with the simplest being SSH, which requires no extra explicit configuration.<br />
*'''Storage management''': Any host running the libvirt daemon can be used to manage various types of storage: create file images of various formats (qcow2, vmdk, raw, ...), mount NFS shares, enumerate existing LVM volume groups, create new LVM volume groups and logical volumes, partition raw disk devices, mount iSCSI shares, and much more.<br />
*'''Network interface management''': Any host running the libvirt daemon can be used to manage physical and logical network interfaces. Enumerate existing interfaces, as well as configure (and create) interfaces, bridges, vlans, and bond devices.<br />
*'''Virtual NAT and Route based networking''': Any host running the libvirt daemon can manage and create virtual networks. Libvirt virtual networks use firewall rules to act as a router, providing VMs transparent access to the host machines network.<br />
<br />
== Installation ==<br />
<br />
Because of its daemon/client architecture, libvirt only needs to be installed on the machine that will host the virtualized system. Note that the server and client can be the same physical machine.<br />
<br />
=== Server ===<br />
<br />
[[Install]] the {{pkg|libvirt}} package, as well as at least one hypervisor:<br />
<br />
* As of 2015-02-01, {{ic|libvirtd}} '''requires''' {{Pkg|qemu}} to be installed on the system to start (see {{Bug|41888}}). Fortunately, the [http://libvirt.org/drvqemu.html libvirt KVM/QEMU driver] is the primary ''libvirt'' driver and if [[QEMU#Enabling_KVM|KVM is enabled]], fully virtualized, hardware accelerated guests will be available. See the [[QEMU]] article for more information.<br />
<br />
* Other virtualization backends include [[LXC]], [[VirtualBox]] and [[Xen]]. See their respective page for installation instructions.<br />
:{{Note|The [http://libvirt.org/drvlxc.html libvirt LXC driver] has no dependency on the [[LXC]] userspace tools provided by {{Pkg|lxc}}, therefore there is no need to install it if planning on using this driver.}}<br />
:{{Warning|[[Xen]] support is available but not by default. You need to use the [[ABS]] to modify {{Pkg|libvirt}}'s [[PKGBUILD]] and build it without the {{ic|--without-xen}} option.}}<br />
<br />
Other supported hypervisors are listed [http://libvirt.org/drivers.html here].<br />
<br />
For network connectivity, install: <br />
<br />
* {{Pkg|ebtables}} '''and''' {{Pkg|dnsmasq}} for the [http://wiki.libvirt.org/page/VirtualNetworking#The_default_configuration default] NAT/DHCP networking.<br />
* {{Pkg|bridge-utils}} for bridged networking.<br />
* {{Pkg|openbsd-netcat}} for remote management over [[SSH]].<br />
<br />
=== Client ===<br />
<br />
The client is the user interface that will be used to manage and access the virtual machines.<br />
<br />
* ''virsh'' is a command line program for managing and configuring domains; it is included in the {{Pkg|libvirt}} package.<br />
* {{Pkg|virt-manager}} is a graphical user interface for managing virtual machines.<br />
* {{Pkg|virt-viewer}} is a lightweight interface for interacting with the graphical display of a virtualized guest OS.<br />
* {{Pkg|gnome-boxes}} is a simple GNOME 3 application to access remote or virtual systems.<br />
* {{AUR|virt-manager-qt5}}<br />
* {{AUR|libvirt-sandbox}} is an application sandbox toolkit.<br />
<br />
A list of libvirt-compatible software can be found [http://libvirt.org/apps.html here].<br />
<br />
== Configuration ==<br />
<br />
For '''''system'''''-level administration (i.e. global settings and image-''volume'' location), libvirt minimally requires [[#Set up authentication|setting up authorization]], and [[#Daemon|starting the daemon]].<br />
<br />
{{Note|For user-'''''session''''' administration, daemon setup and configuration is ''not'' required; authorization, however, is limited to local abilities; the front-end will launch a local instance of the '''libvirtd''' daemon.}}<br />
<br />
=== Set up authentication ===<br />
<br />
From [http://libvirt.org/auth.html#ACL_server_config libvirt: Connection authentication]:<br />
:The libvirt daemon allows the administrator to choose the authentication mechanisms used for client connections on each network socket independently. This is primarily controlled via the libvirt daemon master config file in {{ic|/etc/libvirt/libvirtd.conf}}. Each of the libvirt sockets can have its authentication mechanism configured independently. There is currently a choice of {{ic|none}}, {{ic|polkit}} and {{ic|sasl}}. <br />
<br />
Because {{Pkg|libvirt}} pulls {{Pkg|polkit}} as a dependency during installation, [[#Using polkit|polkit]] is used as the default value for the {{ic|unix_sock_auth}} parameter ([http://libvirt.org/auth.html#ACL_server_polkit source]). [[#Authenticate with file-based permissions|File-based permissions]] remain nevertheless available.<br />
<br />
==== Using polkit ====<br />
{{Note|A system reboot may be required before authenticating with {{ic|polkit}} works correctly.}}<br />
<br />
The ''libvirt'' daemon provides two [[Polkit#Actions|polkit actions]] in {{ic|/usr/share/polkit-1/actions/org.libvirt.unix.policy}}:<br />
* {{ic|org.libvirt.unix.manage}} for full management access (RW daemon socket), and<br />
* {{ic|org.libvirt.unix.monitor}} for monitoring only access (read-only socket).<br />
<br />
The default policy for the RW daemon socket will require to authenticate as an admin. This is akin to [[sudo]] auth, but does not require that the client application ultimately run as root. Default policy will still allow any application to connect to the RO socket.<br />
<br />
Arch defaults to consider anybody in the {{ic|wheel}} group as an administrator: this is defined in {{ic|/etc/polkit-1/rules.d/50-default.rules}} (see [[Polkit#Administrator identities]]). Therefore there is no need to create a new group and rule file '''if your user is a member of the {{ic|wheel}} group''': upon connection to the RW socket (e.g. via {{Pkg|virt-manager}}) you will be prompted for your user's password.<br />
<br />
{{Note|Prompting for a password relies on the presence of an [[Polkit#Authentication_agents|authentication agent]] on the system. Console users may face an issue with the default {{ic|pkttyagent}} agent which may or may not work properly.}}<br />
<br />
{{Tip|If you want to configure passwordless authentication, see [[Polkit#Bypass password prompt]].}}<br />
<br />
As of libvirt 1.2.16 (commit:[http://libvirt.org/git/?p=libvirt.git;a=commit;h=e94979e901517af9fdde358d7b7c92cc055dd50c]), members of the {{ic|libvirt}} group have passwordless access to the RW daemon socket by default. The easiest way to ensure your user has access is to ensure the libvirt group exists and they are a member of it. If you wish to change the group authorized to access the RW daemon socket to be the kvm group, create the following file:<br />
<br />
{{hc|/etc/polkit-1/rules.d/50-libvirt.rules|<nowiki><br />
/* Allow users in kvm group to manage the libvirt<br />
daemon without authentication */<br />
polkit.addRule(function(action, subject) {<br />
if (action.id == "org.libvirt.unix.manage" &&<br />
subject.isInGroup("kvm")) {<br />
return polkit.Result.YES;<br />
}<br />
});</nowiki><br />
}}<br />
<br />
Then [[Users_and_groups#Other_examples_of_user_management|add yourself]] to the {{ic|kvm}} group and relogin. Replace ''kvm'' with any group of your preference just make sure it exists and that your user is a member of it (see [[Users and groups]] for more information).<br />
<br />
Do not forget to relogin for group changes to take effect.<br />
<br />
==== Authenticate with file-based permissions ====<br />
<br />
To define file-based permissions for users in the ''libvirt'' group to manage virtual machines, uncomment and define:<br />
<br />
{{hc|/etc/libvirt/libvirtd.conf|<nowiki><br />
#unix_sock_group = "libvirt"<br />
#unix_sock_ro_perms = "0777" # set to 0770 to deny non-group libvirt users<br />
#unix_sock_rw_perms = "0770"<br />
#auth_unix_ro = "none"<br />
#auth_unix_rw = "none"<br />
</nowiki>}}<br />
<br />
While some guides mention changed permissions of certain libvirt directories to ease management, keep in mind permissions are lost on package update. To edit these system directories, root user is expected.<br />
<br />
=== Daemon ===<br />
<br />
[[Start]] both {{ic|libvirtd.service}} and {{ic|virtlogd.service}}. Optionally [[enable]] {{ic|libvirtd.service}}. There is no need to enable {{ic|virtlogd.service}}, since {{ic|libvirtd.service}}, when enabled, also enables the {{ic|virtlogd.socket}} and {{ic|virtlockd.socket}} [[Systemd#Using_units|units]].<br />
<br />
=== Unencrypt TCP/IP sockets ===<br />
<br />
{{Warning|This method is used to help remote domain, connection speed for trusted networks. This is the least secure connection method. This should ''only'' be used for testing or use over a secure, private, and trusted network. SASL is not enabled here, so all TCP traffic is ''cleartext''. For real world ''always'' use enable SASL.}}<br />
<br />
Edit {{ic|/etc/libvirt/libvirtd.conf}}:<br />
{{hc|/etc/libvirt/libvirtd.conf|<nowiki><br />
listen_tls = 0<br />
listen_tcp = 1<br />
auth_tcp=none<br />
</nowiki>}}<br />
<br />
It is also necessary to start the server in listening mode by editing {{ic|/etc/conf.d/libvirtd}}:<br />
<br />
{{hc|/etc/conf.d/libvirtd|2=LIBVIRTD_ARGS="--listen"}}<br />
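<br />
After restarting {{ic|libvirtd.service}}, a remote client can connect over the unencrypted TCP transport (TCP port 16509 by default; replace ''host'' with the server's hostname):<br />
<br />
$ virsh --connect qemu+tcp://''host''/system<br />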
<br />
== Test ==<br />
<br />
To test if libvirt is working properly on a ''system'' level:<br />
<br />
$ virsh -c qemu:///system<br />
<br />
To test if libvirt is working properly for a user-''session'':<br />
<br />
$ virsh -c qemu:///session<br />
<br />
== Management ==<br />
<br />
Libvirt management is done mostly with three tools: {{Pkg|virt-manager}} (GUI), {{ic|virsh}}, and {{ic|guestfish}} (which is part of {{AUR|libguestfs}}).<br />
<br />
=== virsh ===<br />
<br />
The virsh program is for managing guest ''domains'' (virtual machines) and works well for scripting and virtualization administration. Though most virsh commands require root privileges to run due to the communication channels used to talk to the hypervisor, typical management, creation, and running of domains (like that done with VirtualBox) can be done as a regular user.<br />
<br />
Virsh includes an interactive terminal that can be entered if no commands are passed (options are allowed though): {{ic|virsh}}. The interactive terminal has support for tab completion.<br />
<br />
From the command line:<br />
<br />
$ virsh [option] <command> [argument]...<br />
<br />
From the interactive terminal:<br />
<br />
virsh # <command> [argument]...<br />
<br />
Help is available:<br />
<br />
$ virsh help [option*] or [group-keyword*]<br />
<br />
=== Storage pools ===<br />
<br />
A pool is a location where storage ''volumes'' can be kept. What libvirt defines as ''volumes'' others may define as "virtual disks" or "virtual machine images". Pool locations may be a directory, a network filesystem, or a partition (this includes [[LVM]]). Pools can be toggled active or inactive and have space allocated to them.<br />
<br />
On the ''system''-level, {{ic|/var/lib/libvirt/images/}} will be activated by default; on a user-''session'', {{ic|virt-manager}} creates {{ic|$HOME/VirtualMachines}}.<br />
<br />
Print active and inactive storage pools:<br />
<br />
$ virsh pool-list --all<br />
<br />
==== Create a new pool using virsh ====<br />
<br />
To ''add'' a storage pool, here are examples of the general command form, adding a directory, and adding an LVM volume:<br />
<br />
$ virsh pool-define-as name type [source-host] [source-path] [source-dev] [source-name] [<target>] [--source-format format]<br />
$ virsh pool-define-as ''poolname'' dir - - - - /home/''username''/.local/libvirt/images<br />
$ virsh pool-define-as ''poolname'' fs - - ''/dev/vg0/images'' - ''mntpoint''<br />
<br />
The above command defines the information for the pool, to build it:<br />
<br />
$ virsh pool-build ''poolname''<br />
$ virsh pool-start ''poolname''<br />
$ virsh pool-autostart ''poolname''<br />
<br />
To remove it:<br />
<br />
$ virsh pool-undefine ''poolname''<br />
<br />
{{Tip|For LVM storage pools:<br />
* It is a good practice to dedicate a volume group to the storage pool only.<br />
* Choose an LVM volume group name that differs from the pool name; otherwise, when the storage pool is deleted, the LVM group will be deleted too.<br />
}}<br />
<br />
==== Create a new pool using virt-manager ====<br />
<br />
First, connect to a hypervisor (e.g. QEMU/KVM ''system'', or user-''session''). Then, right-click on a connection and select ''Details''; select the ''Storage'' tab, push the ''+'' button on the lower-left, and follow the wizard.<br />
<br />
=== Storage volumes ===<br />
<br />
Once the pool has been created, volumes can be created inside the pool. ''If building a new domain (virtual machine), this step can be skipped as a volume can be created in the domain creation process.''<br />
<br />
==== Create a new volume with virsh ====<br />
<br />
Create volume, list volumes, resize, and delete:<br />
$ virsh vol-create-as ''poolname'' ''volumename'' 10GiB --format raw|bochs|qcow|qcow2|vmdk<br />
$ virsh vol-upload --pool ''poolname'' ''volumename'' ''volumepath''<br />
$ virsh vol-list ''poolname''<br />
$ virsh vol-resize --pool ''poolname'' ''volumename'' 12GiB<br />
$ virsh vol-delete --pool ''poolname'' ''volumename''<br />
$ virsh vol-dumpxml --pool ''poolname'' ''volumename'' # for details.<br />
<br />
==== virt-manager backing store type bug ====<br />
<br />
Newer versions of {{ic|virt-manager}} let you specify a backing store to use when creating a new disk. This is very useful, as new domains can be based on base images, saving both time and disk space when provisioning new virtual systems. There is a [https://bugzilla.redhat.com/show_bug.cgi?id=1235406 bug] in the current version of {{ic|virt-manager}} that causes it to choose the wrong type for the backing image when the backing image is of {{ic|qcow2}} type: it will errantly pick {{ic|raw}} as the backing type. This makes the new image unable to read from the backing store, effectively removing the utility of having a backing store at all.<br />
<br />
There is a workaround for this issue: {{ic|qemu-img}} has long been able to perform this operation directly. To use a backing store for a new domain before this bug is fixed, use the following command.<br />
<br />
$ qemu-img create -f qcow2 -o backing_file=<path to backing image>,backing_fmt=qcow2 <disk name> <disk size><br />
<br />
Then you can use this image as the base for your new domain and it will use the backing store as a COW volume saving you time and disk space.<br />
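<br />
For example, assuming a hypothetical base image {{ic|base.qcow2}} in the current directory, a 20 GiB copy-on-write overlay could be created with:<br />
<br />
$ qemu-img create -f qcow2 -o backing_file=base.qcow2,backing_fmt=qcow2 new-domain.qcow2 20G<br />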
<br />
=== Domains ===<br />
<br />
Virtual machines are called ''domains''. If working from the command line, use {{ic|virsh}} to list, create, pause, shutdown domains, etc. {{ic|virt-viewer}} can be used to view domains started with {{ic|virsh}}. Creation of domains is typically done either graphically with {{ic|virt-manager}} or with {{ic|virt-install}} (a command line program that is part of the {{pkg|virt-manager}} package).<br />
<br />
Creating a new domain typically involves using some installation media, such as an {{ic|.iso}} from the storage pool or an optical drive.<br />
<br />
Print active and inactive domains:<br />
<br />
# virsh list --all<br />
<br />
{{note|[[SELinux]] has a built-in exemption for libvirt that allows volumes in {{ic|/var/lib/libvirt/images/}} to be accessed. If using SELinux and there are issues with the volumes, ensure that volumes are in that directory, or ensure that other storage pools are correctly labeled.}}<br />
<br />
==== Create a new domain using virt-install ====<br />
<br />
For an extremely detailed domain (virtual machine) setup, it is easier to [[#Create a new domain using virt-manager|use virt-manager]]. However, the basics can easily be done with {{ic|virt-install}} and still run quite well. Minimum specifications are {{ic|--name}}, {{ic|--memory}}, guest storage ({{ic|--disk}}, {{ic|--filesystem}}, or {{ic|--nodisks}}), and an install method (generally an {{ic|.iso}} or CD). See {{ic|man virt-install}} for more details and information about unlisted options.<br />
<br />
Arch Linux install (creates a two GiB qcow2 format volume; user networking):<br />
<br />
$ virt-install \<br />
--name arch-linux_testing \<br />
--memory 1024 \<br />
--vcpus=2,maxvcpus=4 \<br />
--cpu host \<br />
--cdrom $HOME/Downloads/arch-linux_install.iso \<br />
--disk size=2,format=qcow2 \<br />
--network user \<br />
--virt-type kvm<br />
<br />
Fedora testing (Xen hypervisor, non-default pool, do not view the console initially):<br />
<br />
$ virt-install \<br />
--connect xen:/// \<br />
--name fedora-testing \<br />
--memory 2048 \<br />
--vcpus=2 \<br />
--cpu=host \<br />
--cdrom /tmp/fedora20_x86_64.iso \<br />
--os-type=linux --os-variant=fedora20 \<br />
--disk pool=testing,size=4 \<br />
--network bridge=br0 \<br />
--graphics=vnc \<br />
--noautoconsole<br />
$ virt-viewer --connect xen:/// fedora-testing<br />
<br />
Windows:<br />
<br />
$ virt-install \<br />
--name=windows7 \<br />
--memory 2048 \<br />
--cdrom /dev/sr0 \<br />
--os-variant=win7 \<br />
--disk /mnt/storage/domains/windows7.qcow2,size=20GiB \<br />
--network network=vm-net \<br />
--graphics spice<br />
<br />
{{Tip|Run {{ic|1=osinfo-query --fields=name,short-id,version os}} to get the argument for {{ic|--os-variant}}; this will help define some specifications for the domain. However, {{ic|--memory}} and {{ic|--disk}} will still need to be entered; one can look in the appropriate {{ic|/usr/share/libosinfo/db/oses/''os''.xml}} if these specifications are needed. After installing, it will likely be preferable to install the [http://www.spice-space.org/download.html Spice Guest Tools] that include the [https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Virtualization_Host_Configuration_and_Guest_Installation_Guide/form-Virtualization_Host_Configuration_and_Guest_Installation_Guide-Para_virtualized_drivers-Mounting_the_image_with_virt_manager.html VirtIO drivers]. For a Windows VirtIO network driver there is also {{Aur|virtio-win}}. These drivers are referenced by a {{ic|1=<model type='virtio' />}} in the guest's {{ic|.xml}} configuration section for the device. A bit more information can also be found in the [[QEMU#Preparing_a_Windows_guest|QEMU article]].}}<br />
<br />
Import existing volume:<br />
<br />
$ virt-install \<br />
--name demo \<br />
--memory 512 \<br />
--disk /home/user/VMs/mydisk.img \<br />
--import<br />
<br />
==== Create a new domain using virt-manager ====<br />
<br />
First, connect to the hypervisor (e.g. QEMU/KVM ''system'' or user ''session''), right click on a connection and select ''New'', and follow the wizard.<br />
<br />
* On the ''fourth step'', de-selecting ''Allocate entire disk now'' will make setup quicker and can save disk space in the interim; ''however'', it may cause volume fragmentation over time.<br />
* On the ''fifth step'', open ''Advanced options'' and make sure that ''Virt Type'' is set to ''kvm'' (this is usually the preferred method). If additional hardware setup is required, select the ''Customize configuration before install'' option.<br />
<br />
==== Manage a domain ====<br />
<br />
Start a domain:<br />
<br />
$ virsh start ''domain''<br />
$ virt-viewer --connect qemu:///session ''domain''<br />
<br />
Gracefully attempt to shutdown a domain; force off a domain:<br />
<br />
$ virsh shutdown ''domain''<br />
$ virsh destroy ''domain''<br />
<br />
Autostart domain on libvirtd start:<br />
<br />
$ virsh autostart ''domain''<br />
$ virsh autostart ''domain'' --disable<br />
<br />
Shutdown domain on host shutdown:<br />
<br />
: Running domains can be automatically suspended/shutdown at host shutdown using the {{ic|libvirt-guests.service}} systemd service. This same service will resume/startup the suspended/shutdown domain automatically at host startup. Read {{ic|/etc/conf.d/libvirt-guests}} for service options.<br />
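<br />
: A minimal sketch of that file, assuming suspended-state handling is wanted (the option names follow the stock {{ic|libvirt-guests}} configuration; the values here are examples):<br />
<br />
{{hc|/etc/conf.d/libvirt-guests|<nowiki><br />
# action taken at host boot for guests saved at shutdown<br />
ON_BOOT=start<br />
# suspend (managed save) running guests at host shutdown<br />
ON_SHUTDOWN=suspend<br />
# seconds to wait for a guest to shut down before proceeding<br />
SHUTDOWN_TIMEOUT=120<br />
</nowiki>}}<br />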
<br />
Edit a domain's XML configuration:<br />
<br />
$ virsh edit ''domain''<br />
<br />
{{note|Virtual machines started directly by QEMU are not manageable by libvirt tools.}}<br />
<br />
=== Networks ===<br />
<br />
A [https://jamielinux.com/docs/libvirt-networking-handbook/ decent overview of libvirt networking].<br />
<br />
By default, when the {{ic|libvirtd}} systemd service is started, a NAT bridge called ''default'' is created to allow external network connectivity (warning, see: [[#"default" network bug]]). For other network connectivity needs, four network types exist that a domain can be connected to:<br />
<br />
* bridge — a virtual device; shares data directly with a physical interface. Use this if the host has ''static'' networking, it does not need to connect to other domains, the domain requires full inbound and outbound traffic, and the domain is running at the ''system'' level. See [[Network bridge]] on how to add a bridge in addition to the default one. After creation, it needs to be specified in the respective guest's {{ic|.xml}} configuration file.<br />
* network — a virtual network; has ability to share with other domains. Use a virtual network if the host has ''dynamic'' networking (e.g. NetworkManager), or using wireless.<br />
* macvtap — connect directly to a host physical interface.<br />
* user — local networking. Use this only for a user ''session''.<br />
<br />
{{ic|virsh}} can create networks with numerous options; however, for most users it is easier to create network connectivity with a graphical user interface (like {{ic|virt-manager}}), or to do so on [[#Create a new domain using virt-install|creation with virt-install]].<br />
<br />
{{note|libvirt handles DHCP and DNS with {{pkg|dnsmasq}}, launching a separate instance for every virtual network. It also adds iptables rules for proper routing, and enables the {{ic|ip_forward}} kernel parameter.}}<br />
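<br />
As a sketch, a NAT-type virtual ''network'' can also be defined from an XML file with {{ic|virsh}} (the network name ''vm-net'' and the addresses below are example values):<br />
<br />
{{hc|vm-net.xml|<nowiki><br />
<network><br />
  <name>vm-net</name><br />
  <forward mode='nat'/><br />
  <ip address='192.168.100.1' netmask='255.255.255.0'><br />
    <dhcp><br />
      <range start='192.168.100.2' end='192.168.100.254'/><br />
    </dhcp><br />
  </ip><br />
</network><br />
</nowiki>}}<br />
<br />
$ virsh net-define vm-net.xml<br />
$ virsh net-start vm-net<br />
$ virsh net-autostart vm-net<br />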
<br />
=== Snapshots ===<br />
<br />
Snapshots take the disk, memory, and device state of a domain at a point-of-time, and save it for future use. They have many uses, from saving a "clean" copy of an OS image to saving a domain's state before a potentially destructive operation. Snapshots are identified with a unique name.<br />
<br />
Snapshots are saved within the volume itself, and the volume must be in qcow2 or raw format. Snapshots use deltas, so they need not take up much space.<br />
<br />
==== Create a snapshot ====<br />
<br />
{{Out of date|Some of this data appears to be dated.}}<br />
<br />
Once a snapshot is taken, it is saved as a new block device and the original image is taken offline. Snapshots can be chosen from and also merged into another (even without shutting down the domain).<br />
<br />
Print a running domain's volumes (running domains can be printed with {{ic|virsh list}}):<br />
<br />
{{hc|# virsh domblklist ''domain''|<nowiki><br />
Target Source<br />
------------------------------------------------<br />
vda /vms/domain.img<br />
</nowiki>}}<br />
<br />
To see a volume's physical properties:<br />
<br />
{{hc|# qemu-img info /vms/domain.img|<nowiki><br />
image: /vms/domain.img<br />
file format: qcow2<br />
virtual size: 50G (53687091200 bytes)<br />
disk size: 2.1G<br />
cluster_size: 65536<br />
</nowiki>}}<br />
<br />
Create a disk-only snapshot (the option {{ic|--atomic}} will prevent the volume from being modified if snapshot creation fails):<br />
<br />
# virsh snapshot-create-as ''domain'' snapshot1 --disk-only --atomic<br />
<br />
List snapshots:<br />
<br />
{{hc|# virsh snapshot-list ''domain''|<nowiki><br />
Name Creation Time State<br />
------------------------------------------------------------<br />
snapshot1 2012-10-21 17:12:57 -0700 disk-snapshot<br />
</nowiki>}}<br />
<br />
One can then copy the original image with {{ic|1=cp --sparse=true}} or {{ic|rsync -S}} and then merge the original back into the snapshot:<br />
<br />
# virsh blockpull --domain ''domain'' --path /vms/''domain''.snapshot1<br />
<br />
{{ic|domain.snapshot1}} becomes a new volume. After this is done, the original volume ({{ic|domain.img}}) and the snapshot metadata can be deleted. {{ic|virsh blockcommit}} would work in the opposite direction to {{ic|blockpull}}, but it seems to be currently under development (including the {{ic|snapshot-revert}} feature), scheduled to be released sometime next year.<br />
<br />
=== Other management ===<br />
<br />
Connect to non-default hypervisor:<br />
<br />
$ virsh --connect xen:///<br />
virsh # uri<br />
xen:///<br />
<br />
Connect to the QEMU hypervisor over SSH; and the same with logging:<br />
<br />
$ virsh --connect qemu+ssh://''username''@''host''/system<br />
$ LIBVIRT_DEBUG=1 virsh --connect qemu+ssh://''username''@''host''/system<br />
<br />
Connect a graphic console over SSH:<br />
<br />
$ virt-viewer --connect qemu+ssh://''username''@''host''/system ''domain''<br />
$ virt-manager --connect qemu+ssh://''username''@''host''/system ''domain''<br />
<br />
{{Note|If you are having problems connecting to a remote RHEL server (or anything other than Arch, really), try the two workarounds mentioned in {{bug|30748}} and {{bug|22068}}.}}<br />
<br />
Connect to the VirtualBox hypervisor (''VirtualBox support in libvirt is not stable yet and may cause libvirtd to crash''):<br />
<br />
$ virsh --connect vbox:///system<br />
<br />
Network configurations:<br />
<br />
$ virsh -c qemu:///system net-list --all<br />
$ virsh -c qemu:///system net-dumpxml default<br />
<br />
== Python connectivity code ==<br />
<br />
The {{Pkg|libvirt-python}} package provides a {{Pkg|python2}} API in {{ic|/usr/lib/python2.7/site-packages/libvirt.py}}.<br />
<br />
General examples are given in {{ic|/usr/share/doc/libvirt-python-''your_libvirt_version''/examples/}}<br />
<br />
Unofficial example using {{Pkg|qemu}} and {{Pkg|openssh}}:<br />
<br />
#! /usr/bin/env python2<br />
# -*- coding: utf-8 -*-<br />
import libvirt<br />
if __name__ == "__main__":<br />
    <nowiki>conn = libvirt.open("qemu+ssh://xxx/system")</nowiki><br />
    print "Trying to find node on xxx"<br />
    domains = conn.listDomainsID()<br />
    for domainID in domains:<br />
        domConnect = conn.lookupByID(domainID)<br />
        if domConnect.name() == 'xxx-node':<br />
            print "Found shared node on xxx with ID " + str(domainID)<br />
            domServ = domConnect<br />
            break<br />
<br />
== See also ==<br />
<br />
* [http://libvirt.org/drvqemu.html Official libvirt web site]<br />
* [https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Deployment_and_Administration_Guide/index.html Red Hat Virtualization Deployment and Administration Guide]<br />
* [https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Virtualization_Tuning_and_Optimization_Guide/index.html Red Hat Virtualization Tuning and Optimization Guide]<br />
* [http://docs.slackware.com/howtos:general_admin:kvm_libvirt Slackware KVM and libvirt]<br />
* [http://www-01.ibm.com/support/knowledgecenter/linuxonibm/liaat/liaatkvm.htm IBM KVM]<br />
* [https://jamielinux.com/docs/libvirt-networking-handbook/ libvirt Networking Handbook]</div>IronOrion