KVM

Related articles

  • Category:Hypervisors
  • Libvirt

KVM, Kernel-based Virtual Machine, is a hypervisor built into the Linux kernel. It is similar to Xen in purpose but much simpler to get running. Unlike native QEMU, which uses emulation, KVM is a special operating mode of QEMU that uses CPU extensions (HVM) for virtualization via a kernel module. KVM originally supported the x86 and x86_64 architectures and has since been ported to S/390, PowerPC, IA-64, and ARM (since Linux 3.9).

Using KVM, one can run multiple virtual machines running unmodified GNU/Linux, Windows, or any other operating system. (See Guest Support Status for more information.) Each virtual machine has private virtualized hardware: a network card, disk, graphics card, etc.

Differences between KVM and Xen, VMware, or QEMU can be found at the KVM FAQ.

This article does not cover features common to multiple emulators using KVM as a backend. See the related articles for such information.

Checking support for KVM

Hardware support

KVM requires that the virtual machine host's processor has virtualization support (named VT-x for Intel processors and AMD-V for AMD processors). You can check whether your processor supports hardware virtualization with the following command:

$ lscpu

Your processor supports hardware virtualization only if the output contains a Virtualization line (reporting VT-x or AMD-V).
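For example, on an Intel processor with VT-x enabled the output includes a line like the following (the exact set of fields varies between machines):

Virtualization:        VT-x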

You can also run:

$ egrep --color=auto 'vmx|svm|0xc0f' /proc/cpuinfo

If nothing is displayed after running that command, then your processor does not support hardware virtualization, and you will not be able to use KVM.

Note: You may need to enable virtualization support in your BIOS.

Kernel support

Arch Linux kernels provide the appropriate kernel modules to support KVM and VIRTIO.

KVM modules

You can check if necessary modules (kvm and one of kvm_amd, kvm_intel) are available in your kernel with the following command (assuming your kernel is built with CONFIG_IKCONFIG_PROC):

$ zgrep CONFIG_KVM /proc/config.gz

If an option is not set to y or m, the corresponding module is not available.
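With the stock Arch kernel the output should look roughly like the following; the exact list of CONFIG_KVM_* options depends on the kernel version:

CONFIG_KVM=m
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m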

Para-virtualized devices

Para-virtualization provides a fast and efficient means of communication for guests to use devices on the host machine. KVM provides para-virtualized devices to virtual machines using the Virtio API as a layer between the hypervisor and guest.

All virtio devices have two parts: the host device and the guest driver.

VIRTIO modules

Use the following command to check if needed modules are available:

$ zgrep VIRTIO /proc/config.gz
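Expected output looks roughly like the following (only a few of the VIRTIO options are shown; whether each is y or m depends on the kernel configuration):

CONFIG_VIRTIO_BLK=m
CONFIG_VIRTIO_NET=m
CONFIG_VIRTIO_PCI=m
CONFIG_VIRTIO_BALLOON=m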

Loading kernel modules

First, check if the kernel modules are automatically loaded. This should be the case with recent versions of udev.

$ lsmod | grep kvm
$ lsmod | grep virtio

In case the above commands return nothing, you need to load the kvm module and, depending on the manufacturer of the VM host's CPU, either kvm_intel or kvm_amd; see Kernel modules#Loading.
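For example, on an Intel host the KVM modules can be loaded manually as follows (kvm is pulled in automatically as a dependency; use kvm_amd instead of kvm_intel on an AMD CPU, and note that the VIRTIO modules are only needed inside a guest that uses para-virtualized devices):

# modprobe kvm_intel
# modprobe -a virtio_pci virtio_net virtio_blk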

Tip: If modprobing kvm_intel or kvm_amd fails but modprobing kvm succeeds (and lscpu claims that hardware acceleration is supported), check your BIOS settings. Some vendors (especially laptop vendors) disable these processor extensions by default. To determine whether there is no hardware support or whether the extensions are merely disabled in the BIOS, check the output of dmesg after the failed modprobe.

List of para-virtualized devices

  • network device (virtio-net)
  • block device (virtio-blk)
  • controller device (virtio-scsi)
  • serial device (virtio-serial)
  • balloon device (virtio-balloon)
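As a rough illustration only (the disk image path and memory size below are placeholders, not values from this article), a guest can be started with a para-virtualized disk and network card like this; the guest then needs the virtio_blk and virtio_net drivers to see the devices:

$ qemu-system-x86_64 -enable-kvm -m 1024 -drive file=disk_image.img,if=virtio -net nic,model=virtio -net user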

How to use KVM

See the main article: QEMU.

Tips and tricks

Note: See QEMU#Tips and tricks and QEMU#Troubleshooting for general tips and tricks.

Nested virtualization

This article or section needs expansion. (Reason: is this also possible with kvm_amd?)

Nested virtualization enables existing virtual machines to be run on third-party hypervisors and on other clouds without any modifications to the original virtual machines or their networking.

On the host, enable the nested feature for kvm_intel:

# modprobe -r kvm_intel
# modprobe kvm_intel nested=1

To make it permanent (see Kernel modules#Setting module options):

/etc/modprobe.d/modprobe.conf
options kvm_intel nested=1

Verify that the feature is activated:

$ systool -m kvm_intel -v | grep nested
    nested              = "Y"
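systool is provided by the sysfsutils package. Alternatively, the module parameter can be read directly from sysfs (depending on the kernel version this prints Y or 1):

$ cat /sys/module/kvm_intel/parameters/nested
Y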

Run the guest VM with the following command:

$ qemu-system-x86_64 -enable-kvm -cpu host

Boot the VM and check that the vmx flag is present:

$ egrep --color=auto 'vmx|svm' /proc/cpuinfo

Live snapshots

This article or section is a candidate for merging with Libvirt. (Note: virsh is part of libvirt.)

A feature called external snapshotting allows one to take a live snapshot of a virtual machine without turning it off. Currently it only works with qcow2 and raw file based images.

Once a snapshot is created, KVM attaches the new snapshot image to the virtual machine and uses it as its new block device, storing any new data directly to it, while the original disk image is taken offline and can easily be copied or backed up. Afterwards you can merge the snapshot image back into the original image, again without shutting down the virtual machine.

Here is how it works.

Current running VM:

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 3     archey                         running

List all its current images:

# virsh domblklist archey
Target     Source
------------------------------------------------
vda        /vms/archey.img

Notice the image file properties:

# qemu-img info /vms/archey.img
image: /vms/archey.img
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 2.1G
cluster_size: 65536

Create a disk-only snapshot. The switch --atomic makes sure that the VM is not modified if snapshot creation fails:

# virsh snapshot-create-as archey snapshot1 --disk-only --atomic

List the snapshots if you want to see them:

# virsh snapshot-list archey
Name                 Creation Time             State
------------------------------------------------------------
snapshot1           2012-10-21 17:12:57 -0700 disk-snapshot

Notice the new snapshot image created by virsh and its properties. It weighs just a few MiB and is linked to its original "backing image/chain":

# qemu-img info /vms/archey.snapshot1
image: /vms/archey.snapshot1
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 18M
cluster_size: 65536
backing file: /vms/archey.img

At this point, you can go ahead and copy the original image with cp --sparse=always or rsync -S. Then you can merge the original image back into the snapshot:

# virsh blockpull --domain archey --path /vms/archey.snapshot1

Now that you have pulled the blocks out of the original image, the file /vms/archey.snapshot1 becomes the new disk image; check its disk size to see what that means. After that is done, the original image /vms/archey.img and the snapshot metadata can be deleted safely. virsh blockcommit works in the opposite direction of blockpull, but it was still under development in qemu-kvm 1.3 (as was the snapshot-revert feature).

This feature of KVM comes in handy for people who like to take frequent live backups without risking corruption of the file system.

Alternative Networking with SSH tunnels

This article or section is a candidate for merging with QEMU. (Note: this section is not KVM-specific; it applies generally to all QEMU VMs.)

Setting up bridged networking can be a bit of a hassle sometimes. If the sole purpose of the VM is experimentation, one strategy to connect the host and the guests is to use SSH tunneling.

The basic steps are as follows:

  • Set up an SSH server in the host OS
  • (optional) Create a designated user for the tunneling (e.g. tunneluser)
  • Install SSH in the VM
  • Set up authentication

See SSH for how to set it up, especially SSH#Forwarding other ports.

When using the default user network stack, the host is reachable at address 10.0.2.2.
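For example, from inside the guest you can check that the host's SSH server is reachable before setting up any tunnels (tunneluser is the designated user suggested above):

$ ssh tunneluser@10.0.2.2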

This article or section needs style improvements. (Reason: usage of /etc/rc.local is discouraged; this should be a proper systemd service file.)

If everything works and you can SSH into the host, simply add something like the following to your /etc/rc.local:

# Local SSH Server
echo "Starting SSH tunnel"
sudo -u vmuser ssh tunneluser@10.0.2.2 -N -R 2213:127.0.0.1:22 -f
# Random remote port (e.g. from another VM)
echo "Starting random tunnel"
sudo -u vmuser ssh tunneluser@10.0.2.2 -N -L 2345:127.0.0.1:2345 -f

In this example a tunnel is created to the SSH server of the VM and an arbitrary port of the host is pulled into the VM.

This is a fairly basic strategy for networking VMs, but it is very robust and should be sufficient most of the time.
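As the style note above points out, /etc/rc.local is discouraged; the same tunnel can instead be started by a systemd service inside the guest. The following unit is only a rough sketch under the assumptions of this example (user vmuser, tunnel user tunneluser, key-based authentication already set up); the file name and values are illustrative, not an established convention:

/etc/systemd/system/ssh-tunnel.service

[Unit]
Description=Reverse SSH tunnel from the guest to the host
After=network.target

[Service]
User=vmuser
# -N: no remote command; -R: expose the guest's SSH port 22 on host port 2213
ExecStart=/usr/bin/ssh -N -R 2213:127.0.0.1:22 tunneluser@10.0.2.2
Restart=on-failure

[Install]
WantedBy=multi-user.target

Enable it inside the guest with systemctl enable ssh-tunnel.service.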

The factual accuracy of this section is disputed. (Reason: isn't the QEMU option -redir tcp:2222:10.0.2.15:22 enough? It redirects port 2222 on the host to 10.0.2.15:22, where 10.0.2.15 is the guest's IP address.)

Enabling huge pages

The factual accuracy of this section is disputed. (Reason: with systemd, hugetlbfs is mounted on /dev/hugepages by default, but with mode 0755 and root's uid and gid.)

This article or section is a candidate for merging with QEMU. (Note: qemu-kvm no longer exists as all of its features have been merged into qemu; after the above issue is cleared, this section could be merged into QEMU.)

You may also want to enable hugepages to improve the performance of your virtual machine. With an up to date Arch Linux and a running KVM you probably already have everything you need. Check if you have the directory /dev/hugepages; if not, create it. Now we need the right permissions to use this directory. Check that the kvm group exists and that you are a member of it (this should already be the case if you have a running virtual machine):

$ getent group kvm
kvm:x:78:USERNAMES

Add to your /etc/fstab:

hugetlbfs       /dev/hugepages  hugetlbfs       mode=1770,gid=78        0 0

Of course the gid must match that of the kvm group. The mode of 1770 allows anyone in the group to create files but not unlink or rename each other's files. Make sure /dev/hugepages is mounted properly:

# umount /dev/hugepages
# mount /dev/hugepages
$ mount | grep huge
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,mode=1770,gid=78)

Now you can calculate how many hugepages you need. Check how large your hugepages are:

$ grep Hugepagesize /proc/meminfo

Normally that should be 2048 kB ≙ 2 MB. Say you want to run your virtual machine with 1024 MB of RAM: 1024 / 2 = 512 pages. Add a few extra as headroom and round up to 550. Now tell your machine how many hugepages you want:

# echo 550 > /proc/sys/vm/nr_hugepages

If you had enough free memory you should see:

$ grep HugePages_Total /proc/meminfo 
HugePages_Total:   550

If the number is smaller, close some applications or start your virtual machine with less memory (number of pages × 2 MB):

$ qemu-system-x86_64 -enable-kvm -m 1024 -mem-path /dev/hugepages -hda <disk_image> [...]

Note the -mem-path parameter: it makes QEMU allocate the guest's memory from the hugepage mount.

Now you can check, while your virtual machine is running, how many pages are used:

$ grep HugePages /proc/meminfo 
HugePages_Total:     550
HugePages_Free:       48
HugePages_Rsvd:        6
HugePages_Surp:        0

Now that everything seems to work, you can enable hugepages by default if you like. Add to /etc/sysctl.d/40-hugepage.conf:

vm.nr_hugepages = 550
