KVM (Kernel-based Virtual Machine) is a hypervisor built into the Linux kernel. It is similar to Xen in purpose but much simpler to get running. Unlike native QEMU, which uses emulation, KVM is a special operating mode of QEMU that uses CPU extensions (HVM) for virtualization via a kernel module.
Using KVM, one can run multiple virtual machines running unmodified GNU/Linux, Windows, or any other operating system. (See Guest Support Status for more information.) Each virtual machine has private virtualized hardware: a network card, disk, graphics card, etc.
This article does not cover features common to multiple emulators using KVM as a backend. You should see related articles for such information.
- 1 Checking support for KVM
- 2 Para-virtualized devices
- 3 How to use KVM
- 4 Tips and tricks
- 5 See also
Checking support for KVM
KVM requires that the virtual machine host's processor has virtualization support (named VT-x for Intel processors and AMD-V for AMD processors). You can check whether your processor supports hardware virtualization with the following command:
$ lscpu
Your processor supports virtualization only if there is a line telling you so.
You can also run:
$ egrep --color=auto 'vmx|svm|0xc0f' /proc/cpuinfo
If nothing is displayed after running that command, then your processor does not support hardware virtualization, and you will not be able to use KVM.
Arch Linux kernels provide the appropriate kernel modules to support KVM and VIRTIO.
You can check if the necessary modules (kvm and one of kvm_amd or kvm_intel) are available in your kernel with the following command (assuming your kernel is built with CONFIG_IKCONFIG_PROC):
$ zgrep CONFIG_KVM /proc/config.gz
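On a stock Arch kernel the output should look roughly like the following; the exact set of CONFIG_KVM_* options varies between kernel versions:
CONFIG_KVM=m
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m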
The module is available only if it is set to either y or m.
Para-virtualized devices
Para-virtualization provides a fast and efficient means of communication for guests to use devices on the host machine. KVM provides para-virtualized devices to virtual machines using the Virtio API as a layer between the hypervisor and guest.
All virtio devices have two parts: the host device and the guest driver.
Use the following command to check if needed modules are available:
$ zgrep VIRTIO /proc/config.gz
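Again as a rough sketch only, on a stock Arch kernel you should see a number of options set to m or y, for example:
CONFIG_VIRTIO_BLK=m
CONFIG_VIRTIO_NET=m
CONFIG_VIRTIO_PCI=m
CONFIG_VIRTIO_BALLOON=m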
Loading kernel modules
First, check if the kernel modules are automatically loaded. This should be the case with recent versions of udev.
$ lsmod | grep kvm
$ lsmod | grep virtio
In case the above commands return nothing, you need to load the kernel modules manually.
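For example, on an Intel host (use kvm_amd instead on an AMD host; loading kvm_intel pulls in kvm automatically as a dependency):
# modprobe kvm_intel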
List of para-virtualized devices
- network device (virtio-net)
- block device (virtio-blk)
- controller device (virtio-scsi)
- serial device (virtio-serial)
- balloon device (virtio-balloon)
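As an illustrative sketch only (the disk image path, memory size and network options are placeholders; see QEMU for the full syntax), a guest using a virtio disk and a virtio network device could be started like this:
$ qemu-system-x86_64 -enable-kvm -m 1024 -drive file=<disk_image>,if=virtio,format=raw -netdev user,id=net0 -device virtio-net-pci,netdev=net0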
How to use KVM
See the main article: QEMU.
Tips and tricks
Nested virtualization
Nested virtualization enables existing virtual machines to be run on third-party hypervisors and on other clouds without any modifications to the original virtual machines or their networking.
On the host, enable the nested feature for kvm_intel:
# modprobe -r kvm_intel
# modprobe kvm_intel nested=1
To make it permanent, add the following to a file in /etc/modprobe.d/, for example /etc/modprobe.d/kvm_intel.conf (see Kernel modules#Setting module options):
options kvm_intel nested=1
Verify that the feature is activated:
$ systool -m kvm_intel -v | grep nested
nested = "Y"
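If systool (from the sysfsutils package) is not installed, the same information can be read directly from sysfs:
$ cat /sys/module/kvm_intel/parameters/nested
Y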
Run the guest VM with the following command:
$ qemu-system-x86_64 -enable-kvm -cpu host
Boot the VM and check that the vmx flag is present:
$ egrep --color=auto 'vmx|svm' /proc/cpuinfo
Alternative networking with SSH tunnels
Setting up bridged networking can be a bit of a hassle sometimes. If the sole purpose of the VM is experimentation, one strategy to connect the host and the guests is to use SSH tunneling.
The basic steps are as follows:
- Set up an SSH server in the host OS
- (optional) Create a designated user for the tunneling (e.g. tunneluser)
- Install SSH in the VM
- Set up authentication
When using the default user network stack, the host is reachable at address 10.0.2.2.
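For the authentication step, one possible sketch (assuming key-based login to the optional tunneluser account, run from inside the guest) is:
$ ssh-keygen -t ed25519
$ ssh-copy-id tunneluser@10.0.2.2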
If everything works and you can SSH into the host, simply add something like the following to one of your startup scripts:
# Local SSH Server
echo "Starting SSH tunnel"
sudo -u vmuser ssh email@example.com -N -R 2213:127.0.0.1:22 -f
# Random remote port (e.g. from another VM)
echo "Starting random tunnel"
sudo -u vmuser ssh firstname.lastname@example.org -N -L 2345:127.0.0.1:2345 -f
In this example a tunnel is created to the SSH server of the VM and an arbitrary port of the host is pulled into the VM.
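With the first tunnel in place, the guest's SSH server can then be reached from the host roughly like this (the vmuser account inside the guest is an assumption here):
$ ssh -p 2213 vmuser@127.0.0.1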
This is quite a basic strategy for doing networking with VMs. However, it is very robust and should be sufficient most of the time.
Enabling huge pages
You may also want to enable hugepages to improve the performance of your virtual machine.
With an up-to-date Arch Linux and a running KVM you probably already have everything you need. Check if you have the directory /dev/hugepages. If not, create it.
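Creating it manually is a one-liner:
# mkdir /dev/hugepages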
Now we need the right permissions to use this directory.
Add the following line to your /etc/fstab:
hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=78 0 0
Of course the gid must match that of the kvm group. The mode of 1770 allows anyone in the group to create files but not unlink or rename each other's files. Make sure /dev/hugepages is mounted properly:
# umount /dev/hugepages
# mount /dev/hugepages
$ mount | grep huge
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,mode=1770,gid=78)
Now you can calculate how many hugepages you need. Check how large your hugepages are:
$ grep Hugepagesize /proc/meminfo
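On a typical x86_64 system the output will look like this:
Hugepagesize:       2048 kB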
Normally that should be 2048 kB ≙ 2 MB. Let's say you want to run your virtual machine with 1024 MB. 1024 / 2 = 512. Add a few extra so we can round this up to 550. Now tell your machine how many hugepages you want:
# echo 550 > /proc/sys/vm/nr_hugepages
If you had enough free memory you should see:
$ grep HugePages_Total /proc/meminfo
HugePages_Total:     550
If the number is smaller, close some applications or start your virtual machine with less memory (number_of_pages x 2):
$ qemu-system-x86_64 -enable-kvm -m 1024 -mem-path /dev/hugepages -hda <disk_image> [...]
Note the -mem-path parameter. This will make use of the hugepages.
Now you can check, while your virtual machine is running, how many pages are used:
$ grep HugePages /proc/meminfo
HugePages_Total:     550
HugePages_Free:       48
HugePages_Rsvd:        6
HugePages_Surp:        0
Now that everything seems to work you can enable hugepages by default if you like. Add the following to a file in /etc/sysctl.d/, for example /etc/sysctl.d/40-hugepage.conf:
vm.nr_hugepages = 550
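The setting is applied at boot; to apply it immediately without rebooting, you can for example run:
# sysctl -w vm.nr_hugepages=550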