KVM, the Kernel-based Virtual Machine, is a hypervisor built right into the Linux kernel. It is similar to Xen in purpose but much simpler to get running: to start using the hypervisor, just load the appropriate kvm kernel modules. As with Xen's full virtualization, in order for KVM to work, you must have a processor that supports Intel's VT-x extensions or AMD's AMD-V extensions.
Using KVM, one can run multiple virtual machines running unmodified GNU/Linux, Windows, or any other operating system. (See Guest Support Status). Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc. See KVM Howto.
- 1 Get the packages
- 2 Set up kernel modules
- 3 How to use KVM
- 4 Paravirtualized guests (virtio)
- 5 Enabling KSM
- 6 Enable HugePages
- 7 Bridged Networking
- 8 Tips and tricks
Get the packages
Arch Linux kernels provide the appropriate kernel modules to support KVM.
You can check if your kernel supports KVM with the following command (assuming your kernel is built with CONFIG_IKCONFIG_PROC):
$ zgrep KVM /proc/config.gz
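The output should contain lines something like the following, where y or m means the support is built in or available as a module (the exact option set varies by kernel version):
CONFIG_HAVE_KVM=y
CONFIG_KVM=m
CONFIG_KVM_INTEL=m
CONFIG_KVM_AMD=m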
KVM requires that the virtual machine host's processor has virtualization support (named VT-x for Intel processors and AMD-V for AMD processors). You can check whether your processor supports hardware virtualization with the following command:
$ lscpu
Your processor supports virtualization only if there is a line telling you so.
You can also run:
$ grep -E "(vmx|svm)" --color=always /proc/cpuinfo
If nothing is displayed after running that command, then your processor does not support hardware virtualization, and you will not be able to use KVM.
Set up kernel modules
First, you need to add your user account to the kvm group to be able to use the /dev/kvm device:
# gpasswd -a <Your_Login_Name> kvm
Secondly, you have to choose one of the following depending on the manufacturer of the VM host's CPU.
- If you have Intel's VT-x extensions, modprobe the kvm_intel module:
# modprobe kvm_intel
- If you have AMD's AMD-V (code name "Pacifica") extensions, modprobe the kvm_amd module:
# modprobe kvm_amd
If modprobing kvm_intel or kvm_amd fails but modprobing kvm succeeds (and lscpu claims that hardware acceleration is supported), check your BIOS settings. Some vendors (especially laptop vendors) disable these processor extensions by default. To determine whether there is no hardware support or whether the extensions are merely disabled in the BIOS, check the output of dmesg after the failed modprobe.
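For example, a check along these lines (the message shown is illustrative and varies by kernel version):
$ dmesg | grep -i kvm
kvm: disabled by bios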
If you want these modules to persist, see Kernel_modules#Loading.
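For example, with systemd, modules can be listed in a file under /etc/modules-load.d/ (a minimal sketch; the kvm.conf file name is an arbitrary choice, and an Intel CPU is assumed). In /etc/modules-load.d/kvm.conf:
kvm_intel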
How to use KVM
See the main article: QEMU.
Paravirtualized guests (virtio)
KVM offers guests the ability to use paravirtualized block and network devices, which leads to better performance and lower overhead.
For Windows, a paravirtualized network driver can be obtained here.
FreeBSD has the ability to use virtio drivers since 10.0 (unreleased). A backport of the drivers is available in the port emulators/virtio-kmod for FreeBSD 8.3 and 9.0.
A virtio block device requires the option -drive instead of the simple -hda:
$ qemu-kvm -boot order=c -drive file=drive.img,if=virtio
Almost the same goes for the network:
$ qemu-kvm -net nic,model=virtio
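Putting both together, a complete invocation might look like this (a sketch; the drive.img file name and user-mode networking via -net user are assumptions):
$ qemu-kvm -boot order=c -drive file=drive.img,if=virtio -net nic,model=virtio -net user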
Preparing an (Arch) Linux guest
To use virtio devices after an Arch Linux guest has been installed, the following modules can be loaded in the guest: virtio, virtio_pci, virtio_blk, virtio_net, and virtio_ring (for 32-bit guests, the specific "virtio" module is not necessary).
If you want to boot from a virtio disk, the initial ramdisk must be rebuilt. Add the appropriate modules in
/etc/mkinitcpio.conf like this:
MODULES="virtio_blk virtio_pci virtio_net"
and rebuild the initial ramdisk:
# mkinitcpio -p linux
Virtio disks are recognized with the prefix v (e.g. vda, vdb, etc.); therefore, changes must be made in at least /etc/fstab and /boot/grub/grub.cfg when booting from a virtio disk.
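For example, a root filesystem entry in /etc/fstab would change along these lines (device names and filesystem type are illustrative):
# before: /dev/sda1   /   ext4   defaults   0 1
/dev/vda1   /   ext4   defaults   0 1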
Further information on paravirtualization with KVM can be found here.
Preparing a Windows guest
Preparing a Windows guest for running with a virtio disk driver is a bit tricky.
In your KVM host (running Arch Linux), download the virtio disk driver from the Fedora repository.
Now you need to create a new disk image, which will force Windows to search for the driver. To do it, stop the virtual machine if it is running and issue the following command:
$ qemu-img create -f qcow2 fake.img 1G
Run the original Windows guest (still in the IDE mode). Add the fake disk and a CD-ROM with the driver.
$ qemu-kvm -drive file=windows.img,if=ide -m 512 -drive file=fake.img,if=virtio -cdrom virtio-win-0.1-30.iso -vga std
If you have problems booting the windows.img image, or the virtio CD drivers are not detected, use this command instead:
$ qemu-kvm -drive file=fake.img,if=virtio -m 512 -boot d -drive file=windows.img,media=cdrom -drive file=virtio-win-0.1-30.iso,media=cdrom
Windows will detect the fake disk and try to find a driver for it. If it fails, go to the Device Manager, locate the SCSI drive with an exclamation mark icon, click "Update driver" and select the virtual CD-ROM. Do not forget to mark the checkbox which says to search for directories recursively.
When the installation is successful, you can turn off the virtual machine and launch it again, now with the disk in virtio mode:
$ qemu-kvm -drive file=windows.img,if=virtio -m 512 -vga std
Preparing virtio network drivers is a bit easier: simply add the -net argument as explained above.
$ qemu-kvm -drive file=windows.img,if=virtio -m 512 -vga std -net nic,model=virtio -cdrom virtio-win-0.1-30.iso
Then install the virtio drivers from the disk you downloaded: go to the Device Manager, locate the network adapter with an exclamation mark icon, click "Update driver" and select the virtual CD-ROM. Do not forget to mark the checkbox which says to search for directories recursively.
Preparing a FreeBSD guest
Install the emulators/virtio-kmod port if you are using FreeBSD 8.3 or later, up until 10.0-CURRENT, where the drivers are included in the kernel. After installation, add the following to your /boot/loader.conf:
virtio_load="YES"
virtio_pci_load="YES"
virtio_blk_load="YES"
if_vtnet_load="YES"
virtio_balloon_load="YES"
Then modify your /etc/fstab by doing the following (BSD sed; this keeps a backup at /etc/fstab.bak):
# sed -i .bak "s/ad/vtbd/g" /etc/fstab
And verify that /etc/fstab is consistent. If anything goes wrong, just boot into a rescue CD and copy /etc/fstab.bak back to /etc/fstab.
Enabling KSM
Kernel Samepage Merging (KSM) is a feature of the Linux kernel introduced in the 2.6.32 kernel. KSM allows an application to register with the kernel to have its pages merged with those of other processes that also register to have their pages merged. For KVM, the KSM mechanism allows guest virtual machines to share pages with each other. In an environment where many of the guest operating systems are similar, this can result in significant memory savings.
There should be a /sys/kernel/mm/ksm/ directory containing several files. You can turn KSM on or off by echoing a 1 or a 0 (respectively) to /sys/kernel/mm/ksm/run:
# echo 1 > /sys/kernel/mm/ksm/run
Or set it up persistently by creating a systemd tmpfile, e.g. /etc/tmpfiles.d/ksm.conf:
w /sys/kernel/mm/ksm/run - - - - 1
If KSM is running, and there are pages to be merged (i.e. more than one similar VM is running), then /sys/kernel/mm/ksm/pages_shared should be non-zero. From the kernel documentation in Documentation/vm/ksm.txt:
The effectiveness of KSM and MADV_MERGEABLE is shown in /sys/kernel/mm/ksm/:
pages_shared - how many shared unswappable kernel pages KSM is using
pages_sharing - how many more sites are sharing them i.e. how much saved
pages_unshared - how many pages unique but repeatedly checked for merging
pages_volatile - how many pages changing too fast to be placed in a tree
full_scans - how many times all mergeable areas have been scanned
A high ratio of pages_sharing to pages_shared indicates good sharing, but a high ratio of pages_unshared to pages_sharing indicates wasted effort. pages_volatile embraces several different kinds of activity, but a high proportion there would also indicate poor use of madvise MADV_MERGEABLE.
An easy way to see how well KSM is performing is to simply print the contents of all the files in that directory.
# grep . /sys/kernel/mm/ksm/*
/sys/kernel/mm/ksm/full_scans:151
/sys/kernel/mm/ksm/max_kernel_pages:246793
/sys/kernel/mm/ksm/pages_shared:92112
/sys/kernel/mm/ksm/pages_sharing:131355
/sys/kernel/mm/ksm/pages_to_scan:100
/sys/kernel/mm/ksm/pages_unshared:123942
/sys/kernel/mm/ksm/pages_volatile:1182
/sys/kernel/mm/ksm/run:1
/sys/kernel/mm/ksm/sleep_millisecs:20
Enable HugePages
You may also want to enable hugepages to improve the performance of your virtual machine. With an up-to-date Arch Linux and a running KVM you probably already have everything you need. Check if you have the directory /dev/hugepages. If not, create it, as shown below.
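A minimal sketch:
# mkdir /dev/hugepages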
Now we need the right permissions to use this directory. Check that the kvm group exists and that you are a member of it. This should be the case if you already have a running virtual machine.
$ getent group kvm
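The output should look something like the following; myuser is a placeholder, and the group id (78 here) is what must match the fstab entry below:
kvm:x:78:myuser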
Add the following line to your /etc/fstab:
hugetlbfs /dev/hugepages hugetlbfs mode=1770,gid=78 0 0
Of course the gid must match that of the
kvm group. The mode of
1770 allows anyone in the group to create files but not unlink or rename each other's files. Make sure
/dev/hugepages is mounted properly:
# umount /dev/hugepages
# mount /dev/hugepages
$ mount | grep huge
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,mode=1770,gid=78)
Now you can calculate how many hugepages you need. Check how large your hugepages are:
$ cat /proc/meminfo | grep Hugepagesize
Normally that should be 2048 kB ≙ 2 MB. Let's say you want to run your virtual machine with 1024 MB. 1024 / 2 = 512. Add a few extra so we can round this up to 550. Now tell your machine how many hugepages you want:
# echo 550 > /proc/sys/vm/nr_hugepages
If you had enough free memory, you should see:
$ cat /proc/meminfo | grep HugePages_Total
HugePages_Total:     550
If the number is smaller, close some applications or start your virtual machine with less memory (number_of_pages x 2):
$ kvm -m 1024 -mem-path /dev/hugepages [-hda yourimage.img] [-your_other_options]
Note the -mem-path parameter; it is what makes the virtual machine use the hugepages.
You can check now, while your virtual machine is running, how many pages are used:
$ cat /proc/meminfo | grep HugePages
HugePages_Total:     550
HugePages_Free:       48
HugePages_Rsvd:        6
HugePages_Surp:        0
Now that everything seems to work, you can enable hugepages by default if you like. Add the following to your /etc/sysctl.conf:
vm.nr_hugepages = 550
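To apply the setting immediately without rebooting, reload the file:
# sysctl -p /etc/sysctl.conf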
Bridged Networking
Bridged networking is used when you want your VM to be on the same network as your host machine. This will allow it to get a static or DHCP IP address on your network, and you can then access it using that IP address from anywhere on your LAN. The preferred method for setting up bridged networking for KVM is to use the netcfg package. You will also need to install bridge-utils.
For more information, see: Netcfg Tips#Configuring a bridge for use with virtual machines (VMs)
You can follow this page to configure the bridge: Libvirt#Bridged Networking
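For illustration, a netcfg bridge profile might look like the following (a sketch only; the profile name, eth0 as the physical interface, and DHCP addressing are all assumptions; see the links above for authoritative instructions). In /etc/network.d/bridge:
CONNECTION="bridge"
DESCRIPTION="KVM bridge"
INTERFACE="br0"
BRIDGE_INTERFACES="eth0"
IP="dhcp"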
If you are using iptables, it is recommended for performance and security reasons to disable the firewall on the bridge:
# cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
EOF
# sysctl -p /etc/sysctl.conf
Alternatively, you can configure iptables to allow all traffic to be forwarded across the bridge by adding a rule like this:
-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
Tips and tricks
Live snapshots
A feature called external snapshotting allows one to take a live snapshot of a virtual machine without turning it off. Currently it only works with qcow2 and raw file based images.
Once a snapshot is created, KVM attaches the new snapshot image to the virtual machine as its new block device, storing any new data directly to it, while the original disk image is taken offline and can easily be copied or backed up. After that you can merge the snapshot image into the original image, again without shutting down your virtual machine.
Here's how it works.
List the currently running VMs:
# virsh list --all
 Id    Name                           State
----------------------------------------------------
 3     archey                         running
List all its current images
# virsh domblklist archey
Target     Source
------------------------------------------------
vda        /vms/archey.img
Notice the image file properties
# qemu-img info /vms/archey.img
image: /vms/archey.img
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 2.1G
cluster_size: 65536
Create a disk-only snapshot. The switch
--atomic makes sure that the VM is not modified if snapshot creation fails.
# virsh snapshot-create-as archey snapshot1 --disk-only --atomic
List the snapshots, if you want to see them:
# virsh snapshot-list archey
 Name                 Creation Time             State
------------------------------------------------------------
 snapshot1            2012-10-21 17:12:57 -0700 disk-snapshot
Notice the new snapshot image created by virsh and its image properties. It weighs just a few MiBs and is linked to its original "backing image/chain".
# qemu-img info /vms/archey.snapshot1
image: /vms/archey.snapshot1
file format: qcow2
virtual size: 50G (53687091200 bytes)
disk size: 18M
cluster_size: 65536
backing file: /vms/archey.img
At this point, you can go ahead and copy the original image with cp --sparse=true or rsync -S.
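For example (the backup destination is an assumption):
$ cp --sparse=true /vms/archey.img /vms/backup/archey.img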
Then you can merge the original image back into the snapshot.
# virsh blockpull --domain archey --path /vms/archey.snapshot1
Now that you have pulled the blocks out of original image, the file
/vms/archey.snapshot1 becomes the new disk image. Check its disk size to see what it means. After that is done, the original image
/vms/archey.img and the snapshot metadata can be deleted safely. The virsh blockcommit command works in the opposite direction of blockpull, but it seems to be currently under development in qemu-kvm 1.3 (including the snapshot-revert feature), scheduled to be released sometime next year.
This new feature of KVM will certainly come in handy for people who like to take frequent live backups without risking corruption of the file system.
Poor Man's Networking
Setting up bridged networking can be a bit of a hassle sometimes. If the sole purpose of the VM is experimentation, one strategy to connect the host and the guests is to use SSH tunneling.
The basic steps are as follows:
- Set up an SSH server in the host OS
- (optional) Create a designated user for the tunneling (e.g. tunneluser)
- Install SSH in the VM
- Set up authentication
When using the default user network stack, the host is reachable at address 10.0.2.2.
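From inside the guest, you can verify that this works with an ordinary SSH login (tunneluser is the hypothetical designated user from the list above):
$ ssh tunneluser@10.0.2.2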
If everything works and you can SSH into the host, simply add something like the following to your startup script (e.g. rc.local):
# Local SSH Server
echo "Starting SSH tunnel"
sudo -u vmuser ssh firstname.lastname@example.org -N -R 2213:127.0.0.1:22 -f
# Random remote port (e.g. from another VM)
echo "Starting random tunnel"
sudo -u vmuser ssh email@example.com -N -L 2345:127.0.0.1:2345 -f
In this example a tunnel is created to the SSH server of the VM and an arbitrary port of the host is pulled into the VM.
This is quite a basic strategy for doing networking with VMs. However, it is very robust and should be quite sufficient most of the time.
Nested virtualization
Enable the nested feature for kvm_intel:
# modprobe -r kvm_intel
# modprobe kvm_intel nested=1
Verify that the feature is activated:
# systool -m kvm_intel -v | grep nested nested = "Y"
Create a wrapper around qemu-kvm:
# cat /usr/bin/qemu-kvm-nested
#!/bin/bash
/usr/bin/qemu-system-x86_64 -cpu host "$@"
# chmod a+x /usr/bin/qemu-kvm-nested
# ls -la /usr/bin/qemu-kvm
lrwxrwxrwx 1 root root 18 29 oct. 15:38 /usr/bin/qemu-kvm -> qemu-system-x86_64
# ln -sf /usr/bin/qemu-kvm-nested /usr/bin/qemu-kvm
# ls -la /usr/bin/qemu-kvm
lrwxrwxrwx 1 root root 24 12 nov. 13:09 /usr/bin/qemu-kvm -> /usr/bin/qemu-kvm-nested
Boot a VM and check that the vmx flag is present:
$ grep vmx /proc/cpuinfo