KVM

KVM (Kernel-based Virtual Machine) is a hypervisor built into the mainline Linux kernel since version 2.6.20. It is similar to Xen in purpose but much simpler to get running: to start using the hypervisor, just load the appropriate kvm kernel modules. As with Xen's full virtualization, KVM requires a processor that supports Intel's VT-x or AMD's AMD-V extensions.

Using KVM, one can run multiple virtual machines with unmodified GNU/Linux, Windows, or any other operating system (see Guest Support Status). Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc. See the KVM Howto for details.

Differences among KVM, Xen, VMware, and QEMU can be found at the KVM FAQ.

Get the packages

Arch Linux kernels >= 2.6.22 provide the appropriate kernel modules to support KVM. You can check if your kernel supports KVM with the following command:

modprobe -l 'kvm*'

KVM requires that the virtual machine host's processor has virtualization support. You can check whether your processor supports hardware virtualization with the following command:

grep -E "(vmx|svm)" --color=always /proc/cpuinfo

If nothing is displayed after running that command, then your processor does not support hardware virtualization, and you will not be able to use QEMU-KVM.

KVM also requires a modified QEMU to launch and manage virtual machines. You can choose one of the following according to your needs:

1. The qemu-kvm package is available in the official repositories (recommended).

2. If you also need to use QEMU, you can install qemu >= 0.9.0 instead, which conflicts with the qemu-kvm package. Recent versions of qemu can take advantage of KVM directly when started with the -enable-kvm option.
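
For example (a sketch only; the image name is a placeholder), an existing guest image can be started with KVM acceleration through the plain qemu binary:

qemu -enable-kvm -hda <Image_Name> -m 512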

Setup kernel modules

First, you need to add your user account into the kvm group to use the /dev/kvm device.

gpasswd -a <Your_Login_Name> kvm

Secondly, you have to choose one of the following depending on the manufacturer of your CPU.

1. If you have Intel's VT-x extensions, modprobe the kvm and kvm-intel modules.

modprobe kvm
modprobe kvm-intel

2. If you have AMD's AMD-V (code name "Pacifica") extensions, modprobe the kvm and kvm-amd modules.

modprobe kvm
modprobe kvm-amd

If modprobing kvm succeeds, but modprobing kvm-intel or kvm-amd fails (even though /proc/cpuinfo claims that hardware virtualization is supported), check your BIOS settings. Some vendors (especially laptop vendors) disable these processor extensions by default. To determine whether hardware support is genuinely missing or merely disabled in the BIOS, check the output of dmesg after the failed modprobe.
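
For example (the exact message text may vary between kernel versions):

dmesg | grep -i kvm

A message along the lines of "kvm: disabled by bios" indicates that the extensions must first be enabled in the BIOS setup.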

If you want these modules to persist, add them to the MODULES array in /etc/rc.conf or /etc/mkinitcpio.conf.
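
As a sketch, on an Intel system the relevant line in /etc/rc.conf might look like this (keep any modules you already list there):

MODULES=(kvm kvm-intel)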

How to use KVM

  1. Create a guest OS image:
    $ qemu-img create -f qcow2 <Image_Name> <size> 
  2. Install the guest OS:
    A CD/DVD image (ISO file) can be used for the installation.
    $ qemu-kvm -hda <Image_Name> -m 512 -cdrom /path/to/the/ISO/image -boot d -vga std 
  3. Run the system:
    $ qemu-kvm -hda <Image_Name> -m 512 -vga std
Note: You may want to assign multiple CPUs to the guest by using -smp X (where X is the number of CPUs). The maximum number of CPUs that can be assigned to one guest is 16.
Note: The default amount of main memory assigned to KVM guests is 128 MB. If that is not sufficient, add the -m argument with the desired amount of memory in megabytes (e.g. -m 1024). Also note that recent Windows operating systems (tested with Windows Vista and Windows 7) require the qcow2 disk image format. Other disk image formats may give a 0x80070057 error during the installation.
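
Putting these options together, a guest with 2 CPUs and 1 GB of RAM could be created and installed as follows (image and ISO names are placeholders):

$ qemu-img create -f qcow2 archlinux.qcow2 10G
$ qemu-kvm -hda archlinux.qcow2 -m 1024 -smp 2 -cdrom archlinux.iso -boot d -vga std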

See QEMU for more information, and the Using the Kernel-based Virtual Machine section.

Paravirtualized guests (virtio)

KVM offers guests the ability to use paravirtualized block and network devices, which leads to better performance and lower overhead. Linux has had this ability with its virtio-modules since kernel 2.6.25. For Windows, a paravirtualized network driver can be obtained here: [1]

A virtio block device requires the -drive option (instead of the simpler -hd*) together with if=virtio:

$ qemu-kvm -drive file=drive.img,if=virtio,boot=on

Note: boot=on is absolutely required when you want to boot from the virtio disk; unlike with -hd*, there is no auto-detection.

Almost the same goes for the network:

$ qemu-kvm -net nic,model=virtio
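
Combining the two, a guest using virtio for both disk and network could be started like this (a minimal sketch assuming the default user-mode network stack; the image name is a placeholder):

$ qemu-kvm -m 512 -drive file=drive.img,if=virtio,boot=on -net nic,model=virtio -net user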

Preparing an (Arch) Linux guest

Note: The Arch installer's setup scripts do not handle vd* disk devices correctly and require additional steps, as detailed in this post: https://bbs.archlinux.org/viewtopic.php?pid=1042283

To use virtio devices after an Arch Linux guest has been installed, the following modules can be loaded in the guest: virtio, virtio_pci, virtio_blk, virtio_net, and virtio_ring (for 32-bit guests, the specific "virtio" module is not necessary).

If you want to boot from a virtio disk, the initial ramdisk must be rebuilt. Add the appropriate modules in /etc/mkinitcpio.conf like this:

MODULES="virtio_blk virtio_pci virtio_net"

and rebuild the initial ramdisk:

# mkinitcpio -p linux

Virtio disks are recognized with the prefix vd (e.g. vda, vdb, etc.); therefore, changes must be made in at least /etc/fstab and /boot/grub/menu.lst when booting from a virtio disk. When using grub-pc, which references disks by UUID, nothing has to be done.
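
As a sketch, assuming a single root partition that used to be /dev/sda1 (the filesystem type is only an example), the relevant entries would change like this:

# /etc/fstab
/dev/vda1   /   ext4   defaults   0   1

# /boot/grub/menu.lst
kernel /boot/vmlinuz-linux root=/dev/vda1 ro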

Edit or create /boot/grub/device.map:

(hd0) /dev/vda
Note: The following may be outdated since a new official installation ISO has been released (2011.08.19).

To enable virtio at Arch Linux installation time, manual GRUB installation is required (for arch-release-media 2010.05). Although AIF correctly detects the virtio disks and sets up the right prefixes, the /boot/grub/device.map file must be created before configuring the bootloader.

So when installing Arch Linux, you can install GRUB by switching to another virtual terminal and running the following commands.

# grub
> device (hd0) /dev/vda
> root (hd0,0)
> setup (hd0)
> quit
Note: (hd0,0) numbering may change depending on your configuration. Reference: http://lists.mandriva.com/bugs/2009-08/msg03424.php

Once you have installed GRUB, switch back to the installer's main terminal.

Further information on paravirtualization with KVM can be found here: [2], and in the corresponding section of the German qemu book: [3]

Preparing a Windows guest

Preparing a Windows guest for running with a virtio disk driver is a bit tricky.

In your KVM host (running Arch Linux), download the virtio disk driver from the Fedora repository.

Now you need to create a new disk image, which will force Windows to search for the driver. To do this, stop the virtual machine if it is running and issue the following command:

qemu-img create -f qcow2 fake.img 1G

Run the original Windows guest (still in IDE mode), adding the fake disk and a CD-ROM with the driver:

qemu-kvm -drive file=windows.img,if=ide,boot=on -m 512 -drive file=fake.img,if=virtio -cdrom virtio-win-0.1-15.iso -vga std

Windows will detect the fake disk and try to find a driver for it. If it fails, open Device Manager, locate the SCSI drive marked with an exclamation icon, click "Update driver" and browse to the appropriate directory on the virtual CD-ROM.

When the installation is successful, you can turn off the virtual machine and launch it again, now with the virtio driver.

qemu-kvm -drive file=windows.img,if=virtio,boot=on -m 512 -vga std
Note: If you encounter the Blue Screen of Death, make sure you did not forget the -m parameter.

Resizing the image

Warning: resizing an image containing an NTFS boot filesystem could make the VM installed on it unbootable. One solution (tricky and for expert users only) is shown here, along with a detailed explanation of the problem: http://tjworld.net/wiki/Howto/ResizeQemuDiskImages

Up-to-date way

Since qemu version 0.13.0, the qemu-img executable has a resize option, which makes it possible to resize a qcow2 image directly without converting it to raw first. For example, this command will increase the space of my_image.qcow2 by 10 gigabytes:

qemu-img resize my_image.qcow2 +10G
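
The new virtual size can then be verified with:

qemu-img info my_image.qcow2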

Old way

It is possible to increase the size of a qcow2 image later, at least with ext3. Convert it to a raw image, expand its size with dd, convert it back to qcow2, then boot the VM from a live/rescue ISO to replace the partition with a larger one, run fsck, and resize the filesystem:

$ qemu-img convert -O raw image.qcow2 image.img
$ dd if=/dev/zero of=image.img bs=1G count=0 seek=[NUMBER_OF_GB]
$ qemu-img convert -O qcow2 -o cluster_size=64K image.img imageplus.qcow2
$ qemu-kvm -hda imageplus.qcow2 -m 512 -cdrom </Path/to/the/ISO/Image> -boot d -vga std
$ fdisk /dev/sda  [delete the partition, create new one occupying whole disk]
$ e2fsck -f /dev/sda1
$ resize2fs /dev/sda1

Enabling KSM

Kernel Samepage Merging (KSM) is a feature of the Linux kernel introduced in version 2.6.32. KSM allows applications to register with the kernel so that their memory pages can be merged with those of other processes that have also registered. For KVM, the KSM mechanism allows guest virtual machines to share pages with each other. In an environment where many of the guest operating systems are similar, this can result in significant memory savings.

To enable KSM, first ensure that you have installed qemu-kvm >= 0.12.0.

# pacman -Qi qemu-kvm | grep Version
Version        : 0.15.0-2

Also ensure that your kernel is at least version 2.6.32.

# uname -r

If this is the case, there should be a /sys/kernel/mm/ksm/ directory containing several files. You can turn KSM on or off by echoing 1 or 0 to /sys/kernel/mm/ksm/run.

# echo 1 > /sys/kernel/mm/ksm/run

If KSM is running, and there are pages to be merged (i.e. more than one similar VM is running), then /sys/kernel/mm/ksm/pages_shared should be non-zero. From the kernel documentation in Documentation/vm/ksm.txt:

The effectiveness of KSM and MADV_MERGEABLE is shown in /sys/kernel/mm/ksm/:

pages_shared     - how many shared unswappable kernel pages KSM is using
pages_sharing    - how many more sites are sharing them i.e. how much saved
pages_unshared   - how many pages unique but repeatedly checked for merging
pages_volatile   - how many pages changing too fast to be placed in a tree
full_scans       - how many times all mergeable areas have been scanned

A high ratio of pages_sharing to pages_shared indicates good sharing, but
a high ratio of pages_unshared to pages_sharing indicates wasted effort.
pages_volatile embraces several different kinds of activity, but a high
proportion there would also indicate poor use of madvise MADV_MERGEABLE.

An easy way to see how well KSM is performing is to simply print the contents of all the files in that directory.

# for ii in /sys/kernel/mm/ksm/* ; do echo -n "$ii: " ; cat $ii ; done
/sys/kernel/mm/ksm/full_scans: 151
/sys/kernel/mm/ksm/max_kernel_pages: 246793
/sys/kernel/mm/ksm/pages_shared: 92112
/sys/kernel/mm/ksm/pages_sharing: 131355
/sys/kernel/mm/ksm/pages_to_scan: 100
/sys/kernel/mm/ksm/pages_unshared: 123942
/sys/kernel/mm/ksm/pages_volatile: 1182
/sys/kernel/mm/ksm/run: 1
/sys/kernel/mm/ksm/sleep_millisecs: 20

Easy to use for new users

If the qemu package has been installed, you can use a GUI tool, such as qtemu for simple use or qemu-launcher for more detailed control, to manage your virtual machines.

You need to change the "QEMU start command" setting from qemu to qemu-kvm, or leave it as qemu and append -enable-kvm to the additional start options. With newer versions of qemu it may not be necessary to append -enable-kvm, as the qemu executable will detect that KVM is available and start in the correct mode.

If you start your VM with a GUI tool and installation is very slow, you should check for proper KVM support, as QEMU may be falling back to pure software emulation.

Bridged Networking

Using Netcfg

Bridged networking is used when you want your VM to be on the same network as your host machine. This will allow it to get a static or DHCP IP address on your network, and then you can access it using that IP address from anywhere on your LAN. The preferred method for setting up bridged networking for KVM is to use the netcfg package. You will also need to install bridge-utils.
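
As a rough sketch only (check the syntax against the netcfg documentation and the example profiles shipped with the package), a bridge profile for netcfg might look like this, assuming eth0 is the host's wired interface and the profile file is named /etc/network.d/kvm-bridge:

CONNECTION="bridge"
DESCRIPTION="Bridge for KVM guests"
INTERFACE="br0"
BRIDGE_INTERFACES="eth0"
IP="dhcp"

The profile can then be brought up with netcfg kvm-bridge, or enabled at boot via the NETWORKS array in /etc/rc.conf.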


Additional notes

Other information can be found here: QEMU#Tap_Networking_with_QEMU and QEMU#Networking_with_VDE2

If you are using iptables, it is recommended for performance and security reasons to disable the firewall on the bridge:

# cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
EOF
# sysctl -p /etc/sysctl.conf

See the libvirt wiki and Fedora bug 512206

Alternatively, you can configure iptables to allow all traffic to be forwarded across the bridge by adding a rule like this:

-I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
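
If you manage rules directly from the command line instead of a rules file, the equivalent command is:

# iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT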

Mouse integration

To prevent the mouse from being grabbed when clicking on the guest operating system's windows, add the option -usbdevice tablet. This means QEMU is able to report the mouse position without having to grab the mouse. This also overrides PS/2 mouse emulation when activated.

$ qemu-kvm -hda <Image_Name> -m 512 -vga std -usbdevice tablet

Mounting the QEMU image

modprobe nbd max_part=63
qemu-nbd -c /dev/nbd0 [image.img]
mount /dev/nbd0p1 [/mnt/qemu]
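
When you are finished, unmount the image and disconnect the NBD device again:

umount [/mnt/qemu]
qemu-nbd -d /dev/nbd0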

Starting KVM virtual machines on boot up

If you use virt-manager and virsh as your VM tools, this is very simple. To set a VM to autostart from the command line:

virsh autostart <domain>

To disable autostarting:

virsh autostart --disable <domain>

Virt-manager is equally easy: it has an autostart check box in the VM's boot options.

Note: VMs started with QEMU or KVM directly from the command line cannot subsequently be managed by virt-manager.

For an alternative check here: QEMU#Starting_QEMU_virtual_machines_on_boot

Tips and tricks

Poor Man's Networking

Setting up bridged networking can be a bit of a hassle sometimes. If the sole purpose of the VM is experimentation, one strategy to connect the host and the guests is to use SSH tunneling.

The basic steps are as follows:

  • Set up an SSH server in the host OS
  • (optional) Create a designated user for the tunneling (e.g. tunneluser)
  • Install SSH in the VM
  • Set up authentication

See: SSH for the setup of SSH, especially SSH#Forwarding_Other_Ports

When using the default user-mode network stack, the host is reachable from the guest at address 10.0.2.2.

If everything works and you can SSH into the host, simply add something like the following to your /etc/rc.local

# Local SSH server: expose the guest's SSH port on the host
# (10.0.2.2 is the host as seen from the guest; the ports and forwarding targets below are example values)
echo "Starting SSH tunnel"
sudo -u vmuser ssh tunneluser@10.0.2.2 -N -R 2213:localhost:22 -f
# Random remote port (e.g. from another VM): pull a host-side port into the guest
echo "Starting random tunnel"
sudo -u vmuser ssh tunneluser@10.0.2.2 -N -L 2345:localhost:2345 -f

In this example a tunnel is created to the SSH server of the VM and an arbitrary port of the host is pulled into the VM.

This is quite a basic strategy for networking with VMs, but it is robust and should be sufficient most of the time.