KVM
Revision as of 05:35, 10 July 2011


KVM, the Kernel-based Virtual Machine, is a hypervisor built into the Linux kernel since version 2.6.20. It is similar to Xen in purpose but much simpler to get running: to start using the hypervisor, just load the appropriate kvm modules. As with Xen's full virtualization, KVM requires a processor that supports Intel's VT extensions or AMD's AMD-V (Pacifica) extensions.

Using KVM, one can run multiple virtual machines with unmodified Linux, Windows or other system images (see Guest Support Status). Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, and so on. See the KVM Howto.

Differences among KVM, Xen, VMware, and QEMU can be found at KVM FAQ.

Get the packages

The Arch kernel (2.6.22 or newer) provides the appropriate kvm modules. You can check whether your kernel supports KVM with the following command:

modprobe -l 'kvm*'

KVM also requires a modified QEMU to launch and manage virtual machines. Choose one of the following:

1. the qemu-kvm package in the [extra] repository, which provides the qemu-kvm executable:

pacman -S kernel26 qemu-kvm

2. if you also need to use plain QEMU, you can instead install qemu >= 0.9.0, which conflicts with the qemu-kvm package and now provides KVM support through qemu -enable-kvm:

pacman -S kernel26 qemu

Setup kernel modules

You can check whether your computer supports hardware acceleration with this command (it must print at least one line):

egrep '^flags.*(vmx|svm)' /proc/cpuinfo
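A small helper can make the outcome of that check explicit. This is a sketch (the check_virt name is invented here), parameterized on a file so that /proc/cpuinfo is passed in on a real system:

```shell
# check_virt FILE: report which hardware virtualization extension the
# CPU flags in FILE advertise (pass /proc/cpuinfo on a real system).
check_virt() {
    if grep -q '^flags.*\bvmx\b' "$1"; then
        echo "Intel VT-x supported (load kvm and kvm-intel)"
    elif grep -q '^flags.*\bsvm\b' "$1"; then
        echo "AMD-V supported (load kvm and kvm-amd)"
    else
        echo "no hardware virtualization support detected"
    fi
}
```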

First, add your user to the kvm group so it can use the /dev/kvm device:

gpasswd -a <Your_Login_Name> kvm

Second, load the modules matching the manufacturer of your CPU.

1. Load the kvm and kvm-intel modules if you have Intel extensions:

modprobe kvm
modprobe kvm-intel

2. Or load the kvm and kvm-amd modules if you have AMD extensions:

modprobe kvm
modprobe kvm-amd

If loading kvm succeeds, but loading kvm-intel or kvm-amd fails (even though /proc/cpuinfo claims that VT is supported), check your BIOS settings. Some vendors (especially laptop vendors) disable VT by default.

If you want these modules to be loaded at boot, add them to the MODULES array in /etc/rc.conf.
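Module autoloading on Arch of this era is configured through the MODULES array in /etc/rc.conf; for example (kvm-intel shown, use kvm-amd on AMD systems, and append to your existing list rather than replacing it):

```shell
# /etc/rc.conf excerpt: modules loaded at boot
MODULES=(kvm kvm-intel)
```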

How to use KVM

  1. Create a guest OS image
    $ qemu-img create -f qcow2 <Image_Name> <size>
  2. Install the guest OS
    A CD/DVD image (ISO files) can be used for the installation.
    $ qemu-kvm -hda <Image_Name> -m 512 -cdrom </Path/to/the/ISO/Image> -boot d -vga std
  3. Running the system
    $ qemu-kvm -hda <Image_Name> -m 512 -vga std
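Filling in the placeholders, a concrete run of the steps above might look like this (arch.qcow2, 10G and archlinux.iso are example values):

```shell
# Create a 10 GB qcow2 guest image (file name is an example)
qemu-img create -f qcow2 arch.qcow2 10G
# Boot the installer ISO with 512 MB of RAM
qemu-kvm -hda arch.qcow2 -m 512 -cdrom archlinux.iso -boot d -vga std
```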

Note: If -m is not given, KVM defaults to 128 MB of memory. Also note that recent Windows operating systems (tested with Vista and Windows 7) require the qcow2 image format; other formats gave me a 0x80070057 error during the installation.

See QEMU for full information, in particular the Using the Kernel-based Virtual Machine section.

Paravirtualized guests (virtio)

KVM allows guests to use paravirtualized block and network devices, which gives better performance and less overhead. Linux has had this ability through its virtio modules since kernel 2.6.25. For Windows, a paravirtualized network driver can be obtained here: [1]

A virtio block device requires the -drive option instead of the simple -hd*, together with if=virtio:

$ qemu-kvm -drive file=drive.img,if=virtio,boot=on

Note: boot=on is absolutely required when you want to boot from that drive; there is no auto-detection as with -hd*.

Almost the same goes for the network:

$ qemu-kvm -net nic,model=virtio
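Combining the two, a guest using virtio for both disk and network could be started along these lines (drive.img is a placeholder, and -net user is just one possible network backend):

```shell
qemu-kvm -m 512 \
    -drive file=drive.img,if=virtio,boot=on \
    -net nic,model=virtio -net user
```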

Preparing an (arch)linux guest

To use virtio devices, the following modules can be loaded in the guest: virtio, virtio_pci, virtio_blk, virtio_net and virtio_ring (for 32-bit guests, the separate virtio module is not necessary). If you want to boot from a virtio disk, the initial ramdisk must be rebuilt. Add the appropriate modules to /etc/mkinitcpio.conf like this:

MODULES="virtio virtio_blk virtio_pci"

and build:

# mkinitcpio -p kernel26

Virtio disks are recognized with the prefix vd (vda, vdb and so on). Therefore, changes have to be made in at least /etc/fstab and /boot/grub/menu.lst when booting from a virtio disk. Of course, when using grub-pc, which references disks by UUIDs, nothing has to be done.
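To illustrate (the partition layout here is hypothetical), a root filesystem entry in /etc/fstab would change from the emulated-disk name to the virtio name:

```
/dev/sda1   /   ext3   defaults   0   1     # before: IDE/SCSI emulation
/dev/vda1   /   ext3   defaults   0   1     # after: virtio
```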

GRUB has problems detecting virtio disks. So if the bootloader is not installed, or you want to reinstall it, the file /boot/grub/device.map also has to be changed (or created, as in most cases it is not present) accordingly:

(hd0) /dev/vda

Now run grub with the --device-map option:

# grub --device-map /boot/grub/device.map

In the interactive shell, define the boot partition; here vda1:

> root (hd0,0)

And install the bootloader; here on vda:

> setup (hd0)

If it was successful, leave the shell:

> quit

Unfortunately this manual GRUB installation is required during the Arch Linux installation (current Arch release media 2010.05): though AIF correctly detects the virtio disks and sets up the right prefixes, device.map must be created before setting up the bootloader.

So when installing Arch Linux, you can install GRUB by switching to another virtual terminal (Ctrl+Alt+F2) and running the following commands.

# grub
> device (hd0) /dev/vda
> root (hd0,0)
> setup (hd0)
> quit

Note: (hd0,0) numbering may change depending on your configuration. Reference: http://lists.mandriva.com/bugs/2009-08/msg03424.php

Once you have installed grub switch back to the main terminal with Ctrl+Alt+F1.

Further information on paravirtualization with KVM: [2]; see also the section in the German QEMU book: [3]

Resizing the image

It is possible to increase the size of a qcow2 image later, at least for ext3 filesystems: convert it to a raw image, expand it with dd, convert it back to qcow2, then, inside the guest, replace the partition with a larger one, run fsck and resize the filesystem.

$ qemu-img convert -O raw image.qcow2 image.img
$ dd if=/dev/zero of=image.img bs=1G count=0 seek=[NUMBER_OF_GB]
$ qemu-img convert -O qcow2 -o cluster_size=64K image.img imageplus.qcow2
$ qemu-kvm -hda imageplus.qcow2 -m 512 -cdrom </Path/to/the/ISO/Image> -boot d -vga std
$ fdisk /dev/sda  [delete the partition, create new one occupying whole disk]
$ e2fsck -f /dev/sda1
$ resize2fs /dev/sda1
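Sufficiently recent versions of qemu-img also offer a resize subcommand that grows the image in place, avoiding the raw round trip; this assumes your qemu-img already ships that subcommand, and the partition and filesystem steps inside the guest are still required:

```shell
# Grow the image by 5 GB in place (qcow2 is supported)
qemu-img resize image.qcow2 +5G
```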

Enabling KSM

KSM (Kernel Samepage Merging) is a feature of the Linux kernel introduced in kernel 2.6.32. KSM allows an application to register with the kernel to have its pages merged with those of other processes that have also registered. For KVM, the KSM mechanism allows guest virtual machines to share pages with each other. In an environment where many of the guest operating systems are similar, this can result in significant memory savings. To enable KSM, first ensure that the installed qemu-kvm version is at least 0.12:

# pacman -Qi qemu-kvm | grep Version
Version        :

Also ensure that your kernel is at least 2.6.32.

# uname -r

If this is the case, there should be a /sys/kernel/mm/ksm/ directory containing several files. You can turn KSM on or off by echoing 1 or 0 to /sys/kernel/mm/ksm/run:

# echo 1 > /sys/kernel/mm/ksm/run

If KSM is running and there are pages to be merged (i.e. more than one similar VM is running), then /sys/kernel/mm/ksm/pages_shared should be non-zero. From the kernel documentation in Documentation/vm/ksm.txt:

The effectiveness of KSM and MADV_MERGEABLE is shown in /sys/kernel/mm/ksm/:

pages_shared     - how many shared unswappable kernel pages KSM is using
pages_sharing    - how many more sites are sharing them i.e. how much saved
pages_unshared   - how many pages unique but repeatedly checked for merging
pages_volatile   - how many pages changing too fast to be placed in a tree
full_scans       - how many times all mergeable areas have been scanned

A high ratio of pages_sharing to pages_shared indicates good sharing, but
a high ratio of pages_unshared to pages_sharing indicates wasted effort.
pages_volatile embraces several different kinds of activity, but a high
proportion there would also indicate poor use of madvise MADV_MERGEABLE.

An easy way to see how well KSM is performing is to simply print the contents of all the files in that directory.

# for ii in /sys/kernel/mm/ksm/* ; do echo -n "$ii: " ; cat $ii ; done
/sys/kernel/mm/ksm/full_scans: 151
/sys/kernel/mm/ksm/max_kernel_pages: 246793
/sys/kernel/mm/ksm/pages_shared: 92112
/sys/kernel/mm/ksm/pages_sharing: 131355
/sys/kernel/mm/ksm/pages_to_scan: 100
/sys/kernel/mm/ksm/pages_unshared: 123942
/sys/kernel/mm/ksm/pages_volatile: 1182
/sys/kernel/mm/ksm/run: 1
/sys/kernel/mm/ksm/sleep_millisecs: 20
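The pages_sharing to pages_shared ratio mentioned above can be computed directly from those files. A small sketch (the ksm_ratio helper name is invented here), parameterized on a directory so that /sys/kernel/mm/ksm is passed in on a real system:

```shell
# ksm_ratio DIR: print the pages_sharing / pages_shared ratio from a
# KSM sysfs directory (pass /sys/kernel/mm/ksm on a real system).
ksm_ratio() {
    shared=$(cat "$1/pages_shared")
    sharing=$(cat "$1/pages_sharing")
    if [ "$shared" -eq 0 ]; then
        echo "no pages shared"
    else
        # awk handles the floating-point division that POSIX sh lacks
        awk -v a="$sharing" -v b="$shared" 'BEGIN { printf "%.2f\n", a / b }'
    fi
}
```

With the example figures above (131355 / 92112) it prints 1.43, indicating reasonable sharing.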

Easy to use for new users

If the qemu package has been installed, you can use a GUI tool, such as qtemu for simple use or qemu-launcher for finer control, to manage your virtual machines.

You need to change "qemu" in the configuration item "QEMU start command" to "qemu-kvm" or "qemu", and append -enable-kvm to the additional start options. With newer versions of QEMU it might not be necessary to use -enable-kvm, as QEMU will detect that KVM is available and start in the corresponding mode.

If you start your VM with a GUI tool and the installation is very slow, you should check for correct KVM support.

Bridged Networking

See also QEMU#Tap_Networking_with_QEMU and QEMU#Networking_with_VDE2.

pacman -S bridge-utils
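The script below assumes a bridge br0 already exists on the host. One way to create it with bridge-utils is sketched here (the interface name eth0 and the use of dhcpcd are assumptions; adjust to your setup):

```shell
# Create the bridge and enslave the physical NIC (eth0 is an example)
brctl addbr br0
ifconfig eth0 0.0.0.0 promisc up
brctl addif br0 eth0
# Re-acquire the host's IP address on the bridge interface
dhcpcd br0
```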

Save this script as /etc/qemu-ifup:

#!/bin/sh
echo "Executing /etc/qemu-ifup"
echo "Bringing up $1 for bridged mode..."
sudo /sbin/ifconfig $1 promisc up
echo "Adding $1 to br0..."
sudo /usr/sbin/brctl addif br0 $1
sleep 2

Make the script executable:

chmod 755 /etc/qemu-ifup

Then use this script to start KVM; adjust the ARGS line to your needs.

#!/bin/sh
ARGS="-hda win2k.img -boot c -net nic,vlan=0 -net tap,vlan=0,ifname=tap0,script=/etc/qemu-ifup -m 256 -localtime"
echo "Starting QEMU with..."
echo $ARGS
echo "...."
exec qemu $ARGS

Now the VM should get an IP address from your DHCP server, and you can access it through that IP in your LAN.

Mouse integration

To prevent the mouse from being grabbed when clicking on the guest operating system's window, add the option -usbdevice tablet. This means QEMU is able to report the mouse position without having to grab the mouse. It also overrides the PS/2 mouse emulation when activated.

$ qemu-kvm -hda <Image_Name> -m 512 -vga std -usbdevice tablet

Mounting the qemu image

modprobe nbd max_part=63
qemu-nbd -c /dev/nbd0 [image.img]
mount /dev/nbd0p1 [/mnt/qemu]
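When finished, unmount the image and disconnect the nbd device again (paths mirror the placeholders above):

```shell
umount [/mnt/qemu]
qemu-nbd -d /dev/nbd0
```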

Starting kvm virtual machines on boot

Here: QEMU#Starting_qemu_virtual_machines_on_boot