From ArchWiki
Revision as of 23:29, 4 December 2013 by Lonaowna (Talk | contribs) (Installing)


libvirt is a virtualization API and a daemon for managing virtual machines (VMs), locally or remotely, using multiple virtualization back-ends (QEMU/KVM, VirtualBox, Xen, etc.).


This article or section is out of date.

Reason: virtinst is not in the official repositories (Discuss in Talk:Libvirt#)

For servers, you need the libvirt package from the official repositories, along with a virtualization back-end such as qemu.

For GUI management tools, you also need the virt-manager package.

Building libvirt for Xen

The PKGBUILD for both libvirt-git in the AUR and libvirt in the official repositories currently disables Xen support with the --without-xen flag during the make process. If you want to use libvirt for managing Xen, you will need to grab the whole file set and build your own libvirt package with Xen support enabled, using the Arch Build System. Furthermore, make sure you have libxenctrl (AUR) installed; if xen (AUR) is already installed, you do not need libxenctrl.

The alternative XenAPI driver currently lacks a package (as of 2010-05-23, friesoft).


Libvirt is not usable "out of the box". At a minimum, you must run the daemon and configure permissions, via a PolicyKit authorization or with Unix file permissions. It is also advisable to #Enable KVM acceleration for QEMU.

Run daemon

Change default user and group in /etc/libvirt/qemu.conf. QEMU defaults to nobody:nobody.
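For example, to run QEMU processes under a dedicated account instead of nobody, set the following two options (the user name here is a placeholder, not a requirement):

```ini
# /etc/libvirt/qemu.conf -- "myuser" is an example account name
user = "myuser"
group = "myuser"
```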

Start and enable the libvirtd daemon.
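With systemd, starting the daemon immediately and enabling it at boot looks like:

```shell
# Start libvirtd now and have it start automatically at boot
systemctl start libvirtd
systemctl enable libvirtd
```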

Note: The Avahi daemon is used for local discovery of libvirt hosts via multicast-DNS. To disable this functionality, set mdns_adv = 0 in /etc/libvirt/libvirtd.conf.

PolicyKit authorization

To allow a non-root user in the libvirt group to manage virtual machines, you need to create the following rules file, for example /etc/polkit-1/rules.d/50-libvirt.rules (for polkit >= 0.107 only):

polkit.addRule(function(action, subject) {
    if ( == "org.libvirt.unix.manage" &&
        subject.isInGroup("libvirt")) {
        return polkit.Result.YES;
    }
});

Alternatively, you can grant only the monitoring rights with org.libvirt.unix.monitor.

For more information, see the libvirt wiki.

Unix file-based permissions

Note: This is an alternative to PolicyKit authentication.

If you wish to use Unix file-based permissions to allow some non-root users to use libvirt, you can modify the configuration files.

First, you will need to create the libvirt group and add any users you want to have access to libvirt to that group.

# groupadd libvirt
# gpasswd -a [username] libvirt

Any users that are currently logged in will need to log out and log back in to update their groups. Alternatively, the user can use the following command in the shell they will be launching libvirt from to update the group:

$ newgrp libvirt

Uncomment and set the following lines in /etc/libvirt/libvirtd.conf (they are not all in the same location in the file):

 unix_sock_group = "libvirt"
 unix_sock_ro_perms = "0777"
 unix_sock_rw_perms = "0770"
 auth_unix_ro = "none"
 auth_unix_rw = "none"
Note: You may also wish to change unix_sock_ro_perms from 0777 to 0770 to disallow read-only access to people who are not members of the libvirt group.

Enable KVM acceleration for QEMU

Note: KVM will conflict with VirtualBox. You cannot use KVM and VirtualBox at the same time.

Running virtual machines with plain QEMU emulation (i.e. without KVM) will be painfully slow, so you definitely want to enable KVM support if your CPU supports it. To find out, run the following command:

$ egrep --color "vmx|svm" /proc/cpuinfo

If that command generates output, then your CPU supports hardware acceleration via KVM; if that command does not generate output, then you cannot use KVM.

If KVM is not working, you will find the following message in your /var/log/libvirt/qemu/VIRTNAME.log:

 Could not initialize KVM, will disable KVM support

More info is available from the official KVM FAQ

Stopping / resuming guest at host shutdown / startup

Running guests can be suspended (or shut down) automatically at host shutdown using the libvirt-guests service. The same service will resume (or start) the suspended (shut down) guests automatically at host startup. Check /etc/conf.d/libvirt-guests for libvirt-guests options.
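The options in that file are sketched below; the variable names are assumed from the stock libvirt-guests configuration, and the values are examples:

```ini
# /etc/conf.d/libvirt-guests -- example values
ON_BOOT=start        # start (or ignore) guests when the host boots
ON_SHUTDOWN=suspend  # suspend (or shutdown) guests when the host shuts down
SHUTDOWN_TIMEOUT=120 # seconds to wait for a clean guest shutdown
```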

Starting KVM virtual machines on boot up

If you use virt-manager and virsh as your VM tools, then this is very simple. To set a VM to start automatically at boot, run:

$ virsh autostart <domain>

To disable autostarting:

$ virsh autostart --disable <domain>

Virt-manager is equally easy: it has an autostart check box in the boot options of the VM.

Note: VMs started by QEMU or KVM from the command line are not then manageable by virt-manager.


Installing a new VM

To create a new VM, you need some sort of installation media, which is usually a standard .iso file. Copy it to the /var/lib/libvirt/images/ directory (alternatively, you can create a new storage pool directory in virt-manager and copy it there).

Note: SELinux requires that virtual machines be stored in /var/lib/libvirt/images/ by default. If you use SELinux and are having issues with virtual machines, ensure that your VMs are in that directory or ensure that you have added the correct labeling to the non-default directory that you used.

Then run virt-manager, connect to the server, right click on the connection and choose New. Choose a name, and select Local install media. Just continue with the wizard.

On the 4th step, you may want to uncheck Allocate entire disk now -- this way you save disk space while your VM is not using all of it. However, this can increase disk fragmentation, and you must watch the total available disk space on the VM host, because it becomes much easier to over-allocate disk space to VMs.

On the 5th step, open Advanced options and make sure that Virt Type is set to kvm. If the kvm choice is not available, see section Enable KVM acceleration for QEMU above.
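The same installation can also be scripted with virt-install from the virtinst package (see the out-of-date note above regarding its availability). A minimal sketch, where the VM name, sizes, and the ISO path are example values:

```shell
# Create and start a new KVM guest from a local ISO (all values are examples)
virt-install \
  --connect qemu:///system \
  --name myvm \
  --ram 1024 \
  --vcpus 1 \
  --disk path=/var/lib/libvirt/images/myvm.img,size=10,sparse=true \
  --cdrom /var/lib/libvirt/images/install.iso
```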

Creating a storage pool in virt-manager

First, connect to an existing server. Once there, right click on the connection and choose Details. Go to Storage, press the + icon at the lower left, and follow the wizard.

Using VirtualBox with virt-manager

Note: VirtualBox support in libvirt is not quite stable yet and may cause your libvirtd to crash. Usually this is harmless and everything will be back once you restart the daemon.

virt-manager does not let you add VirtualBox connections from the GUI. However, you can launch it with one from the command line:

virt-manager -c vbox:///system

Or if you want to manage a remote system over SSH:

virt-manager -c vbox+ssh://username@host/system

Live snapshots

A feature called external snapshotting allows one to take a live snapshot of a virtual machine without turning it off. Currently it only works with qcow2 and raw file based images.

Once a snapshot is created, KVM attaches the new snapshot image to the virtual machine and uses it as the new block device, storing any new data directly to it, while the original disk image is taken offline so you can easily copy or back it up. Afterwards, you can merge the snapshot image back into the original image, again without shutting down the virtual machine.

Here's how it works.

Currently running virtual machines:

# virsh list --all
 Id    Name                           State
 3     archey                            running

List all its current images:

# virsh domblklist archey
 Target     Source
 vda        /vms/archey.img

Notice the image file properties

# qemu-img info /vms/archey.img
 image: /vms/archey.img
 file format: qcow2
 virtual size: 50G (53687091200 bytes)
 disk size: 2.1G
 cluster_size: 65536

Create a disk-only snapshot. The switch --atomic makes sure that the VM is not modified if snapshot creation fails.

# virsh snapshot-create-as archey snapshot1 --disk-only --atomic

List if you want to see the snapshots:

# virsh snapshot-list archey
 Name                 Creation Time             State
 snapshot1           2012-10-21 17:12:57 -0700 disk-snapshot

Notice the new snapshot image created by virsh and its image properties. It weighs just a few MiBs and is linked to its original "backing image/chain".

# qemu-img info /vms/archey.snapshot1
 image: /vms/archey.snapshot1
 file format: qcow2
 virtual size: 50G (53687091200 bytes)
 disk size: 18M
 cluster_size: 65536
 backing file: /vms/archey.img

At this point, you can go ahead and copy the original image with cp --sparse=true or rsync -S. Then you can merge the original image back into the snapshot:

# virsh blockpull --domain archey --path /vms/archey.snapshot1

Now that you have pulled the blocks out of the original image, the file /vms/archey.snapshot1 becomes the new disk image; check its disk size to see what happened. After that is done, the original image /vms/archey.img and the snapshot metadata can be deleted safely. virsh blockcommit would work in the opposite direction to blockpull, but it is currently still under development in qemu-kvm 1.3 (including the snapshot-revert feature), scheduled to be released sometime next year.

This new KVM feature will certainly come in handy for people who like to take frequent live backups without risking corruption of the file system.

Remote access to libvirt

Using unencrypted TCP/IP socket (most simple, least secure)

Warning: This should only be used for testing or use over a secure, private, and trusted network.

Edit /etc/libvirt/libvirtd.conf:

listen_tls = 0
listen_tcp = 1
Warning: We do not enable SASL here, so all TCP traffic is cleartext! For real world use, always enable SASL.

It is also necessary to start the server in listening mode, by editing /etc/conf.d/libvirtd and adding the --listen flag to the daemon arguments.
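As a sketch, that edit might look like the following; the variable name is assumed from the stock /etc/conf.d/libvirtd file:

```ini
# /etc/conf.d/libvirtd -- make the daemon listen for TCP connections
LIBVIRTD_ARGS="--listen"
```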


Using SSH

The openbsd-netcat package is needed for remote management over SSH.

To connect to the remote system using virsh (host can be a hostname or an IP address):

$ virsh -c qemu+ssh://username@host/system

If something goes wrong, you can get some logs using:

$ LIBVIRT_DEBUG=1 virsh -c qemu+ssh://username@host/system

To display the graphical console for a virtual machine:

$ virt-viewer --connect qemu+ssh://username@host/system myvirtualmachine

To display the virtual machine desktop management tool:

$ virt-manager -c qemu+ssh://username@host/system
Note: If you are having problems connecting to a remote RHEL server (or anything other than Arch, really), try the two workarounds mentioned in FS#30748 and FS#22068.

Using Python

The libvirt package comes with a python2 API in /usr/lib/python2.7/site-packages/

General examples are given in /usr/share/doc/libvirt-python-your_libvirt_version/examples/

Unofficial example using qemu and openssh:

#! /usr/bin/env python2
# -*- coding: utf-8 -*-
import libvirt

if __name__ == "__main__":
    conn ="qemu+ssh://xxx/system")
    print "Trying to find node on xxx"
    domains = conn.listDomainsID()
    for domainID in domains:
        domConnect = conn.lookupByID(domainID)
        if == 'xxx-node':
            print "Found shared node on xxx with ID " + str(domainID)
            domServ = domConnect

Bridged Networking

To use physical Ethernet from your virtual machines, you have to create a bridge between your physical Ethernet device (here eth0) and the virtual Ethernet device the VM is using.

Host configuration

libvirt creates the bridge virbr0 for NAT networking, so use another name such as br0 or virbr1. You have to create a new Netctl Profile to configure the bridge, for example (with DHCP configuration):

Description="Bridge connection for kvm"
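A complete bridge profile with DHCP, assuming the physical interface is eth0 and a hypothetical profile name of kvm-bridge, might look like:

```ini
# /etc/netctl/kvm-bridge -- example netctl bridge profile
Description="Bridge connection for kvm"
Interface=br0
Connection=bridge
BindsToInterfaces=(eth0)
IP=dhcp
```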

Tango-view-refresh-red.pngThis article or section is out of date.Tango-view-refresh-red.png

Reason: The tip below needs to be updated for netctl. (Discuss in Talk:Libvirt#)
Tip: It is recommended that you enable Spanning Tree Protocol (STP) on the virtual bridge (e.g. br0) that you create to avoid any potential bridging loops. You can automatically enable STP by appending POST_UP="brctl stp $INTERFACE on" to the netcfg profile.

Guest configuration

Now we have to activate the bridge interface in our VMs. If you have a recent Linux guest, you can use this code in the domain's XML file:

 <interface type='bridge'>
   <source bridge='br0'/>
   <mac address='24:42:53:21:52:49'/>
   <model type='virtio'/>
 </interface>
This code enables a virtio network device in the machine, so on Windows you will have to install an additional driver (see the Windows KVM VirtIO drivers), or remove the line <model type='virtio'/>:

 <interface type='bridge'>
   <source bridge='br0'/>
   <mac address='24:42:53:21:52:49'/>
 </interface>