QEMU (简体中文)

From the QEMU about page:

QEMU is a widely used open-source machine emulator and virtualizer.
When used as an emulator, it can run operating systems and programs built for one architecture (such as ARM) on a machine with a different architecture (such as a PC). By using dynamic translation, it achieves very good performance.
When used as a virtualizer, QEMU runs guest code directly on the host hardware, so the guest gets performance close to that of the real machine. QEMU supports virtualization under Xen or with KVM. When using KVM, QEMU can virtualize x86, server and embedded PowerPC, and s390 guests.

Installation

Install the qemu package from the official repositories as needed.

KVM (Kernel-based Virtual Machine) is a feature built into the Linux kernel that lets userspace programs in a virtual machine use the hardware virtualization capabilities of various processors. It currently supports Intel and AMD processors (x86 and x86_64), as well as PPC 440, PPC 970 and S/390 processors.

QEMU can use KVM when the target architecture is the same as the host architecture. For example, when running qemu-system-x86 on an x86-compatible processor, you can take advantage of KVM acceleration, which gives better performance for both your host and your guests.

Not every processor supports KVM. You need an x86-based machine running a recent (>= 2.6.22) Linux kernel, with either an Intel processor with VT-x (virtualization technology) extensions or an AMD processor with SVM (Secure Virtual Machine, also known as AMD-V) extensions. Xen has an (outdated) list of compatible processors. For Intel processors, see also the Intel® Virtualization Technology list.

Creating a hard disk image

To run QEMU you will need a hard disk image. This is a special file which stores the contents of the emulated hard disk.

Use the command:

qemu-img create -f qcow2 win.qcow 4G

to create an image file named "win.qcow". The "4G" parameter specifies the size of the disk, in this case 4 GB. You can use the suffix M for megabytes (for example "256M"). You should not worry too much about the size of the disk: the qcow2 format compresses the image, so empty space does not add to the size of the file.
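
To check how much space the image file actually occupies compared to its virtual size, you can inspect it with qemu-img (a minimal sketch using the image name from above):

qemu-img info win.qcow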

Preparing the installation media

The installation CD-ROM/floppy should not be mounted, because QEMU accesses the media directly. It is a good idea to dump the CD-ROM and/or floppy to a file, because this both improves performance and does not require you to have direct access to the devices (that is, you can run QEMU as a regular user). For example, if the CD-ROM device node is named /dev/cdrom, you can dump it to a file with the command:

dd if=/dev/cdrom of=win98icd.iso

Do the same for floppies:

dd if=/dev/fd0 of=win95d1.img
...

When you need to replace floppies within qemu, just copy the contents of one floppy over another. For this reason, it is useful to create a special file that will hold the current floppy:

touch floppy.img
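
For example, assuming the floppies were dumped to files named win95d1.img, win95d2.img and so on (the second name is hypothetical here), switching to the second disk while the guest keeps reading floppy.img could look like:

cp win95d2.img floppy.img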

Choosing a Windows version

QEMU is able to run any version of Windows, but by default 98, Me and XP run very slowly, while Windows 95 and Windows 2000 are both fast, especially 2000 (it even runs faster than 98). The fastest is 95, which runs fast enough to let you forget you are using a virtual machine. :)

If you have both Win95 and Win98/WinME, 98lite (from http://www.litepc.com) is recommended. It drops the bundled IE and uses the Explorer from Win95 instead, and it also allows a minimal Windows installation (leaving out everything you usually do not want), which gives you a minimal, fast and still stable Windows system. Looks like a good deal.

Installing the operating system

This is the first time you will need to start the emulator. One thing to keep in mind: when you click inside the QEMU window, the mouse pointer is grabbed. To release it, press Ctrl + Alt.

If you need to use a bootable floppy, run QEMU with:

qemu -cdrom [cdrom_image] -fda [floppy_image] -boot a [hd_image]

or, if you are on an x86_64 system (this will avoid many problems afterwards):

qemu-system-x86_64 -cdrom [cdrom_image] -fda [floppy_image] -boot a [hd_image]

If your CD-ROM is bootable or you are using ISO files, run QEMU with:

qemu -cdrom [cdrom_image] -boot d [hd_image]

or, if you are on an x86_64 system (this will avoid many problems afterwards):

qemu-system-x86_64 -cdrom [cdrom_image] -boot d [hd_image]

Now partition the virtual hard disk, format the partitions and install the OS.

A few hints:

  1. If you are using a Windows 95 boot floppy, then choosing SAMSUNG as the type of CD-ROM seems to work
  2. There are problems when installing Windows 2000. Windows setup will generate a lot of edb*.log files, one after the other, containing nothing but blank spaces in C:\WINNT\SECURITY, which quickly fill the virtual hard disk. A workaround is to open a Windows command prompt as early as possible during setup (by pressing Shift + F10), which will allow you to remove these log files as they appear by typing:
del %windir%\security\*.log
Note: According to the official QEMU website, "Windows 2000 has a bug which gives a disk full problem during its installation. When installing it, use the -win2k-hack QEMU option to enable a specific workaround. After Windows 2000 is installed, you no longer need this option (this option slows down the IDE transfers)."
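
A sketch of an installation run with that workaround enabled, assuming the Windows 2000 CD was dumped to a file named win2k.iso (a hypothetical name) and the disk image created earlier is used:

qemu -win2k-hack -cdrom win2k.iso -boot d win.qcow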

Running the system

To run the system simply type:

qemu [hd_image]

A good idea is to use overlay images. This way you can create a hard disk image once and tell QEMU to store the changes in an external file. You get rid of a lot of instability, because it is so easy to revert to a previous system state. :)

To create an overlay image, type:

qemu-img create -b [base_image] -f qcow2 [overlay_image]

Substitute the hard disk image for base_image (in our case win.qcow). After that you can run qemu with:

qemu [overlay_image]

or if you are on a x86_64 system:

qemu-system-x86_64 [overlay_image]

and the original image will be left untouched. One hitch: the base image cannot be renamed or moved, because the overlay remembers the base's full path.
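
If you do need to move or rename the base image, one option (not covered in the text above, so treat it as a sketch) is qemu-img's rebase subcommand; with -u it only rewrites the recorded backing-file path and assumes the contents are otherwise unchanged. The new path below is hypothetical:

qemu-img rebase -u -b /new/path/win.qcow [overlay_image]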

Sharing data between the host and the guest

If you have servers running on your host OS, they will be accessible from the guest at the IP address 10.0.2.2 without any further configuration. So you could just use FTP, SSH, etc. to 10.0.2.2 from Windows to share data, or, if you would like to use Samba:

Samba

QEMU supports Samba, which allows you to mount host directories during the emulation. There seems to be an incompatibility between Samba 3.x and some versions of QEMU, but at least with a current snapshot of QEMU it should be working.

First, you need to have a working Samba installation. Then add the following section to your smb.conf:

[qemu]
   comment = Temporary file space
   path = /tmp
   read only = no
   public = yes

Now start QEMU with:

qemu [hd_image] -smb qemu

Then you should be able to access your host's Samba server with the IP address 10.0.2.2. If you are running Win9x as a guest OS, you may need to add

10.0.2.2 smbserver

to C:\Windows\lmhosts (Win9x has Lmhosts.sam as a SAMple, rename it!).

Mounting the virtual hard disk image

Fortunately, there is a way to mount the hard disk image with a loopback device. Log in as root, make a temporary directory and mount the image with the command:

mount -o loop,offset=32256 [hd_image] [tmp_dir]

Now you can copy data in both directions. When you are done, umount with:

umount [hd_image]

The drawback of this solution is that you cannot use it with qcow images (including overlay images), so you need to create your images without the "-f qcow" option.

Tip: Create a second, raw hard drive image. This way you will be able to transfer data easily and use qcow overlay images for the primary drive.
Warning: Never start the virtual machine while the image is mounted!
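
A minimal sketch of that tip, using a hypothetical second image named data.img: create it in raw format and attach it as the second drive.

qemu-img create -f raw data.img 2G
qemu -hda win.qcow -hdb data.img

While the virtual machine is powered off, the raw image can then be mounted on the host with the loopback method described above.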

Mounting qcow2 images

You can mount qcow2 images using qemu-nbd. See Wikipedia:Qcow#Mounting_qcow2_images.
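
A minimal sketch of how this can look, assuming a hypothetical image named image.qcow2 whose first partition is to be mounted on /mnt:

modprobe nbd max_part=8
qemu-nbd --connect=/dev/nbd0 image.qcow2
mount /dev/nbd0p1 /mnt

and, when you are done:

umount /mnt
qemu-nbd --disconnect /dev/nbd0

As with loop mounts, do not start the virtual machine while the image is attached.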

Using a real partition as the single primary partition of a hard disk image

Sometimes, you may wish to use one of your system partitions from within QEMU (for instance, if you wish to boot both your real machine and QEMU using a given partition as root). You can do this using software RAID in linear mode (you need the linear.ko kernel driver) and a loopback device: the trick is to dynamically prepend a master boot record (MBR) to the real partition you wish to embed in a QEMU raw disk image.

Suppose you have a plain, unmounted /dev/hdaN partition with some filesystem on it you wish to make part of a QEMU disk image. First, you create some small file to hold the MBR:

dd if=/dev/zero of=/path/to/mbr count=32

Here, a 16 KB (32 * 512 bytes) file is created. It is important not to make it too small (even if the MBR only needs a single 512-byte block), since the smaller it is, the smaller the chunk size of the software RAID device will have to be, which could have an impact on performance. Then, you set up a loopback device to the MBR file:

losetup -f /path/to/mbr

Let us assume the resulting device is /dev/loop0, because we were not already using other loopback devices. The next step is to create the "merged" MBR + /dev/hdaN disk image using software RAID:

 modprobe linear
 mdadm --build --verbose /dev/md0 --chunk=16 --level=linear --raid-devices=2 /dev/loop0 /dev/hdaN

The resulting /dev/md0 is what you will use as a QEMU raw disk image (do not forget to set the permissions so that the emulator can access it). The last (and somewhat tricky) step is to set the disk configuration (disk geometry and partition table) so that the primary partition start point in the MBR matches the one of /dev/hdaN inside /dev/md0 (an offset of exactly 16 * 512 = 16384 bytes in this example). Do this using fdisk on the host machine, not in the emulator: the default raw disk detection routine from QEMU often results in non kilobyte-roundable offsets (such as 31.5 KB, as in the previous section) that cannot be managed by the software RAID code. Hence, from the host:

 fdisk /dev/md0

Press 'x' to enter the expert menu. Set the number of 's'ectors per track so that the size of one cylinder matches the size of your MBR file. For two heads and a sector size of 512, the number of sectors per track should be 16, so we get cylinders of size 2x16x512=16k.

Now, press 'r' to return to the main menu.

Press 'p' and check that the cylinder size is now 16k.

Now, create a single primary partition corresponding to /dev/hdaN. It should start at cylinder 2 and end at the end of the disk (note that the number of cylinders now differs from what it was when you entered fdisk).

Finally, 'w'rite the result to the file: you are done. You now have a partition you can mount directly from your host, as well as part of a QEMU disk image:

 qemu -hdc /dev/md0 [...]

You can of course safely set any bootloader on this disk image using QEMU, provided the original /dev/hdaN partition contains the necessary tools.

Using the Kernel-based Virtual Machine (KVM)

KVM is a full virtualization solution for Linux on x86 hardware containing virtualization extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides the core virtualization infrastructure and a processor specific module, kvm-intel.ko or kvm-amd.ko. Using KVM, one can run multiple virtual machines running unmodified Linux or Windows images. Each virtual machine has private virtualized hardware: a network card, disk, graphics adapter, etc.

This technology requires an x86 machine running a recent ( >= 2.6.22) Linux kernel on an Intel processor with VT-x (virtualization technology) extensions, or an AMD processor with SVM (Secure Virtual Machine) extensions. It is included in the mainline Linux kernel since 2.6.20 and is enabled by default in the Arch Linux kernel.

Please refer to the KVM page for more information on using QEMU with KVM on Arch Linux.

To take advantage of KVM, you simply need a compatible processor (the following command must return something on the screen):

grep -E "(vmx|svm)" --color=always /proc/cpuinfo

Then load the appropriate module via your /etc/rc.conf:

  • For Intel® processors, add kvm-intel to your MODULES array in /etc/rc.conf (see the sketch after this list)
  • For AMD® processors, add kvm-amd to your MODULES array in /etc/rc.conf
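
A sketch of what that line in /etc/rc.conf could look like on an Intel machine (keep whatever modules are already listed in place of the ellipsis):

MODULES=(... kvm-intel)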

Also, you will need to add yourself to the group kvm.

gpasswd -a <Your_User_Account> kvm

Networking

Basic networking

If the virtual machine only needs simple access to the outside network, you can simply give it a user-mode network interface by adding the options -net nic,vlan=1 -net user,vlan=1 at startup. For example:

 qemu -kernel-kqemu  -no-acpi -net nic,vlan=1 -net user,vlan=1 -cdrom dsl-4.3rc1.iso

Note that only the TCP and UDP protocols are supported. In particular ICMP is not, so ping will not work here.

Networking with VDE

What is VDE?

VDE stands for Virtual Distributed Ethernet. It started as an enhancement of uml_switch. It is a toolbox to manage virtual networks.

The idea is to create virtual switches, which are basically sockets, and to "plug" both physical and virtual machines into them. The configuration shown here is quite simple; however, VDE is much more powerful than this: it can plug virtual switches together, run them on different hosts and monitor the traffic in the switches. You are invited to read the documentation of the project.

The advantage of this method is you do not have to add sudo privileges to your users. Regular users should not be allowed to run modprobe.

Basic configuration

VDE is in the official repositories, so...

# pacman -S vde2

In my config, I use tun/tap to create a virtual interface on my host. Load the tun module (or add it to your MODULES array in rc.conf):

# modprobe tun

Now create the virtual switch:

# vde_switch -tap tap0 -daemon -mod 660 -group kvm

This line creates the switch, creates tap0, "plugs" it, and allows the users of the group kvm to use it.

The interface is plugged in but not configured yet. Just do it:

# ifconfig tap0 192.168.100.254 netmask 255.255.255.0

That is all! Now, you just have to run KVM with these -net options as a normal user:

$ qemu -net nic -net vde -hda ...

Configure your guest as you would do in a physical network. I gave them static addresses and let them access the WAN using IP forwarding and masquerading on my host:

# echo "1" > /proc/sys/net/ipv4/ip_forward
# iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE

Putting it together

I added this init script to run all this at start-up:

#!/bin/bash

. /etc/rc.conf
. /etc/rc.d/functions

# Pidfile for the vde_switch daemon; the original script used $PIDFILE without
# defining it, so a default is set here (an assumption, adjust as needed).
PIDFILE=/var/run/vde_switch.pid

case "$1" in
  start)
    stat_busy "Starting VDE Switch"
    vde_switch -tap tap0 -daemon -mod 660 -pidfile $PIDFILE -group kvm
    if [ $? -gt 0 ]; then
      stat_fail
    else
        echo "1" > /proc/sys/net/ipv4/ip_forward &&  \
        iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth0 -j MASQUERADE &&  \
        ifconfig tap0 192.168.100.254 netmask 255.255.255.0 && \
        stat_done || stat_fail
    fi
    ;;
  stop)
    stat_busy "Stopping VDE Switch"
    # err.. well, i should remove the switch here...
    stat_done
    ;;
  restart)
    $0 stop
    sleep 1
    # Aem.. As long as stop) is not implemented, this just fails
    $0 start
    ;;
  *)
    echo "usage: $0 {start|stop|restart}"  
esac
exit 0

Well, I know it is dirty and could be more configurable. Feel free to improve it. VDE has an rc script too, but I had to make one anyway for the IP forwarding stuff.

An alternative method

If the above method does not work, or you do not want to mess with kernel configs, TUN, dnsmasq and iptables, you can do the following for the same result.

vde_switch -daemon -mod 660 -group kvm
slirpvde --dhcp --daemon

Then, to start the VM with a connection to the network of the host:

kvm -net nic,macaddr=52:54:00:00:EE:03 -net vde whatever.qcow

Bridged networking

NOTE: This part was translated by User:athurg based on his own hands-on experience, and the procedure differs considerably from the original English article. If you run into any problems, please contact User:athurg.

Introduction

With bridged networking, QEMU attaches the virtual machine to a dedicated network interface on the host, and that interface is then bridged together with the host's external interface. Do this when you need the guest and the outside network to be fully reachable from each other. Be aware, however, that this exposes the guest machine directly to the network.

Preparation

First, make sure the following packages are installed:

 bridge-utils (for brctl, to manipulate bridges)
 uml_utilities (for tunctl, to manipulate taps)
 sudo (for manipulating bridges and tunnels as root)

Next, load the bridge module:

 # modprobe bridge

Configuration

First, modify your network configuration. Add a bridge interface br0 and use it in place of the host's original local connection. Open /etc/rc.conf and change the network configuration section as follows:

 eth0="eth0 up"
 br0="dhcp"      #如果你原来的本地链接是DHCP方式获取配置的话
 #br0="br0 192.168.0.2 netmask 255.255.255.0 up"      #如果你原来的本地链接是静态设置的话
 INTERFACES=(eth0 br0)

Then bind the original network interface eth0 to the bridge interface br0, so that the host's network traffic passes through the bridge interface out to the external network. Edit /etc/conf.d/bridges:

 bridge_br0="eth0"
 BRIDGE_INTERFACES=(br0)

Load the tun module (it provides tap devices):

 # modprobe tun

Create a script /etc/qemu-ifup, which QEMU will use to create and bring up the network interface that gives the virtual machine outside access.

 #!/bin/sh
 
 echo "Executing /etc/qemu-ifup"
 echo "Bringing up $1 for bridged mode..."
 sudo /sbin/ifconfig $1 0.0.0.0 promisc up
 echo "Adding $1 to br0..."
 sudo /usr/sbin/brctl addif br0 $1
 sleep 2

This script should be owned by root:kvm with permissions 750, so that users in the kvm group (the users allowed to start virtual machines) can execute it as well. As a reminder, every user who needs to run KVM virtual machines through QEMU should be added to the kvm group.
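
A minimal sketch of the corresponding commands:

 chown root:kvm /etc/qemu-ifup
 chmod 750 /etc/qemu-ifup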

Because this script calls ifconfig and other commands to dynamically add and remove network interfaces for the virtual machine, it runs them through sudo for convenience. To keep sudo from asking for a password while the script runs, edit the sudoers configuration with visudo and add the following lines:

 Cmnd_Alias      QEMU=/sbin/ifconfig,/sbin/modprobe,/usr/sbin/brctl,/usr/bin/tunctl
 %kvm     ALL=NOPASSWD: QEMU

Finally, add bridge and tun to the MODULES array in /etc/rc.conf so that they are loaded at boot.
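
A sketch of the corresponding /etc/rc.conf line (keep whatever modules are already listed in place of the ellipsis):

 MODULES=(... bridge tun)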

Starting the virtual machine

With everything configured, we can try it out. Starting QEMU with a bridged network interface takes roughly three steps:

  • Request an available virtual network interface (a tap device) with tunctl.
  • Configure the device, attach it to the bridge interface, and start qemu.
  • When qemu exits, delete the virtual network interface again.

Sounds complicated? Going through all of this every time you start the virtual machine would be annoying, so we can put it in a script:

USERID=`whoami`
IFACE=`sudo tunctl -b -u $USERID`

qemu -net nic -net tap,ifname="$IFACE" $*

sudo tunctl -d $IFACE &> /dev/null

Note that because we created the /etc/qemu-ifup script earlier, we only need to pass the name of the allocated virtual interface to qemu via -net tap,ifname="interface name"; qemu will then call /etc/qemu-ifup with that interface name as its argument to bridge and configure the interface.

Graphics

QEMU can use the following graphics outputs: std, cirrus, vmware, qxl, xenfs and vnc.

With the vnc option you can run the guest standalone and connect to it over VNC. The other options use std, vmware or cirrus:

std

With -vga std you can get resolutions of up to 2560 x 1600 pixels.
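
For example, a minimal invocation (a sketch using the disk image created earlier):

qemu -vga std -hda win.qcow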

vmware

Although it is a bit quirky, this option performs better than std and cirrus. In the guest, install the VMware drivers:

pacman -S xf86-video-vmware xf86-input-vmmouse

Windows guests

If you are using a Microsoft Windows guest, you may want to connect to it using the Remote Desktop Protocol (RDP). Use the following (if you are on a VLAN or are not in the same network as the guest):

qemu -nographic -net user,hostfwd=tcp::5555-:3389

Then connect to the guest with rdesktop or freerdp, for example:

xfreerdp -g 2048x1152 localhost:5555 -z -x lan

Front-ends for QEMU

QEMU has several graphical front-ends; if you are not comfortable with the command line, you can use one of them:

  • community/qemu-launcher
  • community/qemulator
  • community/qtemu

Keyboard or arrow keys not working

If you find that some of your keys do not work or "press" the wrong key (in particular the arrow keys), you probably need to specify your keyboard layout as an option. The keyboard layouts can be found in /usr/share/qemu/keymaps.

qemu -k [keymap] [disk_image]

Starting QEMU virtual machines on boot

To run QEMU virtual machines on boot, you can use the rc script and config file below.

Config file options:
QEMU_MACHINES: list of virtual machines to start.
qemu_${vm}_type: QEMU binary to call. If specified, it is appended to /usr/bin/qemu- and the resulting binary is used to start the VM; for example, you can boot a qemu-system-arm image with qemu_my_arm_vm_type="system-arm". If not specified, /usr/bin/qemu is used.
qemu_${vm}: QEMU command line to start the VM with. The options -name ${vm} -pidfile /var/run/qemu/${vm}.pid -daemonize -nographic are always prepended.
qemu_${vm}_haltcmd: command to shut down the VM safely. I use -monitor telnet:.. and power off my VMs via ACPI by sending system_powerdown to the monitor. You can use SSH or some other way.
qemu_${vm}_haltcmd_wait: how long to wait for the VM to shut down. The default is 30 seconds; the rc script will kill the qemu process after this timeout.

Example config file:

/etc/conf.d/qemu.conf
# VMs that should be started on boot
# use the ! prefix to disable starting/stopping a VM
QEMU_MACHINES=(vm1 vm2)

# NOTE: following options will be prepended to qemu_${vm}
# -name ${vm} -pidfile /var/run/qemu/${vm}.pid -daemonize -nographic

qemu_vm1_type="system-x86_64"

qemu_vm1="-enable-kvm -m 512 -hda /dev/mapper/vg0-vm1 -net nic,macaddr=DE:AD:BE:EF:E0:00 \
 -net tap,ifname=tap0 -serial telnet:localhost:7000,server,nowait,nodelay \
 -monitor telnet:localhost:7100,server,nowait,nodelay -vnc :0"

qemu_vm1_haltcmd="echo 'system_powerdown' | nc.openbsd localhost 7100" # or netcat/ncat

# You can use other ways to shutdown your VM correctly
#qemu_vm1_haltcmd="ssh powermanager@vm1 sudo poweroff"

# By default rc-script will wait 30 seconds before killing VM. Here you can change this timeout.
#qemu_vm1_haltcmd_wait="30"

qemu_vm2="-enable-kvm -m 512 -hda /srv/kvm/vm2.img -net nic,macaddr=DE:AD:BE:EF:E0:01 \
 -net tap,ifname=tap1 -serial telnet:localhost:7001,server,nowait,nodelay \
 -monitor telnet:localhost:7101,server,nowait,nodelay -vnc :1"

qemu_vm2_haltcmd="echo 'system_powerdown' | nc.openbsd localhost 7101"

rc-script:

/etc/rc.d/qemu
#!/bin/bash
. /etc/rc.conf
. /etc/rc.d/functions

[ -f /etc/conf.d/qemu.conf ] && source /etc/conf.d/qemu.conf

PIDDIR=/var/run/qemu
QEMU_DEFAULT_FLAGS='-name ${vm} -pidfile ${PIDDIR}/${vm}.pid -daemonize -nographic'
QEMU_HALTCMD_WAIT=30

case "$1" in
  start)
    [ -d "${PIDDIR}" ] || mkdir -p "${PIDDIR}"
    for vm in "${QEMU_MACHINES[@]}"; do
       if [ "${vm}" = "${vm#!}" ]; then
         stat_busy "Starting QEMU VM: ${vm}"
         eval vm_cmdline="\$qemu_${vm}"
         eval vm_type="\$qemu_${vm}_type"

         if [ -n "${vm_type}" ]; then
           vm_cmd="/usr/bin/qemu-${vm_type}"
         else
           vm_cmd='/usr/bin/qemu'
         fi

         eval "qemu_flags=\"${QEMU_DEFAULT_FLAGS}\""

         ${vm_cmd} ${qemu_flags} ${vm_cmdline} >/dev/null
         if [  $? -gt 0 ]; then
           stat_fail
         else
           stat_done
         fi
       fi
    done
    add_daemon qemu
    ;;

  stop)
    for vm in "${QEMU_MACHINES[@]}"; do
      if [ "${vm}" = "${vm#!}" ]; then
        # check pidfile presence and permissions
        if [ ! -r "${PIDDIR}/${vm}.pid" ]; then
          continue
        fi

        stat_busy "Stopping QEMU VM: ${vm}"

        eval vm_haltcmd="\$qemu_${vm}_haltcmd"
        eval vm_haltcmd_wait="\$qemu_${vm}_haltcmd_wait"
        vm_haltcmd_wait=${vm_haltcmd_wait:-${QEMU_HALTCMD_WAIT}}
        vm_pid=$(cat ${PIDDIR}/${vm}.pid)
  
        # check process existence
        if ! kill -0 ${vm_pid} 2>/dev/null; then
          stat_done
          rm -f "${PIDDIR}/${vm}.pid"
          continue
        fi

        # Try to shutdown VM safely
        _vm_running='yes'
        if [ -n "${vm_haltcmd}" ]; then
          eval ${vm_haltcmd} >/dev/null

          _w=0
          while [ "${_w}" -lt "${vm_haltcmd_wait}" ]; do
            sleep 1
            if ! kill -0 ${vm_pid} 2>/dev/null; then
              # no such process
              _vm_running=''
              break
            fi
            _w=$((_w + 1))
          done

        else
          # No haltcmd - kill VM unsafely
          _vm_running='yes'
        fi

        if [ -n "${_vm_running}" ]; then
            # kill VM unsafely
            kill ${vm_pid} 2>/dev/null
            sleep 1
        fi

        # report status
        if kill -0 ${vm_pid} 2>/dev/null; then
          # VM is still alive
          #kill -9 ${vm_pid}
          stat_fail
        else
          stat_done
        fi

        # remove pidfile
        rm -f "${PIDDIR}/${vm}.pid"
      fi
    done
    rm_daemon qemu
    ;;

  restart)
    $0 stop
    sleep 1
    $0 start
    ;;

  *)
    echo "usage: $0 {start|stop|restart}"

esac

External links