Scripted QEMU setup without libvirt
Instead of setting up a virtual machine with libvirt, you can run one from a bash script containing little more than a QEMU command with custom parameters. This is desirable for use cases that need the flexibility of combining the VM with other scripts.
Creating the virtual machine
First install QEMU and ovmf-git (AUR), then follow the QEMU#Creating new virtualized system guide to create the hard disk image for the virtual machine. After this, proceed to create the script file for booting the VM for the operating system installation.
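For a raw image like the one used in the script below, the disk creation step boils down to a single qemu-img call; the path and size here are examples and should be adjusted to your setup:

```shell
# Create a 60 GiB raw disk image for the VM (path and size are examples)
qemu-img create -f raw /home/user/VM.img 60G
```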
Example script for the operating system installation
In the following example script the UEFI variables provided by ovmf-git (AUR) are copied to a temporary file /tmp/my_vars.bin to be used by the VM, and then QEMU is started with x86_64 architecture emulation with the following parameters:
-enable-kvm
    enables KVM full virtualization support
-m 8G
    sets the guest startup RAM size to 8 gigabytes
-smp cores=4,threads=1
    simulates an SMP system with the specified number of cores and threads
-cpu host,kvm=off
    forwards the host's CPU model info to the VM and disables exposing the hypervisor via the MSR hypervisor nodes
-vga none
    disables VGA card emulation
-monitor stdio
    sets QEMU to always use stdio for the monitor console used to control the VM power states etc.
-display none
    disables software level display output as we're using the actual GPU right from the start
-usb -usbdevice host:xxxx:xxxx
    enables the USB driver and passes through the specified input devices, like a keyboard and a mouse, identified by their vendor:product IDs
-device vfio-pci,host=xx:xx.x
    passes through the GPU and its related devices, identified by their PCI addresses
-drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/ovmf_x64.bin
    loads the OVMF firmware
-drive if=pflash,format=raw,file=/tmp/my_vars.bin
    loads the UEFI variables
-drive file=/home/user/VM.img,index=0,media=disk,if=virtio,format=raw
    tells the VM to use the newly created hard disk image
-drive file=/home/user/downloads/archlinux-2017.04.01-x86_64.iso,index=1,media=cdrom
    points to the operating system installation media ISO file to be used with the virtual CD-ROM drive
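The placeholder values above can be looked up on the host: lsusb lists the vendor:product pairs for -usbdevice, and lspci shows the PCI addresses for vfio-pci (assuming the usual usbutils and pciutils tools are installed):

```shell
# List USB devices; the "ID xxxx:xxxx" column gives the vendor:product pair
# used with -usbdevice host:xxxx:xxxx
lsusb

# List the GPU and its audio function; the leading "xx:xx.x" is the PCI
# address used with -device vfio-pci,host=xx:xx.x
lspci -nn | grep -i -e vga -e audio
```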
Backslashes are used to split the one-line command across multiple lines for better readability and easier modification later on.
vmscript.sh
#!/bin/bash

cp /usr/share/ovmf/x64/ovmf_vars_x64.bin /tmp/my_vars.bin

qemu-system-x86_64 \
  -enable-kvm \
  -m 8G \
  -smp cores=4,threads=1 \
  -cpu host,kvm=off \
  -vga none \
  -monitor stdio \
  -display none \
  -usb -usbdevice host:04d9:0125 \
  -usb -usbdevice host:046d:c05a \
  -device vfio-pci,host=07:00.0,multifunction=on \
  -device vfio-pci,host=07:00.1 \
  -drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/ovmf_x64.bin \
  -drive if=pflash,format=raw,file=/tmp/my_vars.bin \
  -drive file=/home/user/VM.img,index=0,media=disk,if=virtio,format=raw \
  -drive file=/home/user/downloads/archlinux-2017.04.01-x86_64.iso,index=1,media=cdrom
After creating the script in a preferred location, make it executable, make sure you've connected the passed-through GPU to a monitor input, and start the virtual machine by issuing /path/to/vmscript.sh or simply ./vmscript.sh inside a terminal as root. To run the script without root access, the correct permissions have to be set as explained in the next section.
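Since the point of a scripted setup is composability, parameters such as the USB passthrough flags can also be generated programmatically instead of being hard-coded. A minimal sketch (the IDs are the ones from the example script above; the variable names are just illustrative):

```shell
#!/bin/bash
# Build the repeated "-usb -usbdevice host:VID:PID" arguments from a list of
# vendor:product IDs, so new input devices only need to be added to the array.
usb_ids=("04d9:0125" "046d:c05a")
usb_args=()
for id in "${usb_ids[@]}"; do
    usb_args+=(-usb -usbdevice "host:$id")
done

# The array can then be expanded into the QEMU command line:
#   qemu-system-x86_64 "${usb_args[@]}" ...
printf '%s\n' "${usb_args[*]}"
```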
Setting device permissions to run the script as another user
In many cases it is not advisable to keep running the script as root. For other users to be able to access the VFIO and USB devices assigned to the VM, some custom udev rules need to be written. Create a new udev rule file and define the following device permissions as needed, replacing user with your actual username and the ATTRS{idVendor}/ATTRS{idProduct} values to match your USB input devices' vendor and product IDs (see Udev#List attributes of a device):
/etc/udev/rules.d/10-qemu-hw-users.rules
SUBSYSTEM=="vfio", OWNER="user"
SUBSYSTEM=="usb", ATTRS{idVendor}=="04d9", ATTRS{idProduct}=="0125", OWNER="user"
SUBSYSTEM=="usb", ATTRS{idVendor}=="046d", ATTRS{idProduct}=="c05a", OWNER="user"
Follow Udev#Loading new rules to apply the changes.
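In practice, applying the changes amounts to reloading the rules and re-triggering device events; the following (run as root) assumes the rules file above is in place:

```shell
# Reload udev rules and re-process existing devices so the new
# OWNER takes effect without a reboot
udevadm control --reload
udevadm trigger
```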
Further VM configuration
The script created earlier is a good start for running virtual machines, but it is merely a base configuration to be adapted to individual systems. For clarity it was introduced with a very minimal set of parameters to get it running, and this section describes the further customization needed to create a fully working VM environment for the complex needs of gaming systems and the like.
Audio
Getting audio to work perfectly (especially on Windows guests) is probably the most problematic part of the current state of GPU passthrough setups. You can get pretty close with software solutions, but if total audio integrity is important for your setup, consider acquiring a USB or PCI sound card to pass through to the VM.
Windows guests
Linux guests
Audio playback for Linux guests is easily configured and functionally perfect when using PulseAudio. Just add export QEMU_AUDIO_DRV="pa" before the qemu-system-x86_64 command and add a -soundhw hda switch for the audio device selection:
...
export QEMU_AUDIO_DRV="pa"
qemu-system-x86_64 \
    -soundhw hda \
    ...
While this indeed makes playback work correctly, there is an awfully noticeable input lag with unpatched QEMU when using microphones etc. with the guest. One of the simplest solutions is to ditch the audio configuration in the VM script and instead connect the guest system to the host's PulseAudio server over the network: see PulseAudio/Examples#PulseAudio over network.
IO performance
Network
It's often desirable to assign a typical LAN IP address to GPU passthrough VMs so they appear as individual devices on the network, as needed by the Steam In-Home Streaming solution, for example. This can easily be done using a network bridge interface; see QEMU#Networking for more information about the configuration.
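Assuming a bridge interface br0 has already been configured on the host as described there, attaching the VM to it is a matter of two extra QEMU parameters (a sketch; the netdev id is arbitrary):

```shell
qemu-system-x86_64 \
    ... \
    -netdev bridge,id=net0,br=br0 \
    -device virtio-net-pci,netdev=net0
```

Note that the bridge netdev uses qemu-bridge-helper, which requires the bridge to be allowed in /etc/qemu/bridge.conf.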
Autostarting the VM at boot time with a systemd service
In many use cases it can be practical to start the QEMU script automatically during every boot so the virtual machine is loaded for use without needing to access the host directly at all. This is especially useful if you want to make your VM easily available to other users for gaming etc. without giving them any access to the host system.
In the following example a systemd user service is used to start a tmux session where the script gets executed. The tmux session starts detached in the background and can be accessed later on the host, either locally or over a remote connection such as SSH.
Creating the tmux script
For initiating the tmux session, another simple script shall be created:
tmux new-session
    starts the tmux session
-d
    starts the session in detached mode
-s passthrough
    defines the session name for easier identification later
tmux send-keys '/home/user/vmscript.sh' Enter
    starts the VM script inside the tmux session, still leaving the tmux session open if the script gets stopped by interaction or by shutting down the VM
tmuxscript.sh
#!/bin/bash

tmux new-session -d -s passthrough
tmux send-keys '/home/user/vmscript.sh' Enter
Save the script to a preferred location, make it executable and run it as your user to verify that it starts up the VM correctly. You can attach to the tmux session with the command tmux attach -t passthrough.
Creating the systemd user service
A simple systemd user service file shall be created for managing the tmux session script execution:
Type=oneshot and RemainAfterExit=yes
    explained in Systemd#Service types
WantedBy=default.target
    explained in Systemd/User#How it works
~/.config/systemd/user/tmuxsession.service
[Unit]
Description=Tmux session startup with a QEMU VM script/monitor

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/home/user/tmuxscript.sh

[Install]
WantedBy=default.target
To start user services at boot time, lingering has to be enabled first:
$ loginctl enable-linger
Then the service can be managed with systemctl --user, e.g. set up to be enabled at boot:
$ systemctl --user enable tmuxsession.service
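To also start the session immediately without rebooting, and to verify that it came up, something like the following can be used:

```shell
$ systemctl --user start tmuxsession.service
$ systemctl --user status tmuxsession.service
```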