QEMU/Guest graphics acceleration

From ArchWiki

There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.

Methods for QEMU guest graphics acceleration

QXL video driver and SPICE client for display

QXL/SPICE is a high-performance display method. However, it is not designed to offer near-bare metal performance.
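As a minimal sketch, a guest can be started with a QXL virtual GPU and a SPICE server roughly as follows; the port number and disk image name are placeholders, not values from this article:

```shell
# Start a guest with a QXL virtual GPU and expose a SPICE server on
# localhost port 5930 (port and disk image are placeholders).
qemu-system-x86_64 \
    -enable-kvm \
    -m 4G \
    -vga qxl \
    -spice port=5930,disable-ticketing=on \
    -drive file=guest.qcow2,format=qcow2

# Then connect from the host with a SPICE client, for example:
# spicy -h 127.0.0.1 -p 5930
```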

PCI GPU passthrough

PCI VGA/GPU passthrough via OVMF

PCI passthrough currently seems to be the most popular method for optimal performance. This forum thread (now closed, and possibly outdated) may be of interest for troubleshooting. A KVM switch can be used to control both desktops.
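Typical preparation for passthrough is to enable the IOMMU on the kernel command line and bind the target GPU to vfio-pci before the host driver claims it. The PCI vendor:device IDs below are placeholders for your own card:

```shell
# Kernel command line (e.g. in /etc/default/grub): enable the IOMMU.
# Use amd_iommu=on instead of intel_iommu=on on AMD systems.
# GRUB_CMDLINE_LINUX_DEFAULT="... intel_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf — claim the GPU and its HDMI audio function
# for vfio-pci (the IDs here are placeholders; find yours with lspci -nn).
options vfio-pci ids=10de:1b80,10de:10f0

# Verify the binding after a reboot:
lspci -nnk -d 10de:1b80
# "Kernel driver in use: vfio-pci" indicates success.
```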

Single GPU Passthrough

Currently, PCI passthrough works out of the box only with two graphics cards. However, there is a workaround for passing through a single graphics card. The drawback of this approach is that you have to detach the graphics card from the host and use SSH to control the host from the guest.
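One way to detach the card from the host before starting the VM is libvirt's node device commands; the PCI addresses below are placeholders for your card's GPU and HDMI audio functions:

```shell
# Detach the GPU and its audio function from the host so the guest
# can claim them (addresses are placeholders; see lspci output).
virsh nodedev-detach pci_0000_01_00_0
virsh nodedev-detach pci_0000_01_00_1

# After the guest shuts down, hand the devices back to the host:
virsh nodedev-reattach pci_0000_01_00_0
virsh nodedev-reattach pci_0000_01_00_1
```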

When you start the VM, all your GUI applications will be forcibly terminated. As a workaround, you can use Xpra to detach them to another display before starting the VM and reattach them to a display after shutting the VM down.
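That workflow can be sketched with Xpra as follows; the display number and application name are arbitrary placeholders:

```shell
# Run an application under an Xpra server on a spare display
# (:100 and the application name are placeholders).
xpra start :100 --start=some-gui-app

# Detach — the application keeps running inside the Xpra server —
# then start the VM.
xpra detach :100

# After shutting the VM down, reattach the session to your screen.
xpra attach :100
```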

In case you have an NVIDIA GPU, you may need to dump your GPU's vBIOS using nvflashAUR and patch it using vBIOS Patcher.

Looking Glass

There is a fairly recent passthrough method called Looking Glass. See this guide to getting started, which provides some troubleshooting advice and user support. Looking Glass uses DXGI (Microsoft DirectX Graphics Infrastructure) to capture complete frames from the VM's passed-through video card and pass them via shared memory to the host system, where they are read (scraped) by a display client running on the bare-metal host.
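The shared-memory channel is typically an ivshmem device on the QEMU command line. The 32 MiB size and the /dev/shm path below are assumptions for illustration; the required size grows with guest resolution:

```shell
# Back the ivshmem device with a shared-memory file that the host-side
# Looking Glass client maps (size depends on guest resolution).
qemu-system-x86_64 \
    ... \
    -object memory-backend-file,id=ivshmem,share=on,mem-path=/dev/shm/looking-glass,size=32M \
    -device ivshmem-plain,memdev=ivshmem
```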

GPU Virtualization

LIBVF.IO

LibVF.IO is a virtualization framework (an alternative to libvirt) that simplifies GPU virtualization. It supports Intel (Intel GVT-g, SR-IOV), NVIDIA (NVIDIA vGPU, SR-IOV) and AMD (AMD SR-IOV) GPUs. You create a YAML configuration for each virtual machine. Currently Intel and NVIDIA GPUs are tested, with limited support for AMD. You can follow this setup guide; you can also check their wiki. For NVIDIA GPUs, you need to unlock vGPU, which can be done by installing nvidia-merged-dkmsAUR or building it yourself and placing it in LibVF.IO's optional folder.

There is also LIME (LIME Is Mediated Emulation) for running Windows applications on Linux.

This framework was tested for gaming. By default, LibVF.IO uses Looking Glass as the virtual display, but you can change that through the YAML configuration.

Tip: If you have a Ryzen CPU, you have to enable ignore_msrs to avoid a Windows BSOD. Always double-check your guest driver version. For NVIDIA GPUs, make sure the nvidia-vgpud and nvidia-vgpu-mgr services are running!
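The ignore_msrs option and the vGPU services from the tip above can be set up roughly like this:

```shell
# /etc/modprobe.d/kvm.conf — have KVM ignore unhandled MSR accesses
# instead of injecting a fault that can BSOD Windows guests.
options kvm ignore_msrs=1

# To apply without a reboot (only works while no VM is running):
# modprobe -r kvm_amd kvm && modprobe kvm kvm_amd

# Enable the NVIDIA vGPU services shipped by the vGPU driver:
systemctl enable --now nvidia-vgpud.service nvidia-vgpu-mgr.service
```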

NVIDIA vGPU

By default, NVIDIA disables vGPU on its consumer series (if you own an enterprise card, go ahead). However, you can unlock vGPU for your consumer card.

You will also need a vGPU license; however, there are some workarounds.

Follow this guide to manually set up a Windows 10 guest with NVIDIA vGPU.

SR-IOV

Single Root I/O Virtualization is under development by Intel and supported on newer NVIDIA GPU series. There are also some AMD GPUs which support this technology, such as the W7100.

Intel-specific iGVT-g extension

iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (Broadwell and newer). For more information, see Intel GVT-g.
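A hedged sketch of creating a GVT-g virtual GPU through the mediated-device (mdev) sysfs interface. The type name i915-GVTg_V5_4 is only an example and differs between CPU generations; list the mdev_supported_types directory to see what your iGPU actually offers:

```shell
# List the vGPU types the integrated GPU supports.
ls /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/

# Create a vGPU instance of one of those types; the UUID is arbitrary
# and the type name is an example that varies by hardware.
echo "$(uuidgen)" > \
    /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/i915-GVTg_V5_4/create
```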

Virgil3d virtio-gpu paravirtualized device driver

[1] virtio-gpu is a paravirtualized 3D-accelerated graphics driver, similar to the non-graphics virtio drivers (see virtio driver information and virtio Windows guest drivers). For Linux guests, virtio-gpu is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See this Reddit Arch thread and Gerd Hoffmann's blog for using this with libvirt and SPICE.
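For a Linux guest, virtio-gpu with host-side OpenGL (virgl) acceleration can be enabled roughly as follows, assuming a QEMU build with virglrenderer support; the disk image name is a placeholder:

```shell
# Start a guest with a paravirtualized virtio GPU and host-side
# OpenGL (virgl) acceleration (disk image is a placeholder).
qemu-system-x86_64 \
    -enable-kvm \
    -m 4G \
    -vga virtio \
    -display gtk,gl=on \
    -drive file=guest.qcow2,format=qcow2
```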

For Windows guests, there is very little information on virtio-gpu OpenGL drivers, but there is a report that Red Hat abandoned work on them. There is also a project summary; the DOD (Windows kernel) driver and the ICD (Windows userland) driver are available. In addition, see this Phoronix article and its comments.