<div>[[Category:Virtualization]]<br />
As PCI passthrough is quite tricky to get right (both on the hardware and software configuration sides), this page presents '''working, complete''' VFIO setups. Feel free to look up users' scripts, BIOS/UEFI configuration, configuration files and specific hardware. If you have a problem, it might have been stumbled upon by other VFIO users and fixed in the examples below.<br />
<br />
{{note|If you have got VFIO working properly, please post your own setup according to the template at the bottom.}}<br />
<br />
== Users' setups ==<br />
<br />
=== mstrthealias: Intel 7800X / X299, GTX 1070 ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel(R) Core(TM) i7-7800X CPU <br />
* '''Motherboard''': ASRock X299 Taichi (Revision: A, BIOS/UEFI Version: 1.60A)<br />
* '''GPU''': Asus STRIX GTX 1070<br />
* '''RAM''': 32GB DDR4<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': Kernel version 4.14.8-1-skx (patched crystal_khz=24000).<br />
** Custom patches:<br />
*** skylakex-crystal_khz-24000.patch (see below)<br />
** Patches used from linux-ck:<br />
*** enable_additional_cpu_optimizations_for_gcc_v4.9+_kernel_v4.13+.patch<br />
*** 0001-add-sysctl-to-disallow-unprivileged-CLONE_NEWUSER-by.patch<br />
*** 0001-e1000e-Fix-e1000_check_for_copper_link_ich8lan-retur.patch<br />
*** 0002-dccp-CVE-2017-8824-use-after-free-in-DCCP-code.patch<br />
** Config:<br />
*** PREEMPT, NO_HZ_IDLE, 300HZ, MSKYLAKE<br />
* GitHub: Link TBD<br />
* Benchmarks: https://imgur.com/a/hIfQD<br />
* Using '''libvirt/QEMU''': libvirt 3.10.0 / QEMU 2.11.0<br />
* Issues encountered and special steps taken:<br />
** Skylake-X default clock incorrect in 4.14.8 (https://bugzilla.kernel.org/show_bug.cgi?id=197299)<br />
*** Was unable to resolve the timing issue using adjtimex<br />
*** Patching the kernel source to '''crystal_khz = 24000''' resolved the timing/performance issues<br />
** Enabled 'Intel SpeedShift' in BIOS, installed '''cpupower''', set governor to ''performance''<br />
*** Verify: dmesg|grep HWP<br />
**** intel_pstate: HWP enabled<br />
** Enable HT in BIOS<br />
** Enable the 'deadline' IO scheduler:<br />
*** echo 'ACTION=="add|change", KERNEL=="sd*[!0-9]|sr*", ATTR{queue/scheduler}="deadline"' >> /etc/udev/rules.d/60-schedulers.rules<br />
** Bypass x2apic opt-out:<br />
*** GRUB_CMDLINE_LINUX="... intremap=no_x2apic_optout ..."<br />
** Isolate cores for Windows VM:<br />
*** GRUB_CMDLINE_LINUX="... isolcpus=2-5,8-11 nohz_full=2-5,8-11 rcu_nocbs=2-5,8-11 ..."<br />
** Use hugepages (2MB) for all VM memory allocation<br />
** memoryBacking: <hugepages/><nosharepages/><locked/><access mode='private'/><allocation mode='immediate'/><br />
** Extracted rom from GPU; used for <rom file=../> config<br />
** Using MSI for GPU and GPU Audio (configured in Windows registry; FPS seems same as using line-based interrupts)<br />
* Hardware setup<br />
** PCIE1: NVIDIA GeForce GT 710B (for host)<br />
** Onboard: ASRock XHCI 3.1 USB (for host)<br />
** Onboard: Intel I219 NIC (bridged)<br />
** PCIE3: Asus Xonar STX (passthrough to Win10)<br />
** PCIE5: NVIDIA GeForce GTX 1070 (passthrough to Win10)<br />
** M2_1: Samsung 960 EVO 500GB (passthrough to Win10)<br />
** Onboard: Intel XHCI USB 3.0 (passthrough to Win10)<br />
** Onboard: Intel HDA (passthrough to Win10)<br />
** Onboard: Intel I211 NIC (passthrough to Win10)<br />
** Onboard: ASRock AHCI SATA A1/A2 (passthrough to Linux)<br />
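<br />
The scheduler and isolation tweaks above can be sketched as follows. The udev rule text is taken verbatim from the setup above; a temporary file stands in for the real target {{ic|/etc/udev/rules.d/60-schedulers.rules}}, and the core lists are the ones used in this setup, so adapt them to your own topology:<br />
<br />
```shell
#!/bin/sh
# Persist the 'deadline' I/O scheduler for SATA disks via a udev rule,
# as in the setup above. Writing to a temp location for illustration;
# the real target would be /etc/udev/rules.d/60-schedulers.rules.
RULE='ACTION=="add|change", KERNEL=="sd*[!0-9]|sr*", ATTR{queue/scheduler}="deadline"'
RULES_FILE="${TMPDIR:-/tmp}/60-schedulers.rules"
printf '%s\n' "$RULE" > "$RULES_FILE"

# The matching kernel command line for core isolation and the x2APIC
# opt-out bypass (appended to GRUB_CMDLINE_LINUX in /etc/default/grub,
# then regenerate grub.cfg with grub-mkconfig):
CMDLINE="intremap=no_x2apic_optout isolcpus=2-5,8-11 nohz_full=2-5,8-11 rcu_nocbs=2-5,8-11"
echo "$CMDLINE"
```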
<br />
=== DragoonAethis: 6700K, GA-Z170X-UD3, GTX 1070 ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i7-6700K (using iGPU as the host GPU)<br />
* '''Motherboard''': Gigabyte GA-Z170X-UD3 (Revision 1.0, BIOS/UEFI Version: F23d)<br />
* '''GPU''': MSI GeForce 1070 Gaming X (10Gbps)<br />
* '''RAM''': 16GB DDR4 2400MHz<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': "Vanilla" Linux (no ACS patch needed).<br />
* Using '''libvirt''': XML domain, helper scripts, IOMMU groups, etc available in [https://github.com/DragoonAethis/VFIO my VFIO repository].<br />
* '''Guest OS''': Windows 8.1 Pro.<br />
* The entire HDD is passed to the VM as a raw device (formatted as a single NTFS partition).<br />
* USB keyboard and mouse are passed to the guest VM and shared with the host with Synergy.<br />
* Virtualized audio: PulseAudio -> local Unix socket. Previously, I had a slightly more complex setup in which PA on the host was configured to accept TCP connections, and the environment variables required for QEMU to use PA pointed at the PA server running on 127.0.0.1. This avoided having to change the QEMU user (exact details in the repo), but introduced other minor issues that I have since resolved.<br />
* Bridged networking (with NetworkManager's and [https://www.happyassassin.net/2014/07/23/bridged-networking-for-libvirt-with-networkmanager-2014-fedora-21/ this tutorial's] help) is used. {{ic|bridge0}} is created, {{ic|eth0}} interface is bound to it. STP disabled, VirtIO NIC is configured in the VM and that VM is seen in the network just as any other computer (and is being assigned an IP address from the router itself, can communicate freely with other computers).<br />
* For some reason, enabling {{ic|1=intel_iommu=on}} on the kernel command line without CSM support enabled in the UEFI causes a black screen on boot. Enable CSM ("Windows 8/10 Features" must be set appropriately for "CSM Support" to appear; selecting "Other OS" hides it).<br />
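<br />
A VirtIO NIC attached to the bridge described above would look roughly like this in the libvirt domain XML (a sketch: the bridge name matches this setup, the MAC address is a placeholder):<br />
<br />
```xml
<interface type='bridge'>
  <mac address='52:54:00:00:00:01'/>
  <source bridge='bridge0'/>
  <model type='virtio'/>
</interface>
```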
<br />
=== Manbearpig3130's Virtual Gaming Machine ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i7-6850K 3.6GHz<br />
* '''Motherboard''': Gigabyte x99-Ultra Gaming (Revision 1.0, BIOS/UEFI Version: F4)<br />
* '''Host GPU''': AMD Radeon HD6950 1GB<br />
* '''Guest GPU''': AMD R9 390 8GB<br />
* '''RAM''': 32GB G-Skill Ripjaws DDR4 running at 3200MHz<br />
<br />
Configuration:<br />
<br />
* '''Host Kernel''': Kernel version Linux 4.7.2-1.<br />
* Using '''libvirt QEMU/KVM with OVMF''': link to domain XMLs/scripts/notes: https://github.com/manbearpig3130/MBP-VT-d-gaming-machine<br />
* '''Host OS''': Arch Linux<br />
* '''Guest OS''': Windows 10 Pro<br />
* 2x 480GB SSDs set up in LVM striped mode (with mdadm), formatted as ext4 and mounted on the host, contain the guest's qcow2 VirtIO disk image.<br />
* A USB host controller is passed through, giving most USB ports to the VM and leaving my USB 3.1 controller, with an attached USB hub, for the host.<br />
* The motherboard has two NICs; one is passed into the VM (works perfectly after installing the Killer NIC driver).<br />
* VM gets dedicated 16GB RAM via static hugepages.<br />
* CPU pinning increased performance considerably.<br />
* Windows boots straight into Steam big picture mode on primary display (43" Sony Bravia). Overall an awesome gaming machine that meets my gaming needs and lust for GNU/Linux at the same time.<br />
* '''Quirks''':<br />
* I sometimes have to reinstall the AMD drivers in Windows to get HDMI audio working properly, or roll back to Windows HDMI driver. I normally use a USB headset which works fine anyway.<br />
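<br />
As a rough sketch of the static hugepages sizing above: with the default 2 MiB hugepage size, a 16 GiB guest needs 8192 pages, a value that would go into {{ic|vm.nr_hugepages}} (for example via a sysctl.d drop-in; the file name below is illustrative):<br />
<br />
```shell
#!/bin/sh
# Compute the number of 2 MiB hugepages needed for a 16 GiB guest;
# the resulting line would go into e.g. /etc/sysctl.d/40-hugepages.conf.
GUEST_MIB=$((16 * 1024))
PAGE_MIB=2
PAGES=$((GUEST_MIB / PAGE_MIB))
echo "vm.nr_hugepages = $PAGES"
```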
<br />
=== Bretos' Virtual Gaming Setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i7-7700k<br />
* '''Motherboard''': Z270 GAMING M3 (MS-7A62)<br />
* '''GPU''': ASUS GeForce GTX960<br />
* '''RAM''': Kingston HyperX 3x8GB DDR4 2.4GHz<br />
* '''Storage''': 2x Corsair MP500 m.2 240G SSDs in mdadm RAID0, 1x WD Black 1TB for storage. 100GB LVM volume as writeback cache for HDD <br />
<br />
Configuration:<br />
<br />
* '''Kernel''': vanilla<br />
* '''Host OS''': Arch Linux<br />
* '''Guest OS''': Windows 10 Pro<br />
* Using '''libvirt/QEMU''': GitHub config repository: [https://github.com/Bretos/vfio]<br />
* Issues encountered: audio. Had to get a USB audio adapter and pass it through.<br />
* No issues other than audio. Works like a charm.<br />
<br />
=== Skeen's Virtual Gaming Rack Machine ===<br />
<br />
Still work in progress.<br />
<br />
Hardware:<br />
<br />
* '''CPU''': AMD FX(tm)-8350<br />
* '''Motherboard''': MSI 970A SLI Krait Edition (MS-7693) (Revision 5.0, BIOS/UEFI Version: 25.4)<br />
* '''Host GPU''': ASUS GeForce GTX 480 1536MB<br />
* '''Guest GPU''': ASUS GeForce GTX 480 1536MB<br />
* '''RAM''': 2x8GB Kingston HyperX Fury White DDR3 1866MHz<br />
* '''Storage''': 2x250GB Samsung EVO (MZ-75E250) set up in LVM striped mode (with mdadm), 2x1TB WD Blue (WDC_WD10SPCX) for storage. 250GB LVM volume as writeback cache for HDD.<br />
<br />
Configuration:<br />
<br />
* '''Host Kernel''': Linux 4.9.0-3 (No ACS)<br />
* '''Host OS''': Debian Stretch<br />
* '''Guest OS''': Windows 10 Home (10_1703_N, International Edition)<br />
* Using '''libvirt QEMU/KVM with OVMF''': See [https://github.com/Skeen/libvirt-gpu-passthrough Github]<br />
<br />
Issues encountered:<br />
<br />
* Identical GPUs; solved [[PCI_passthrough_via_OVMF#Using_identical_guest_and_host_GPUs|using this section on the wiki]], but with the script from the [[Talk:PCI_passthrough_via_OVMF#Using_identical_guest_and_host_GPUs_-_did_not_work_for_me.|corresponding discussion page]]. Several adaptations for Debian were required too, but they are not applicable here.<br />
* "Error 43: Driver failed to load";<br />
** Spoofing vendor_id caused Windows to crash during boot-up.<br />
** Linux VMs complained about being unable to find the GPU from GRUB 2 and booted blind, but would still pick up the graphics card during the boot process and remain functional until the VM was rebooted.<br />
** Vendor_id spoofing turned out to work after solving the real problem (missing UEFI compatibility in the VBIOS).<br />
* Missing UEFI (OVMF) compatibility in the VBIOS;<br />
** Requested a GOP/UEFI-compatible VBIOS upgrade from ASUS, but ASUS could neither understand the request nor provide the upgrade (only standard support answers were supplied).<br />
** No compatible VBIOS was found at [https://www.techpowerup.com/vgabios/ TechPowerUp].<br />
** Finally solved by manually hacking GOP/UEFI support into the ROM using [http://www.win-raid.com/t892f16-AMD-and-Nvidia-GOP-update-No-requests-DIY.html GOPupd]. The current ROM was dumped within a Windows 10 VM using GPU-Z, modified using GOPupd, pulled over to Linux, and provided via the rom file parameter in the VM XML file.<br />
* VM only uses one core (even with mode=host-passthrough): solved [[PCI_passthrough_via_OVMF#VM_only_uses_one_core|using this section on the wiki]].<br />
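<br />
The vendor_id spoofing mentioned above is typically set in the libvirt domain XML along these lines, often combined with hiding KVM from the guest (a sketch, not this user's exact configuration; the vendor string is an arbitrary placeholder):<br />
<br />
```xml
<features>
  <hyperv>
    <vendor_id state='on' value='123456789ab'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```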
<br />
Quirks:<br />
<br />
* The GPU being passed through [[PCI_passthrough_via_OVMF#Passing_through_a_device_that_does_not_support_resetting|does not support resetting]], so a hard reboot or shutdown of the VM locks the GPU.<br />
** The VM cannot be started again unless the host machine is rebooted.<br />
*** After a clean reboot or shutdown, the VM starts up again as expected without a host reboot.<br />
** Removing and rescanning the PCI device does not change anything.<br />
** No further attempts at power-cycling the GPU from the host have been made (yet).<br />
* [[PCI_passthrough_via_OVMF#Passing_VM_audio_to_host_via_PulseAudio|Passing VM audio to host via PulseAudio]] results in heavy crackling.<br />
** Using [[PCI_passthrough_via_OVMF#Slowed_down_audio_pumped_through_HDMI_on_the_video_card|Message-Signaled Interrupts]] has not been attempted (yet).<br />
<br />
=== droserasprout's poor man's setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i3-6100<br />
* '''Motherboard''': ASRock H110M2 D3 (BIOS version 0603)<br />
* '''Host GPU''': Intel HD 530<br />
* '''Guest GPU''': Sapphire Radeon R7 360<br />
* '''RAM''': Apacer 8GB 75.C93DE.G040C, Kingston 4GB 99U5401-011.A00LF<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': linux-lts 4.9.67-1 (vanilla)<br />
* '''Host OS''': Arch Linux<br />
* '''Guest OS''': Windows 10 Pro 1709 (build 16299.98)<br />
* Using '''libvirt/QEMU''': See my configs and IOMMU groups on [https://github.com/droserasprout/win10-vfio-configs Github]<br />
* HDD partition is passed to the VM as a raw virtio device.<br />
* HD Audio is passed too. Works fine for both playback and recording, with no latency issues or glitches. After the VM is powered off, host audio works fine too.<br />
* The guest's latency is slightly better when CPU cores are isolated for the VM.<br />
* The i2c-dev module was added to bypass an 'EDID signature' error when switching HDMI. Without it, I had to switch video output before starting the VM for some reason.<br />
* The {{ic|1=intremap=no_x2apic_optout}} kernel option was added to bypass the motherboard firmware falsely reporting that the x2APIC method is not supported. It seems to have a strong influence on the guest's latency.<br />
* Overall performance is pretty close to the native OS setup.<br />
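<br />
The two host-side tweaks above (module autoload and the kernel parameter) can be made persistent roughly like this; the drop-in file name is illustrative:<br />
<br />
```
# /etc/modules-load.d/i2c-dev.conf - load i2c-dev at boot
i2c-dev

# /etc/default/grub - append to the kernel command line, then regenerate grub.cfg
GRUB_CMDLINE_LINUX="... intremap=no_x2apic_optout ..."
```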
<br />
=== prauat: 2xIntel(R) Xeon(R) CPU E5-2609 v4, 2xGigabyte GeForce GTX 1060 6GB G1 Gaming, Intel S2600CWTR ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': 2x Intel(R) Xeon(R) CPU E5-2609 v4<br />
* '''Motherboard''': Intel S2600CWTR (Revision ???, BIOS/UEFI Version: SE5C610.86B.01.01.0022.062820171903)<br />
* '''GPU''': 2xGigabyte GeForce GTX 1060 6GB G1 Gaming [GeForce GTX 1060 6GB] (rev a1)<br />
* '''RAM''': Samsung M393A2G40EB1-CPB 2133 MHz 64GB (4x16GB)<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': Linux 4.14.15-1-ARCH #1 SMP PREEMPT<br />
* Using '''libvirt/QEMU''': https://github.com/prauat/passvm/blob/master/generic.xml<br />
* Most important: when using the NVIDIA driver, hide virtualization from the guest with {{ic|1=<kvm><hidden state='on'/></kvm>}}.<br />
* The configuration works with an Arch Linux guest OS; still a work in progress.<br />
<br />
=== Dinkonin's virtual gaming/work setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel(R) Core(TM) i7-7700K CPU @ 4.60GHz<br />
* '''Motherboard''': MSI Z270 GAMING PRO CARBON (MS-7A63) BIOS Version: 1.80<br />
* '''GPU''': 1x Gigabyte GeForce GTX 1050 2GB (host), 1x MSI GeForce 1080 AERO 8GB(guest)<br />
* '''RAM''': 32GB DDR4<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': Linux 4.15.2-2-ARCH.<br />
* Using '''libvirt/QEMU (patched from AUR) with OVMF'''<br />
* Installed qemu-patched from the AUR because of crackling/delayed sound with PulseAudio (still hear occasional pops/clicks while gaming).<br />
* Patched the video BIOS with https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher because of the error:<br />
vfio-pci 0000:01:00.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xffff<br />
* Single monitor setup; implemented a full software KVM switch (for host and guest) as described here: https://rokups.github.io/#!pages/full-software-kvm-switch.md<br />
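<br />
The patched ROM produced above is then referenced from the GPU's hostdev entry in the domain XML, roughly like this (a sketch: the PCI address matches the one in the error message above, the file path is a placeholder):<br />
<br />
```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <rom file='/var/lib/libvirt/vbios/gpu-patched.rom'/>
</hostdev>
```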
<br />
=== pauledd's unexceptional setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i7 6700K<br />
* '''Motherboard''': Gigabyte GA-Z170N-WIFI Retail (Revision 1.0 , BIOS/UEFI Version: F20)<br />
* '''GPU''': 8GB Palit GeForce GTX 1070 Dual Aktiv PCIe 3.0 x16 (Retail)<br />
* '''RAM''': 16GB G.Skill RipJaws V DDR4-3200 DIMM CL16 Dual Kit<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': 4.15.2-gentoo<br />
* Using '''libvirt/QEMU''': libvirt-4.0.0, qemu-2.11.1, https://github.com/pauledd/GPU-Passthrough/blob/master/win10-2.xml , using vfio kernel module<br />
* Had to dump the VBIOS on the host while the GPU was normally attached and its drivers loaded (see https://stackoverflow.com/a/42441234). Had to set CPU settings manually to match my CPU (host-passthrough, sockets: 1, cores: 4, threads: 2), or some games would regularly crash; see my XML for how to insert the VBIOS. Still have audio clicking/lag with PulseAudio, but that is OK for me. No further patching etc.; works out of the box without any issues.<br />
* 3DMark Results Time Spy Graphic Score: Native Windows 10: 5564 , GPU-Passthrough: 5541<br />
<br />
=== hkk's Windows gaming machine (6700K, 1070, 16GB) ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i7-6700K 4.5GHz<br />
* '''Motherboard''': AsRock Fatality Gaming K6 Z170 (rev. 1.05)<br />
* '''Host GPU''': Intel GPU HD530 with 1GB shared memory<br />
* '''Guest GPU''': Gigabyte GeForce GTX1070 G1 Gaming 8GB<br />
* '''RAM''': 16GB G.Skill RipjawsV @ 3333 MHz CL14-15-15-31-2T [DDR4]<br />
<br />
Configuration:<br />
<br />
* '''Host Kernel''': Kernel version Linux 4.15.7-1-vfio (with ACS patch included).<br />
* Using '''libvirt QEMU/KVM with OVMF'''<br />
* '''Host OS''': Arch Linux<br />
* '''Guest OS''': Windows 10 Pro<br />
* 128GB Intel 600p SSD split into 3 partitions: 512MB for EFI, 30GB for / (Btrfs), and the rest for Windows 10, installed directly on the SSD.<br />
* Two more HDDs for Windows: 1TB and 650GB.<br />
* Passed specific devices such as the X360 controller and some individual USB ports.<br />
* One NIC behind NAT on the VM.<br />
* VM gets dedicated 8GB RAM via static hugepages.<br />
* CPU pinning increased performance considerably; the VM gets 4 of the 4 cores of my 4-core/8-thread CPU.<br />
* Windows boots on the second screen via a simple script which shuts down the display with xrandr.<br />
* Using Synergy to share mouse and keyboard between systems.<br />
* '''Quirks''':<br />
* Synergy is not perfect and does not work fully in some games.<br />
* No boot screen. The display only turns on once Windows is up and ready to go.<br />
<br />
=== sitilge's treachery ===<br />
<br />
Full info: https://git.sitilge.id.lv/sitilge/dotfiles<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i5 6600K<br />
* '''Motherboard''': Asus Z170i<br />
* '''GPU''': Gigabyte Radeon RX460 OC 2GB<br />
* '''Storage''': Samsung 850 EVO 500GB<br />
* '''RAM''': Corsair 16GB DDR4<br />
* '''Mouse, Keyboard''': Logitech M90, Vortex Pok3r<br />
<br />
Host Configuration:<br />
<br />
* '''Kernel''': linux-vfio<br />
* '''Packages''': qemu-git, virtio-win, ovmf<br />
<br />
Guest Configuration:<br />
<br />
* '''OS''': Windows 10 Pro<br />
* '''CPU''': host<br />
* '''Motherboard''': host<br />
* '''GPU''': passthrough<br />
* '''Storage''': 64GB<br />
* '''RAM''': 8GB<br />
* '''Mouse, Keyboard''': passthrough<br />
<br />
Notes:<br />
<br />
* You can easily symlink the config files using {{ic|stow -t / boot mkinitcpio}} and then run {{ic|mkinitcpio -p linux-vfio}}.<br />
* {{ic|-smp cores&#61;4}} - the guest might utilize only one core otherwise.<br />
* {{ic|-soundhw ac97}} - I'm passing motherboard audio, hence AC97. Download, unzip, and install the Realtek AC97 drivers in the guest.<br />
* Use VirtIO drivers for both block devices and network. For example, the ping went down from 250 to 50.<br />
* Mouse and keyboard passthrough solved the terrible lag problem which was present in emulation mode.<br />
* Make sure virtualization is supported and enabled in your firmware (UEFI). The option was hidden in a submenu in my case.<br />
* As trivial as it sounds, check your cables.<br />
* Be patient - it took more than 10 minutes for the guest to recognize the GPU.<br />
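<br />
A minimal sketch of how the QEMU flags from the notes above fit together (everything here is illustrative; a real invocation also needs OVMF, the passthrough devices, and disk options):<br />
<br />
```shell
#!/bin/sh
# Assemble the flags discussed above: multiple cores exposed to the
# guest (otherwise it may only use one) and AC97 audio for the
# motherboard audio path. Memory size is a placeholder.
QEMU_ARGS="-enable-kvm -cpu host -m 8G -smp cores=4 -soundhw ac97"
echo "qemu-system-x86_64 $QEMU_ARGS ..."
```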
<br />
=== chestm007's hackery ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Ryzen 7 1800x<br />
* '''Motherboard''': Asus ROG Crosshair VI (Revision 1, BIOS/UEFI Version: 3502)<br />
* '''GPU''': Asus ROG RX480oc 8GB<br />
* '''RAM''': 32GB Ripjaws 2400MHz<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': 4.16.12-1-ARCH.<br />
* Using '''libvirt/QEMU''': libvirtd (libvirt) 4.3.0, QEMU emulator version 2.12.0<br />
<br />
Notes: <br />
<br />
* Using ICH6 audio - works fine for me.<br />
* Have a working Looking Glass setup; however, I cannot get Spice to pass through the keyboard and mouse, so I am currently using a mixture of Synergy and a dedicated screen as a workaround.<br />
<br />
=== Eduxstad's Infidelity ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Ryzen 2600X @ 3.7 GHz<br />
* '''Motherboard''': ASUS PRIME B350-PLUS(BIOS/UEFI Version: 4011)<br />
* '''GPU1 (Guest)''': MSI 390 8GB @ Stock<br />
* '''GPU2 (Host)''': XFX 550 4GB @ Stock<br />
* '''RAM''': 2 x 8GB (16GB) @ 3000 MHz<br />
* '''Guest OS''': Windows 8.1 Embedded Pro<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': 4.17.3-1-ARCH (vanilla).<br />
* Using '''libvirt/QEMU''': libvirt/virt-manager (https://github.com/eduxstad/vfio-config).<br />
* Look in the repository for complete documentation of the extra steps taken.<br />
* Overview: VM managed using virt-manager, with Looking Glass for primary IO and the built-in Spice display server as backup. Passing VM audio back to PulseAudio. Using hugepages for RAM. SCSI drivers installed for hard drive support.<br />
<br />
=== Pi's vr-vm ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': i7-8700k @ 4.8 GHz<br />
* '''Motherboard''': MSI Gaming Pro Carbon (BIOS/UEFI Version: A.40/5.12)<br />
* '''GPU''': Palit RTX 2080 Ti<br />
* '''RAM''': 4x8GB G.Skill DDR4 @ 3000 MHz<br />
<br />
Configuration:<br />
<br />
* Kernel: latest mainline (rc if available)<br />
** custom built with ZFS, WireGuard<br />
** ''CONFIG_PREEMPT_VOLUNTARY=y'' to work around QEMU bug with long guest boot times<br />
* Startup scripts/additional info: https://github.com/PiMaker/Win10-VFIO<br />
* Issues encountered:<br />
** PUBG would not launch at all<br />
*** Solution: Enable the HyperV clock with <timer name='hypervclock' present='yes'/> and disable hpet with <timer name='hpet' present='no'/><br />
** VR would start to stutter badly after about 20-30 minutes of playtime (this one took me about 2 weeks to finally figure out :-)<br />
*** Solution:<br />
**** Enable invariant tsc passthrough with <feature policy='require' name='invtsc'/> (required even if using host-passthrough!)<br />
**** Enable MSI for the GPU (using tool from [https://forums.guru3d.com/threads/windows-line-based-vs-message-signaled-based-interrupts-msi-tool.378044/ here])<br />
**** Enable vAPIC and synic in the HyperV configuration<br />
**** Manually move all IRQs to host cores using qemu_fifo.sh script from my GitHub repo above<br />
* Overview: SteamVR-capable gaming and workstation rig, passing through NVIDIA GPU and onboard USB-controller (leaving an additional ASMedia USB port to the host). 22 GB hugepages memory, 10 of 12 cores (with SMT) passed through. Audio working via Scream (https://github.com/duncanthrax/scream) - with IVSHMEM, surprisingly low latency and no stutters.<br />
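<br />
The timer and TSC fixes listed above map to domain XML roughly as follows (a sketch assembled from the snippets in this section, not the complete domain definition):<br />
<br />
```xml
<features>
  <hyperv>
    <vapic state='on'/>
    <synic state='on'/>
  </hyperv>
</features>
<cpu mode='host-passthrough'>
  <feature policy='require' name='invtsc'/>
</cpu>
<clock offset='localtime'>
  <timer name='hypervclock' present='yes'/>
  <timer name='hpet' present='no'/>
</clock>
```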
<br />
=== coghex's gaming box ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': i7-8086K @ 5.0 GHz (the 8086K is just a binned 8700K)<br />
* '''Motherboard''': GIGABYTE Z370 AORUS Gaming 7 rev1.0 (BIOS/UEFI Version: F15a)<br />
* '''GPU''': GIGABYTE GV-N108TAORUSX WB-11GD AORUS GeForce GTX 1080 Ti Waterforce WB Xtreme Edition 11G @ ~2Ghz<br />
* '''RAM''': 4 x 8GB (32GB) Corsair Dominator Platinum @ 3600 MHz (XMP)<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': linux-zen-5.5.8.zen1-1<br />
* '''Modules''': raid0 raid1 md_mod ext4 vfat ahci vfio_pci vfio vfio_iommu_type1 vfio_virqfd usbhid it87 (aur version is unmaintained and the support for the ITE8686E chip on this board is limited, replace it87 source with that which is found [https://github.com/andreychernyshev/it87-8613E here] for more comprehensive support)<br />
* '''Virsh''': virsh-5.10.0<br />
* '''Qemu''': qemu-system-x86_64-4.2.0 machine='pc-i440fx-4.2'<br />
* '''Performance Services''': [[Improving_performance#irqbalance|irqbalance-1.6.0]], [[Improving_performance#Ananicy|ananicy-git-2.1.0.r22]], [[CPU_frequency_scaling#cpupower|cpupower5.5-1]]<br />
* EDIT(2020): much has changed since this setup was posted years ago and a custom kernel is no longer needed on this hardware, everything works perfectly...<br />
* scripts, libvirt XML, and personal configs can be found here: https://github.com/coghex/hoest<br />
* host boot options: intel_iommu=on iommu=pt rd.driver.pre=vfio-pci acpi_enforce_resources=lax<br />
* modprobe.d options: kvm ignore_msrs=1 (avoids critical bugs), kvm report_ignored_msrs=N (cleans up journal logs)<br />
* libvirt features: acpi, apic, kvm hidden state='on', vmport state='off'<br />
* guest hyper-v options: hv-relaxed, hv-vapic, hv-spinlocks (retries='8191'), hv-vpindex, hv-runtime, hv-synic, hv-stimer, hv-stimer-direct, hv-reset, hv-vendor_id (value='1234567890ab'), hv-frequencies, hv-reenlightenment, hv-tlbflush, hv-ipi, (hv-evmcs and hv-no-nonarch-coresharing seemingly do not work yet in virsh)<br />
* Make sure to use the multifunction field for the GPU's HDMI audio controller and set both functions to the same slot, otherwise the audio interrupts will hang. Someone should probably add that to the guide...<br />
* I'm running the clock at 100Hz; people running it at 1000Hz with the zen or ck kernel should know that the MuQSS scheduler works the same regardless of this speed, and 1000Hz just adds more useless interrupts.<br />
* CPU pinning works best for single-VM performance; the default host-passthrough works best for multiple running VMs.<br />
* On Windows, [https://github.com/CHEF-KOCH/MSI-utility MSI_util_v2] gets used after every update to reset MSI interrupts on the GPU.<br />
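<br />
The kvm modprobe.d options above would live in a drop-in along these lines (the file name is illustrative):<br />
<br />
```
# /etc/modprobe.d/kvm.conf
options kvm ignore_msrs=1 report_ignored_msrs=0
```
<br />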
Hardware Specific:<br />
<br />
* '''Fully-Functional Passthrough Devices''': this motherboard has many PCI slots, all of these devices have been working flawlessly with little setup for years now:<br />
** Inatek USB Card: KT5001 [https://www.amazon.com/Inateck-Express-15-Pin-Connector-KT5001/dp/B00FPIMJEW]<br />
** Creative Sound Card: 70SB155000001 [https://www.amazon.com/Creative-Labs-70SB155000001-Blaster-PCI-Express/dp/B01LYT7U99]<br />
** EDUP WiFi Card: AC9636GS (must use virtio usb passthrough for bluetooth functionality) [https://www.amazon.com/EDUP-3000Mbps-802-11AX-Bluetooth-EP-AC9636GS/dp/B082F5D4SM]<br />
** Intel Optane SSD: SSDPED1D480GASX [https://www.amazon.com/Intel-Optane-900P-480GB-XPoint/dp/B0772T4BVZ]<br />
** Zotac GeForce GT 710: ZT-71304-20L (this one does not seem to be available on amazon anymore, a shame since its one of the few high performance PCIEx1 cards...) [https://www.amazon.com/ZOTAC-GeForce-Profile-Graphic-ZT-71304-20L/dp/B01E9Z2D60]<br />
* None of the proprietary Gigabyte software works; in fact, it blue-screens Windows and installs itself as a startup program, forever locking you out.<br />
* If anyone else uses this exact motherboard: there are two internal USB IOMMU groups, even with the ACS patch. One includes the port labeled "USB 3.1", and the other includes all the other USB ports. This means that if you want more than just a keyboard and mouse, you will need either a USB hub plugged into the 3.1 port and passed through, or a PCIe USB card.<br />
* The two Ethernet ports are in different IOMMU groups, making this a perfect motherboard for VFIO.<br />
* The ACS patch is needed on this motherboard if you want to use two graphics cards at once in separate IOMMU groups; this sets the main GPU to PCIe x8 instead of x16.<br />
<br />
=== Roobre's VFIO setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel(R) Core(TM) i5-6600K CPU @ 3.50GHz (OC'ed to 4.50)<br />
* '''Motherboard''': ASUS ROG MAXIMUS VIII GENE, v3801<br />
* '''GPU''': EVGA GTX 1080Ti<br />
* '''RAM''': 32GB DDR4 2400 (2x Ballistix)<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': Latest -ARCH or -zen (4.17.10-1-zen at the time of writing)<br />
* Using '''libvirt/QEMU''': libvirt 4.5.0-1, qemu 2.12.0-2. Config: https://gist.github.com/roobre/d2d20cc638c5030f360b500000da0f88{{Dead link|2020|02|25}}<br />
* '''ZFS''' volumes passed as raw devices for hard drives.<br />
* '''VirtIO all the things!''' Download drivers from https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/<br />
<br />
Issues: <br />
<br />
* PulseAudio never worked well (too much crackling), so I ended up passing through a USB 3.1 PCI controller and connecting a USB audio card to it. That card is then connected to one of my motherboard's inputs and echoed using PulseAudio's {{ic|loopback}} module.<br />
<br />
* Synergy works really well. In some games (ones which take control of the mouse pointer, e.g. first-person games), you need to lock the mouse cursor to the VM window to avoid issues (the camera moving too fast).<br />
<br />
* Do not forget to add the needed snippet for the nvidia driver to run ([[PCI passthrough via OVMF#"Error 43: Driver failed to load" on Nvidia GPUs passed to Windows VMs]])<br />
<br />
=== laenco's VFIO setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Ryzen 9 3950X @ 4.15Ghz all-cores via PBO<br />
* '''Motherboard''': Asus ROG STRIX X470-F GAMING (BIOS/UEFI Version: 5406)<br />
* '''GPU1 (Guest)''': Palit GeForce GTX 1080 8GB @ Stock<br />
* '''GPU2 (Host)''': MSI RX 570 8GB @ Stock<br />
* '''RAM''': 4 x 16GB (64GB) @ 3333 MHz<br />
<br />
Configuration:<br />
<br />
* '''Guest OS''': Windows 10 Pro<br />
* '''Kernel''': 5.4.13-arch1-1-gc (-ck is also good). No ACS patch.<br />
* Using vanilla '''QEMU 4.2.0'''<br />
* AMD Ryzen currently (2020.01.20) has a bug with the smp threads option - the VM gets stuck on start.<br />
* Got the classic Nvidia Error 43 - classically fixed. Also added some CPU flags which are set automatically with kvm=on, found here: https://github.com/qemu/qemu/blob/master/target/i386/cpu.c#L4008<br />
* As pure QEMU has no option to pin CPU cores and its own threads, I use the Python script "cpu_affinity" - credits to https://github.com/zegelin/qemu-affinity/ (a copy is also in my repo). Requires debug-threads=on.<br />
* Using dynamically allocated 2MB hugepages<br />
* Hardly using VirtIO<br />
* Using a hardware USB switch (Aten US224-AT) and a many-to-one HDMI switch, which let me use one monitor, mouse, keyboard, and some USB devices, and switch them between host and guest at the press of a button.<br />
* The repo with the current major system config and the VM script can be found here: https://github.com/laenco/vfio-config<br />
<br />
=== Poncho's VFIO setup ===<br />
<br />
'''Hardware:'''<br />
<br />
* '''CPU''': Ryzen 7 2700x @ stock (PBO)<br />
* '''Motherboard''': MSI B450-A PRO MAX (BIOS/UEFI version: 7B86vM5)<br />
* '''GPU1 (Guest)''': MSI GeForce GTX 1660 Ti Gaming X 6GB @ Stock<br />
* '''GPU2 (Host)''': AsRock RX 570 8GB @ Stock<br />
* '''RAM''': 2 x 16GB @ (currently) 2666MHz<br />
<br />
'''Configuration:'''<br />
<br />
* '''Guest OS''': Windows 10 Home<br />
* '''Kernel''': 5.4.17-1-MANJARO vanilla, no ACS patch<br />
* '''libvirt 5.10.0/QEMU 4.2.0''': [https://gist.github.com/jp1995/7427b00eae14aba91a6ee2ab0d17df0a/ win10.xml gist]<br />
<br />
'''Issues I have encountered:'''<br />
<br />
The main issue that plagued me for a while was stuttering / heavy performance loss while simultaneously running processes (read: 30 Firefox tabs and a Twitch stream) on the host. I also had crashes. The crashes occurred more often in more demanding games, and less often when the host was as idle as possible. I finally solved this by changing my RAM speed from 3466MHz to 2666MHz. I have had no crashes in 2 days of gaming, and the performance loss when using the host is also less significant. I'll try slowly bumping the RAM speed back up step by step to find the point of instability and I'll edit this once I've found it.<br />
<br />
'''Describing setup loosely:''' <br />
<br />
* On the hardware side, my 620 Watt PSU is perfectly adequate, despite some early concerns. <br />
* 16 PCI lanes for the Guest card, 4 for the Host card. 8+8 is also a solution but I haven't had the need to try this.<br />
* Regarding the VM setup, I pinned and isolated 12 logical processors, leaving 4 to the host. The isolation was achieved using [https://rokups.github.io/#!pages/gaming-vm-performance.md/ these scripts.] I needed the git version of cpuset for it to work. The pinning alone didn't change performance at all.<br />
* Audio passthrough is done through the usual pulseaudio solution, I have no demonic interference, works almost perfectly. I have to plug my headset directly into the VM when I want my mic to not sound garbage. ICH9.<br />
* I did try enabling MSI on the GPU in an attempt to fix the crashes described above, but all I got was a small yet noticeable reduction in performance.<br />
* Regarding input, I got a bit lucky. My motherboard has two USB 3 ports all alone in a single IOMMU group. I got a 4-port USB switch, and my only complaint with it is that it sometimes does not pick up my mouse when switching back to the host.<br />
* No trouble at all getting the NVIDIA GPU to run in a VM; I used the general solution in the wiki, including {{ic|<nowiki><kvm><hidden state='on'/></kvm></nowiki>}}.<br />
* As for storage, I just gave the VM a whole raw SATA SSD. Benchmarking shows about a 50% performance drop, but I haven't really noticed significantly longer loading times in games. In the future I might try reinstalling Windows on a virtual image for cloning purposes and use the SSD as a game drive.<br />
* All in all, there is about a 10% performance loss in CPU intensive games, compared to bare metal. This is acceptable and I'm pretty happy with the system :)<br />
<br />
=== zane's not working box ===<br />
<br />
Hardware:<br />
<br />
* '''MacBook Pro 11,x''' (2014 Model)<br />
* '''CPU''': Intel Core i7-4770HQ<br />
* '''Motherboard''': Apple<br />
* '''GPU''': Iris Pro 5200 for host, GTX 1660 eGPU over Thunderbolt 2 for guest<br />
* '''RAM''': 16GB<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': linux-vfio 5.5.8 from the AUR<br />
* '''qemu''': 4.2.0<br />
* '''libvirt''': 5.10.0 <br />
* '''ovmf''': 1:r26976.bd85bf54c2<br />
* '''libvirt/QEMU''': [https://gist.github.com/xzn/ef338049c91d21e9c1900982b21d9d32 libvirt setup]; [https://gist.github.com/xzn/06760e0e7df6ca325d0f05979aeff3bd qemu setup]<br />
<br />
Description:<br />
* The qemu script includes lines for setting up a device-mapped file for raw disk access. 3D performance is about 40% to 80% of native depending on the application, with periodic lag spikes/stutters.<br />
<br />
Issues:<br />
* Use [https://github.com/0xbb/apple_set_os.efi apple_set_os.efi] or {{ic|spoof_osx_version}} with [https://www.rodsbooks.com/refind/configfile.html refind] to avoid black screen on start. This prevents Apple firmware from shutting down host iGPU when booting Linux/Windows.<br />
* CPU pinning for the guest is mandatory, as it removes the majority of stutters. After that, isolate the host CPU cores and pin the emulator/IO threads as well. [https://github.com/PiMaker/Win10-VFIO/blob/master/qemu_fifo.sh Pi's script] for pinning IRQ handlers also helps, as do hugepages for memory.<br />
* Kernel parameters: {{ic|1=intel_iommu=on iommu=pt pcie_acs_override=downstream pci=realloc vfio-pci.ids=10de:2184,10de:1aeb,10de:1aec,10de:1aed,8086:0d01,8086:156d,8086:156c isolcpus=0-5 nohz_full=0-5 rcu_nocbs=0-5 default_hugepagesz=1G hugepagesz=1G hugepages=12 mitigations=off pcie_aspm=off module_blacklist=nvidia audit=0 loglevel=3 quiet}}. Everything from {{ic|1=mitigations=off}} onwards is optional. {{ic|1=pci=realloc}} is mandatory, or you will get the {{ic|NVRM: This PCI I/O region assigned to your NVIDIA device is invalid: NVRM: BAR1 is 0M @ 0x0 (PCI:0000:0a:00.0)}} error in dmesg and Error 43 from the Nvidia driver in the guest.<br />
* Add {{ic|vfio_pci vfio vfio_iommu_type1 vfio_virqfd}} to your {{ic|mkinitcpio.conf}} as normal. Add {{ic|1=options kvm ignore_msrs=1}} and {{ic|1=options kvm report_ignored_msrs=N}} to your {{ic|/etc/modprobe.d/kvm.conf}} as well.<br />
* For me the ACS override patch is mandatory; it is available in linux-vfio from the AUR.<br />
* Enabling MSI for the guest GPU seemingly helps. Using an {{ic|ioh3420}} device and passing the GPU through on top of that DOES NOT seem to help, while making the PulseAudio output crackle badly. Setting {{ic|1=mixing-engine=off}} for PulseAudio also makes it crackle badly, so consider a USB sound card if needed. (I personally use the sound output on my monitor from the guest.) While I'm not sure what this option does, setting {{ic|in.buffer-length}} on the PulseAudio audiodev reduces crackling.<br />
<br />
=== Muata's VFIO setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i7-4790<br />
* '''Motherboard''': MSI B85M-G43 BIOS/UEFI Version: V3.9 (03/30/2015)<br />
* '''GPU''': NVIDIA GeForce GTX 1060 6GB (MSI Gaming+)<br />
* '''RAM''': 16GB<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': linux-zen 5.5.11-1<br />
* Using '''libvirt/QEMU''': [https://github.com/Muata/VFIO VFIO setup]<br />
* '''qemu''': 4.2.0<br />
* '''libvirt''': 5.10.0 <br />
* No issues at the moment of writing this.<br />
<br />
> I had some issues with the network; for example, I couldn't connect to Activision game servers (CoD: MW, Overwatch), but I changed the firewall settings from public to private and everything is good for now.<br />
<br />
> At first I had Windows on a .raw image and the disk was throttling a lot, so I set up RAID0 on my 2 HDDs, then created 3 partitions with LVM - 120GB for Windows, 700GB for data (games), 700GB for Linux data - and passed two of the partitions through as VirtIO-BLK. [https://wiki.archlinux.org/index.php/Software_RAID_and_LVM RAID&LVM]<br />
<br />
> Audio passthrough is done through the usual PulseAudio solution, works nicely.<br />
<br />
> For people who may be looking for how to pass through the GPU - because it's not obvious when you are doing it for the first time and it's not spelled out on the wiki: once you have passed the correct group of vfio-pci.ids, the easiest way is to add the card in Virtual Machine Manager via Add Hardware - PCI Host Device - your graphics card (for me it was 0000:01:00:0 NVIDIA Corporation GP106 [GeForce GTX 1060 6GB]).<br />
<br />
== Adding your own setup ==<br />
<br />
Add a new section with your nickname, CPU, motherboard and GPU models, then copy and paste this template to your section:<br />
<br />
{{bc|<nowiki><br />
Hardware:<br />
<br />
* '''CPU''': <br />
* '''Motherboard''': (Revision , BIOS/UEFI Version: )<br />
* '''GPU''': <br />
* '''RAM''': <br />
<br />
Configuration:<br />
<br />
* '''Kernel''': Kernel version (vanilla/CK/Zen/ACS-patched or not).<br />
* Using '''libvirt/QEMU''': link to domain XMLs/scripts/notes (Git repo preferred).<br />
* Issues you have encountered, special steps taken to make something work a bit better, etc.<br />
* Describe your setup loosely here, so that when other wiki users are looking for something, they can easily skim through available setups.<br />
</nowiki>}}<br />
<br />
Replace proper sections with your own data. Make sure to provide the exact motherboard model, revision (if possible - should be on both the motherboard itself and the box it came in) and BIOS/UEFI version you are using. Describe your exact software setup and add a link to your configuration files. (GitHub, GitLab, BitBucket, etc can host a public repository which you may update once in a while, but uploading them to pastebins is fine, too. '''Do not''' post the entire config file contents here.)</div>
Muata - https://wiki.archlinux.org/index.php?title=PCI_passthrough_via_OVMF&diff=602354 - PCI passthrough via OVMF - 2020-03-22T18:40:44Z<p>Muata: /* Passing though other devices */ update from though to through</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[ja:OVMF による PCI パススルー]]<br />
[[zh-hans:PCI passthrough via OVMF]]<br />
{{Related articles start}}<br />
{{Related|Intel GVT-g}}<br />
{{Related articles end}}<br />
<br />
The Open Virtual Machine Firmware ([https://github.com/tianocore/tianocore.github.io/wiki/OVMF OVMF]) is a project to enable UEFI support for virtual machines. Starting with Linux 3.9 and recent versions of [[QEMU]], it is now possible to pass through a graphics card, offering the VM native graphics performance, which is useful for graphics-intensive tasks.<br />
<br />
Provided you have a desktop computer with a spare GPU you can dedicate to the host (be it an integrated GPU or an old OEM card, the brands do not even need to match) and that your hardware supports it (see [[#Prerequisites]]), it is possible to have a VM of any OS with its own dedicated GPU and near-native performance. For more information on techniques see the background [https://www.linux-kvm.org/images/b/b3/01x09b-VFIOandYou-small.pdf presentation (pdf)]. <br />
<br />
== Prerequisites ==<br />
<br />
A VGA passthrough relies on a number of technologies that are not ubiquitous as of today and might not be available on your hardware. You will not be able to do this on your machine unless the following requirements are met:<br />
<br />
* Your CPU must support hardware virtualization (for kvm) and IOMMU (for the passthrough itself)<br />
** [https://ark.intel.com/Search/FeatureFilter?productType=873&0_VTD=True List of compatible Intel CPUs (Intel VT-x and Intel VT-d)]<br />
** All AMD CPUs from the Bulldozer generation and up (including Zen) should be compatible.<br />
*** CPUs from the K10 generation (2007) do not have an IOMMU, so you '''need''' to have a motherboard with a [https://support.amd.com/TechDocs/43403.pdf#page=18 890FX] or [https://support.amd.com/TechDocs/48691.pdf#page=21 990FX] chipset to make it work, as those have their own IOMMU.<br />
* Your motherboard must also support IOMMU<br />
** Both the chipset and the BIOS must support it. It is not always easy to tell at a glance whether or not this is the case, but there is a fairly comprehensive list on the matter on the [https://wiki.xen.org/wiki/VTd_HowTo Xen wiki] as well as [[Wikipedia:List of IOMMU-supporting hardware]].<br />
* Your guest GPU ROM must support UEFI.<br />
** If you can find [https://www.techpowerup.com/vgabios/ any ROM in this list] that applies to your specific GPU and is said to support UEFI, you are generally in the clear. All GPUs from 2012 and later should support this, as Microsoft made UEFI a requirement for devices to be marketed as compatible with Windows 8.<br />
<br />
You will probably want to have a spare monitor or one with multiple input ports connected to different GPUs (the passthrough GPU will not display anything if there is no screen plugged in and using a VNC or Spice connection will not help your performance), as well as a mouse and a keyboard you can pass to your VM. If anything goes wrong, you will at least have a way to control your host machine this way.<br />
<br />
== Setting up IOMMU ==<br />
<br />
{{Note|<br />
* IOMMU is a generic name for Intel VT-d and AMD-Vi.<br />
* VT-d stands for ''Intel Virtualization Technology for Directed I/O'' and should not be confused with VT-x ''Intel Virtualization Technology''. VT-x allows one hardware platform to function as multiple “virtual” platforms while VT-d improves security and reliability of the systems and also improves performance of I/O devices in virtualized environments.<br />
}}<br />
<br />
Using IOMMU opens to features like PCI passthrough and memory protection from faulty or malicious devices, see [[Wikipedia:Input-output memory management unit#Advantages]] and [https://www.quora.com/Memory-Management-computer-programming/Could-you-explain-IOMMU-in-plain-English Memory Management (computer programming): Could you explain IOMMU in plain English?].<br />
<br />
=== Enabling IOMMU ===<br />
<br />
Ensure that AMD-Vi/Intel VT-d is supported by the CPU and enabled in the BIOS settings. Both normally show up alongside other CPU features (meaning they could be in an overclocking-related menu) either with their actual names ("VT-d" or "AMD-Vi") or in more ambiguous terms such as "Virtualization technology", which may or may not be explained in the manual.<br />
<br />
Enable IOMMU support by setting the correct [[kernel parameter]] depending on the type of CPU in use:<br />
<br />
* For Intel CPUs (VT-d) set {{ic|1=intel_iommu=on}} <br />
* For AMD CPUs (AMD-Vi) set {{ic|1=amd_iommu=on}}<br />
<br />
You should also append the {{ic|1=iommu=pt}} parameter. This will prevent Linux from touching devices which cannot be passed through.<br />
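As a sketch, with [[GRUB]] on an Intel system the resulting change could look like the following (the pre-existing options shown are illustrative; adapt this to your boot loader and CPU vendor):<br />
<br />
```shell
# /etc/default/grub - append the IOMMU parameters to the existing options
# (use amd_iommu=on instead of intel_iommu=on on AMD systems)
GRUB_CMDLINE_LINUX_DEFAULT="loglevel=3 quiet intel_iommu=on iommu=pt"
```
<br />
Afterwards, regenerate the boot loader configuration (for GRUB: {{ic|grub-mkconfig -o /boot/grub/grub.cfg}}) and reboot.<br />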
<br />
After rebooting, check dmesg to confirm that IOMMU has been correctly enabled:<br />
<br />
{{hc|dmesg {{!}} grep -i -e DMAR -e IOMMU|<br />
[ 0.000000] ACPI: DMAR 0x00000000BDCB1CB0 0000B8 (v01 INTEL BDW 00000001 INTL 00000001)<br />
[ 0.000000] Intel-IOMMU: enabled<br />
[ 0.028879] dmar: IOMMU 0: reg_base_addr fed90000 ver 1:0 cap c0000020660462 ecap f0101a<br />
[ 0.028883] dmar: IOMMU 1: reg_base_addr fed91000 ver 1:0 cap d2008c20660462 ecap f010da<br />
[ 0.028950] IOAPIC id 8 under DRHD base 0xfed91000 IOMMU 1<br />
[ 0.536212] DMAR: No ATSR found<br />
[ 0.536229] IOMMU 0 0xfed90000: using Queued invalidation<br />
[ 0.536230] IOMMU 1 0xfed91000: using Queued invalidation<br />
[ 0.536231] IOMMU: Setting RMRR:<br />
[ 0.536241] IOMMU: Setting identity map for device 0000:00:02.0 [0xbf000000 - 0xcf1fffff]<br />
[ 0.537490] IOMMU: Setting identity map for device 0000:00:14.0 [0xbdea8000 - 0xbdeb6fff]<br />
[ 0.537512] IOMMU: Setting identity map for device 0000:00:1a.0 [0xbdea8000 - 0xbdeb6fff]<br />
[ 0.537530] IOMMU: Setting identity map for device 0000:00:1d.0 [0xbdea8000 - 0xbdeb6fff]<br />
[ 0.537543] IOMMU: Prepare 0-16MiB unity mapping for LPC<br />
[ 0.537549] IOMMU: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]<br />
[ 2.182790] [drm] DMAR active, disabling use of stolen memory<br />
}}<br />
<br />
=== Ensuring that the groups are valid ===<br />
<br />
The following script should allow you to see how your various PCI devices are mapped to IOMMU groups. If it does not return anything, you either have not enabled IOMMU support properly or your hardware does not support it.<br />
<br />
 #!/bin/bash<br />
 shopt -s nullglob<br />
 for g in /sys/kernel/iommu_groups/*; do<br />
     echo "IOMMU Group ${g##*/}:"<br />
     for d in $g/devices/*; do<br />
         echo -e "\t$(lspci -nns ${d##*/})"<br />
     done;<br />
 done;<br />
<br />
Example output:<br />
<br />
IOMMU Group 1:<br />
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port [8086:0151] (rev 09)<br />
IOMMU Group 2:<br />
00:14.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller [8086:0e31] (rev 04)<br />
IOMMU Group 4:<br />
00:1a.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 [8086:0e2d] (rev 04)<br />
IOMMU Group 10:<br />
00:1d.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 [8086:0e26] (rev 04)<br />
IOMMU Group 13:<br />
06:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)<br />
06:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)<br />
<br />
An IOMMU group is the smallest set of physical devices that can be passed to a virtual machine. For instance, in the example above, both the GPU in 06:00.0 and its audio controller in 06:00.1 belong to IOMMU group 13 and can only be passed together. The front USB controller, however, has its own group (group 2) which is separate from both the USB expansion controller (group 10) and the rear USB controller (group 4), meaning that [[#USB controller|any of them could be passed to a VM without affecting the others]].<br />
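The sysfs layout behind the script above can also be queried per device: each PCI device exposes an {{ic|iommu_group}} symlink pointing at its group. A minimal sketch (the PCI address {{ic|0000:06:00.0}} is just an example, and the {{ic|SYSFS}} variable exists only so the function can be exercised outside a real system):<br />
<br />
```shell
#!/bin/bash
# Look up the IOMMU group of a single PCI device through sysfs.
# SYSFS defaults to the real /sys but can be overridden for testing.
SYSFS=${SYSFS:-/sys}

iommu_group_of() {
    local dev=$1
    local link="$SYSFS/bus/pci/devices/$dev/iommu_group"
    # iommu_group is a symlink to /sys/kernel/iommu_groups/<N>
    if [ -L "$link" ]; then
        basename "$(readlink "$link")"
    else
        echo "none"    # IOMMU disabled, or no such device
    fi
}

# On a real machine: iommu_group_of 0000:06:00.0
```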
<br />
=== Gotchas ===<br />
<br />
==== Plugging your guest GPU in an unisolated CPU-based PCIe slot ====<br />
<br />
Not all PCI-E slots are the same. Most motherboards have PCIe slots provided by both the CPU and the PCH. Depending on your CPU, it is possible that your processor-based PCIe slot does not support isolation properly, in which case the PCI slot itself will appear to be grouped with the device that is connected to it.<br />
<br />
IOMMU Group 1:<br />
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)<br />
01:00.0 VGA compatible controller: NVIDIA Corporation GM107 [GeForce GTX 750] (rev a2)<br />
01:00.1 Audio device: NVIDIA Corporation Device 0fbc (rev a1)<br />
<br />
This is fine so long as only your guest GPU is included in here, such as above. Depending on what is plugged in to your other PCIe slots and whether they are allocated to your CPU or your PCH, you may find yourself with additional devices within the same group, which would force you to pass those as well. If you are ok with passing everything that is in there to your VM, you are free to continue. Otherwise, you will either need to try and plug your GPU in your other PCIe slots (if you have any) and see if those provide isolation from the rest or to install the ACS override patch, which comes with its own drawbacks. See [[#Bypassing the IOMMU groups (ACS override patch)]] for more information.<br />
<br />
{{Note|If they are grouped with other devices in this manner, PCI root ports and bridges should neither be bound to vfio at boot, nor be added to the VM.}}<br />
<br />
== Isolating the GPU ==<br />
<br />
In order to assign a device to a virtual machine, this device and all those sharing the same IOMMU group must have their driver replaced by a stub driver or a VFIO driver in order to prevent the host machine from interacting with them. In the case of most devices, this can be done on the fly right before the VM starts.<br />
<br />
However, due to their size and complexity, GPU drivers do not tend to support dynamic rebinding very well, so you cannot just have some GPU you use on the host be transparently passed to a VM without having both drivers conflict with each other. Because of this, it is generally advised to bind those placeholder drivers manually before starting the VM, in order to stop other drivers from attempting to claim it.<br />
<br />
The following section details how to configure a GPU so those placeholder drivers are bound early during the boot process, which makes said device inactive until a VM claims it or the driver is switched back. This is the preferred method, considering it has less caveats than switching drivers once the system is fully online.<br />
<br />
{{Warning|Once you reboot after this procedure, whatever GPU you have configured will no longer be usable on the host until you reverse the manipulation. Make sure the GPU you intend to use on the host is properly configured before doing this - your motherboard should be set to display using the host GPU.}}<br />
<br />
Starting with Linux 4.1, the kernel includes vfio-pci. This is a VFIO driver, meaning it fulfills the same role as pci-stub did, but it can also control devices to an extent, such as by switching them into their D3 state when they are not in use.<br />
<br />
=== Binding vfio-pci via device ID ===<br />
<br />
Vfio-pci normally targets PCI devices by ID, meaning you only need to specify the IDs of the devices you intend to pass through. For the following IOMMU group, you would want to bind vfio-pci with {{ic|10de:13c2}} and {{ic|10de:0fbb}}, which will be used as example values for the rest of this section.<br />
<br />
IOMMU Group 13:<br />
06:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)<br />
 06:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)<br />
<br />
{{Note|<br />
* You cannot specify which device to isolate using vendor-device ID pairs if the host GPU and the guest GPU share the same pair (i.e. if both are the same model). If this is your case, read [[#Using identical guest and host GPUs]] instead.<br />
* If, as noted in [[#Plugging your guest GPU in an unisolated CPU-based PCIe slot]], your pci root port is part of your IOMMU group, you '''must not''' pass its ID to {{ic|vfio-pci}}, as it needs to remain attached to the host to function properly. Any other device within that group, however, should be left for {{ic|vfio-pci}} to bind with.<br />
}}<br />
<br />
Two methods exist for providing the device IDs. Specifying them via [[kernel parameters]] has the advantage of being able to easily edit, remove, or undo any breaking changes via your boot loader:<br />
<br />
vfio-pci.ids=10de:13c2,10de:0fbb<br />
<br />
Alternatively, the IDs may be added to a modprobe conf file. Since these conf files are embedded in the initramfs image, any changes require regenerating a new image each time:<br />
<br />
{{hc|/etc/modprobe.d/vfio.conf|2=<br />
options vfio-pci ids=10de:13c2,10de:0fbb<br />
}}<br />
<br />
=== Loading vfio-pci early ===<br />
<br />
Since Arch's {{Pkg|linux}} has vfio-pci built as a module, we need to force it to load early before the graphics drivers have a chance to bind to the card. To ensure that, add {{ic|vfio_pci}}, {{ic|vfio}}, {{ic|vfio_iommu_type1}}, and {{ic|vfio_virqfd}} to [[mkinitcpio]]:<br />
<br />
{{hc|/etc/mkinitcpio.conf|2=<br />
MODULES=(... vfio_pci vfio vfio_iommu_type1 vfio_virqfd ...)<br />
}}<br />
<br />
{{Note|If you also have another driver loaded this way for [[Kernel mode setting#Early KMS start|early modesetting]] (such as {{ic|nouveau}}, {{ic|radeon}}, {{ic|amdgpu}}, {{ic|i915}}, etc.), all of the aforementioned VFIO modules must precede it.}}<br />
<br />
Also, ensure that the modconf hook is included in the HOOKS list of {{ic|mkinitcpio.conf}}:<br />
<br />
{{hc|/etc/mkinitcpio.conf|2=<br />
HOOKS=(... modconf ...)<br />
}}<br />
<br />
Since new modules have been added to the initramfs configuration, you must [[regenerate the initramfs]].<br />
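On Arch, regenerating the images for every installed kernel preset is typically a single command (run as root; see the linked page for details and alternatives):<br />
<br />
```shell
# Rebuild the initramfs for all presets in /etc/mkinitcpio.d/
mkinitcpio -P
```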
<br />
=== Verifying that the configuration worked ===<br />
<br />
Reboot and verify that vfio-pci has loaded properly and that it is now bound to the right devices.<br />
<br />
{{hc|$ dmesg {{!}} grep -i vfio|<br />
[ 0.329224] VFIO - User Level meta-driver version: 0.3<br />
[ 0.341372] vfio_pci: add [10de:13c2[ffff:ffff]] class 0x000000/00000000<br />
[ 0.354704] vfio_pci: add [10de:0fbb[ffff:ffff]] class 0x000000/00000000<br />
[ 2.061326] vfio-pci 0000:06:00.0: enabling device (0100 -> 0103)<br />
}}<br />
<br />
It is not necessary for every device (or even the expected device) from {{ic|vfio.conf}} to appear in the dmesg output. Sometimes a device does not appear in the output at boot, but is still visible and operable in the guest VM.<br />
<br />
{{hc|$ lspci -nnk -d 10de:13c2|<br />
06:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)<br />
Kernel driver in use: vfio-pci<br />
Kernel modules: nouveau nvidia<br />
}}<br />
<br />
{{hc|$ lspci -nnk -d 10de:0fbb|<br />
06:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)<br />
Kernel driver in use: vfio-pci<br />
Kernel modules: snd_hda_intel<br />
}}<br />
<br />
== Setting up an OVMF-based guest VM ==<br />
<br />
OVMF is an open-source UEFI firmware for QEMU virtual machines. While it is possible to use SeaBIOS to get similar results to an actual PCI passthrough, the setup process is different and it is generally preferable to use the EFI method if your hardware supports it.<br />
<br />
=== Configuring libvirt ===<br />
<br />
[[Libvirt]] is a wrapper for a number of virtualization utilities that greatly simplifies the configuration and deployment process of virtual machines. In the case of KVM and QEMU, the frontend it provides allows us to avoid dealing with the permissions for QEMU and make it easier to add and remove various devices on a live VM. Its status as a wrapper, however, means that it might not always support all of the latest qemu features, which could end up requiring the use of a wrapper script to provide some extra arguments to QEMU.<br />
<br />
Install {{Pkg|qemu}}, {{Pkg|libvirt}}, {{Pkg|ovmf}}, and {{Pkg|virt-manager}}.<br />
<br />
You can now [[enable]] and [[start]] {{ic|libvirtd.service}} and its logging component {{ic|virtlogd.socket}}.<br />
<br />
You may also need to [https://wiki.libvirt.org/page/Networking#NAT_forwarding_.28aka_.22virtual_networks.22.29 activate the default libvirt network].<br />
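Taken together, the service and network setup can be sketched as follows (run as root; {{ic|default}} is the name of libvirt's stock NAT network):<br />
<br />
```shell
# Enable and immediately start libvirt and its logging socket
systemctl enable --now libvirtd.service virtlogd.socket

# Activate the default NAT network now and on every boot
virsh net-start default
virsh net-autostart default
```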
<br />
=== Setting up the guest OS ===<br />
<br />
The process of setting up a VM using {{ic|virt-manager}} is mostly self-explanatory, as most of the process comes with fairly comprehensive on-screen instructions.<br />
<br />
If using {{ic|virt-manager}}, you have to add your user to the {{ic|libvirt}} [[user group]] to ensure authentication.<br />
<br />
However, you should pay special attention to the following steps:<br />
<br />
* When the VM creation wizard asks you to name your VM (final step before clicking "Finish"), check the "Customize before install" checkbox.<br />
* In the "Overview" section, [https://i.imgur.com/73r2ctM.png set your firmware to "UEFI"]. If the option is grayed out, make sure that:<br />
** Your hypervisor is running as a system session and not a user session. This can be verified [https://i.ibb.co/N1XZCdp/Deepin-Screenshot-select-area-20190125113216.png by clicking, then hovering] over the session in virt-manager. If you are accidentally running it as a user session, you must open a new connection by clicking "File" > "Add Connection..", then selecting the "QEMU/KVM" option from the drop-down menu, and not "QEMU/KVM user session".<br />
* In the "CPUs" section, change your CPU model to "host-passthrough". If it is not in the list, you will have to type it by hand. This will ensure that your CPU is detected properly, since it causes libvirt to expose your CPU capabilities exactly as they are instead of only those it recognizes (which is the preferred default behavior to make CPU behavior easier to reproduce). Without it, some applications may complain about your CPU being of an unknown model.<br />
* If you want to minimize IO overhead, go into "Add Hardware" and add a Controller for SCSI drives of the "VirtIO SCSI" model. You can then change the default IDE disk for a SCSI disk, which will bind to said controller.<br />
** Windows VMs will not recognize those drives by default, so you need to download the ISO containing the drivers from [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/ here] and add an IDE (or SATA for Windows 8.1 and newer) CD-ROM storage device linking to said ISO, otherwise you will not be able to get Windows to recognize it during the installation process. When prompted to select a disk to install windows on, load the drivers contained on the CD-ROM under ''vioscsi''.<br />
<br />
The rest of the installation process will take place as normal using a standard QXL video adapter running in a window. At this point, there is no need to install additional drivers for the rest of the virtual devices, since most of them will be removed later on. Once the guest OS is done installing, simply turn off the virtual machine. It is possible you will be dropped into the UEFI menu instead of starting the installation upon powering on your VM for the first time. Sometimes the correct ISO file is not automatically detected and you will need to manually specify the drive to boot from. By typing {{ic|exit}} and navigating to "Boot Manager" you will enter a menu that allows you to choose between devices.<br />
<br />
=== Attaching the PCI devices ===<br />
<br />
With the installation done, it is now possible to edit the hardware details in libvirt and remove virtual integration devices, such as the spice channel and virtual display, the QXL video adapter, the emulated mouse and keyboard and the USB tablet device. Since that leaves you with no input devices, you may want to bind a few USB host devices to your VM as well, but remember to '''leave at least one mouse and/or keyboard assigned to your host''' in case something goes wrong with the guest. At this point, it also becomes possible to attach the PCI device that was isolated earlier; simply click on "Add Hardware" and select the PCI Host Devices you want to pass through. If everything went well, the screen plugged into your GPU should show the OVMF splash screen and your VM should start up normally. From there, you can set up the drivers for the rest of your VM.<br />
<br />
=== Passing keyboard/mouse via Evdev ===<br />
<br />
If you do not have a spare mouse or keyboard to dedicate to your guest, and you do not want to suffer from the video overhead of Spice, you can set up evdev to swap control of your mouse and keyboard between the host and guest on the fly.<br />
<br />
First, modify the libvirt configuration<br />
<br />
{{hc|$ virsh edit [vmname]|<br />
<domain type<nowiki>=</nowiki>'kvm'><br />
}}<br />
<br />
to<br />
<br />
{{hc|$ virsh edit [vmname]|<br />
<domain type<nowiki>=</nowiki>'kvm' xmlns:qemu<nowiki>='http://libvirt.org/schemas/domain/qemu/1.0'</nowiki>><br />
}}<br />
<br />
Next, find your keyboard and mouse devices in {{ic|/dev/input/by-id/}}. You may find multiple devices associated with your mouse or keyboard, so try {{ic|cat /dev/input/by-id/''device_id''}} and either hit some keys on the keyboard or wiggle your mouse to see if input comes through; if it does, you have the right device. Now add those devices to your configuration right before the closing </domain> tag:<br />
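The candidates can also be narrowed down programmatically: only the {{ic|*-event-*}} nodes in that directory are evdev devices usable here. A minimal sketch (the {{ic|DEVDIR}} variable is parameterized purely so the loop can be tested without real hardware):<br />
<br />
```shell
#!/bin/bash
# List evdev candidates for passthrough from a by-id directory.
DEVDIR=${DEVDIR:-/dev/input/by-id}

list_event_devices() {
    local d
    for d in "$DEVDIR"/*-event-*; do
        # nullglob is not assumed; skip the unexpanded pattern
        [ -e "$d" ] && basename "$d"
    done
}

# On a real system, verify a candidate interactively, e.g.:
#   cat "$DEVDIR/usb-SomeVendor_Keyboard-event-kbd"   # then press a key
```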
<br />
{{hc|$ virsh edit [vmname]|<nowiki><br />
...<br />
<qemu:commandline><br />
<qemu:arg value='-object'/><br />
<qemu:arg value='input-linux,id=mouse1,evdev=/dev/input/by-id/MOUSE_NAME'/><br />
<qemu:arg value='-object'/><br />
<qemu:arg value='input-linux,id=kbd1,evdev=/dev/input/by-id/KEYBOARD_NAME,grab_all=on,repeat=on'/><br />
</qemu:commandline><br />
...<br />
</nowiki>}}<br />
<br />
Replace {{ic|MOUSE_NAME}} and {{ic|KEYBOARD_NAME}} with your device IDs. You will also need to include these devices in your qemu configuration, setting the user and group to ones that have access to your input devices:<br />
<br />
{{hc|/etc/libvirt/qemu.conf|<nowiki><br />
...<br />
user = "<your_user>"<br />
group = "kvm"<br />
...<br />
cgroup_device_acl = [<br />
"/dev/kvm",<br />
"/dev/input/by-id/KEYBOARD_NAME",<br />
"/dev/input/by-id/MOUSE_NAME",<br />
"/dev/null", "/dev/full", "/dev/zero",<br />
"/dev/random", "/dev/urandom",<br />
"/dev/ptmx", "/dev/kvm", "/dev/kqemu",<br />
"/dev/rtc","/dev/hpet", "/dev/sev"<br />
]<br />
...<br />
</nowiki>}}<br />
<br />
Then ensure that the user you provided has access to the {{ic|kvm}} and {{ic|input}} [[user group]]s. [[Restart]] {{ic|libvirtd.service}}. Now you can start up the guest OS and test swapping control of your mouse and keyboard between the host and guest by pressing both the left and right control keys at the same time.<br />
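As a sketch, the group membership and service restart amount to the following (run as root; replace {{ic|your_user}} with the user set in {{ic|qemu.conf}}):<br />
<br />
```shell
# Add the user to the groups owning /dev/kvm and the evdev nodes
usermod -aG kvm,input your_user

# Restart libvirt so the qemu.conf changes take effect
systemctl restart libvirtd.service
```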
<br />
You may also consider switching from PS/2 to Virtio inputs in your configurations:<br />
<br />
{{hc|$ virsh edit [vmname]|<nowiki><br />
...<br />
<input type='mouse' bus='virtio'><br />
<address type='pci' domain='0x0000' bus='0x00' slot='0x0e' function='0x0'/><br />
</input><br />
<input type='keyboard' bus='virtio'><br />
<address type='pci' domain='0x0000' bus='0x00' slot='0x0f' function='0x0'/><br />
</input><br />
<input type='mouse' bus='ps2'/><br />
<input type='keyboard' bus='ps2'/><br />
...<br />
</nowiki>}}<br />
<br />
Next, start up the guest OS and install the VirtIO drivers for those devices.<br />
<br />
=== Gotchas ===<br />
<br />
==== Using a non-EFI image on an OVMF-based VM ====<br />
<br />
The OVMF firmware does not support booting off non-EFI mediums. If the installation process drops you into a UEFI shell right after booting, you may have an invalid EFI boot medium. Try using an alternate Linux/Windows image to determine whether your medium is invalid.<br />
<br />
== Performance tuning ==<br />
<br />
Most use cases for PCI passthrough relate to performance-intensive domains such as video games and GPU-accelerated tasks. While a PCI passthrough on its own is a step towards reaching native performance, there are still a few adjustments on the host and guest needed to get the most out of your VM.<br />
<br />
=== CPU pinning ===<br />
<br />
The default behavior for KVM guests is to run operations coming from the guest as a number of threads representing virtual processors. Those threads are managed by the Linux scheduler like any other thread and are dispatched to any available CPU cores based on niceness and priority queues. As such, the local CPU cache benefits (L1/L2) are lost each time the host scheduler reschedules the virtual CPU thread on a different physical CPU. This can noticeably harm performance on the guest. CPU pinning aims to resolve this by limiting which physical CPUs the virtual CPUs are allowed to run on. The ideal setup is a one to one mapping such that the virtual CPU cores match physical CPU cores while taking hyperthreading/SMT into account.<br />
<br />
{{Note|For certain users enabling CPU pinning may introduce stuttering and short hangs, especially with the MuQSS scheduler (present in linux-ck and linux-zen kernels). You might want to try disabling pinning first if you experience similar issues, which effectively trades maximum performance for responsiveness at all times.}}<br />
<br />
==== CPU topology ====<br />
<br />
Most modern CPUs support hardware multitasking, also known as hyper-threading on Intel CPUs or SMT on AMD CPUs. Hyper-threading/SMT is simply a very efficient way of running two threads on one CPU core at any given time. You will want to take into consideration that the CPU pinning you choose will greatly depend on what you do with your host while your VM is running.<br />
<br />
To find the topology for your CPU run {{ic|1=lscpu -e}}:<br />
<br />
{{Note|Pay special attention to the 4th column '''"CORE"''' as this shows the association of the Physical/Logical CPU cores}}<br />
<br />
{{ic|lscpu -e}} on a 6c/12t Ryzen 5 1600:<br />
<br />
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ<br />
0 0 0 0 0:0:0:0 yes 3800.0000 1550.0000<br />
1 0 0 0 0:0:0:0 yes 3800.0000 1550.0000<br />
2 0 0 1 1:1:1:0 yes 3800.0000 1550.0000<br />
3 0 0 1 1:1:1:0 yes 3800.0000 1550.0000<br />
4 0 0 2 2:2:2:0 yes 3800.0000 1550.0000<br />
5 0 0 2 2:2:2:0 yes 3800.0000 1550.0000<br />
6 0 0 3 3:3:3:1 yes 3800.0000 1550.0000<br />
7 0 0 3 3:3:3:1 yes 3800.0000 1550.0000<br />
8 0 0 4 4:4:4:1 yes 3800.0000 1550.0000<br />
9 0 0 4 4:4:4:1 yes 3800.0000 1550.0000<br />
10 0 0 5 5:5:5:1 yes 3800.0000 1550.0000<br />
11 0 0 5 5:5:5:1 yes 3800.0000 1550.0000<br />
<br />
{{Note|The Ryzen 3000 ComboPi AGESA changes the topology to match the Intel example below, even on previous-generation CPUs. The output above is only valid on older AGESA versions.}}<br />
<br />
{{ic|lscpu -e}} on a 6c/12t Intel 8700k:<br />
<br />
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ<br />
0 0 0 0 0:0:0:0 yes 4600.0000 800.0000<br />
1 0 0 1 1:1:1:0 yes 4600.0000 800.0000<br />
2 0 0 2 2:2:2:0 yes 4600.0000 800.0000<br />
3 0 0 3 3:3:3:0 yes 4600.0000 800.0000<br />
4 0 0 4 4:4:4:0 yes 4600.0000 800.0000<br />
5 0 0 5 5:5:5:0 yes 4600.0000 800.0000<br />
6 0 0 0 0:0:0:0 yes 4600.0000 800.0000<br />
7 0 0 1 1:1:1:0 yes 4600.0000 800.0000<br />
8 0 0 2 2:2:2:0 yes 4600.0000 800.0000<br />
9 0 0 3 3:3:3:0 yes 4600.0000 800.0000<br />
10 0 0 4 4:4:4:0 yes 4600.0000 800.0000<br />
11 0 0 5 5:5:5:0 yes 4600.0000 800.0000<br />
<br />
As seen above, AMD assigns '''Core 0''' to the sequential '''CPU 0 & 1''', whereas Intel places '''Core 0''' on '''CPU 0 & 6'''.<br />
<br />
If you do not need all cores for the guest, it is preferable to leave at least one core for the host. Choosing which cores to use for the host or guest should be based on the specific hardware characteristics of your CPU; however, '''Core 0''' is a good choice for the host in most cases. If any cores are reserved for the host, it is recommended to pin the emulator and iothreads, if used, to the host cores rather than the VCPUs. This may improve performance and reduce latency for the guest, since those threads will not pollute the cache or contend for scheduling with the guest VCPU threads. If all cores are passed to the guest, there is no need or benefit to pinning the emulator or iothreads.<br />
<br />
==== XML examples ====<br />
<br />
{{Note|Do not use the '''iothread''' lines from the XML examples shown below if you have not added an '''iothread''' to your disk controller. '''iothread''''s only work on '''virtio-scsi''' or '''virtio-blk''' devices.}}<br />
<br />
===== 4c/1t CPU w/o Hyperthreading Example =====<br />
<br />
{{hc|$ virsh edit [vmname]|<nowiki><br />
...<br />
<vcpu placement='static'>4</vcpu><br />
<cputune><br />
<vcpupin vcpu='0' cpuset='0'/><br />
<vcpupin vcpu='1' cpuset='1'/><br />
<vcpupin vcpu='2' cpuset='2'/><br />
<vcpupin vcpu='3' cpuset='3'/><br />
</cputune><br />
...<br />
</nowiki>}}<br />
<br />
===== 4c/2t Intel CPU pinning example =====<br />
<br />
{{hc|$ virsh edit [vmname]|<nowiki><br />
...<br />
<vcpu placement='static'>8</vcpu><br />
<iothreads>1</iothreads><br />
<cputune><br />
<vcpupin vcpu='0' cpuset='2'/><br />
<vcpupin vcpu='1' cpuset='8'/><br />
<vcpupin vcpu='2' cpuset='3'/><br />
<vcpupin vcpu='3' cpuset='9'/><br />
<vcpupin vcpu='4' cpuset='4'/><br />
<vcpupin vcpu='5' cpuset='10'/><br />
<vcpupin vcpu='6' cpuset='5'/><br />
<vcpupin vcpu='7' cpuset='11'/><br />
<emulatorpin cpuset='0,6'/><br />
<iothreadpin iothread='1' cpuset='0,6'/><br />
</cputune><br />
...<br />
<topology sockets='1' cores='4' threads='2'/><br />
...<br />
</nowiki>}}<br />
<br />
===== 4c/2t AMD CPU example (Before ComboPi AGESA) =====<br />
<br />
{{hc|$ virsh edit [vmname]|<nowiki><br />
...<br />
<vcpu placement='static'>8</vcpu><br />
<iothreads>1</iothreads><br />
<cputune><br />
<vcpupin vcpu='0' cpuset='2'/><br />
<vcpupin vcpu='1' cpuset='3'/><br />
<vcpupin vcpu='2' cpuset='4'/><br />
<vcpupin vcpu='3' cpuset='5'/><br />
<vcpupin vcpu='4' cpuset='6'/><br />
<vcpupin vcpu='5' cpuset='7'/><br />
<vcpupin vcpu='6' cpuset='8'/><br />
<vcpupin vcpu='7' cpuset='9'/><br />
<emulatorpin cpuset='0-1'/><br />
<iothreadpin iothread='1' cpuset='0-1'/><br />
</cputune><br />
...<br />
<topology sockets='1' cores='4' threads='2'/><br />
...<br />
</nowiki>}}<br />
<br />
{{Note|If further CPU isolation is needed, consider using the '''isolcpus''' kernel command-line parameter on the unused physical/logical cores.}}<br />
<br />
If you do not intend to be doing any computation-heavy work on the host (or even anything at all) at the same time as you would on the VM, you may want to pin your VM threads across all of your cores, so that the VM can fully take advantage of the spare CPU time the host has available. Be aware that pinning all physical and logical cores of your CPU could induce latency in the guest VM.<br />
<br />
=== Huge memory pages ===<br />
<br />
When dealing with applications that require large amounts of memory, memory latency can become a problem since the more memory pages are being used, the more likely it is that this application will attempt to access information across multiple memory "pages", which is the base unit for memory allocation. Resolving the actual address of the memory page takes multiple steps, and so CPUs normally cache information on recently used memory pages to make subsequent uses on the same pages faster. Applications using large amounts of memory run into a problem where, for instance, a virtual machine uses 4 GiB of memory divided into 4 KiB pages (which is the default size for normal pages) for a total of 1.04 million pages, meaning that such cache misses can become extremely frequent and greatly increase memory latency. Huge pages exist to mitigate this issue by giving larger individual pages to those applications, increasing the odds that multiple operations will target the same page in succession.<br />
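The page-count arithmetic above can be checked with a few lines of shell:

```shell
# Compare the number of pages needed to back a 4 GiB allocation with
# regular 4 KiB pages versus 2 MiB huge pages.
mem_bytes=$((4 * 1024 * 1024 * 1024))            # 4 GiB in bytes
small_pages=$((mem_bytes / 4096))                # regular 4 KiB pages
huge_pages=$((mem_bytes / (2 * 1024 * 1024)))    # 2 MiB huge pages
echo "$small_pages 4KiB pages vs $huge_pages 2MiB huge pages"
# prints: 1048576 4KiB pages vs 2048 2MiB huge pages
```

With huge pages, the CPU's page-resolution cache has to track roughly 500 times fewer entries for the same amount of memory.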
<br />
==== Transparent huge pages ====<br />
<br />
QEMU will use 2 MiB sized transparent huge pages automatically, without any explicit configuration in QEMU or libvirt, subject to some important caveats. When using VFIO, the pages are locked in and transparent huge pages are allocated up front when the VM first boots. If the kernel memory is highly fragmented, or the VM is using a majority of the remaining free memory, it is likely that the kernel will not have enough 2 MiB pages to fully satisfy the allocation. In such a case, it silently fails by using a mix of 2 MiB and 4 KiB pages. Since the pages are locked in VFIO mode, the kernel is also unable to convert those 4 KiB pages to huge pages after the VM starts. The number of 2 MiB huge pages available to THP is the same as the number available via the [[#Dynamic huge pages]] mechanism described below.<br />
<br />
To check how much memory THP is using globally:<br />
<br />
{{hc|$ grep AnonHugePages /proc/meminfo|<br />
AnonHugePages: 8091648 kB<br />
}}<br />
<br />
To check a specific QEMU instance, substitute QEMU's PID into the grep command:<br />
<br />
{{hc|$ grep -P 'AnonHugePages:\s+(?!0)\d+' /proc/[PID]/smaps|<br />
AnonHugePages: 8087552 kB<br />
}}<br />
<br />
In this example, the VM was allocated 8388608 KiB of memory, but only 8087552 KiB was backed by THP. The remaining 301056 KiB are allocated as 4 KiB pages. Aside from checking manually, there is no indication when partial allocations occur. As such, THP's effectiveness very much depends on the host system's memory fragmentation at the time of VM startup. If this trade-off is unacceptable or strict guarantees are required, [[#Static huge pages]] are recommended.<br />
<br />
Arch kernels have THP compiled in and enabled by default with {{ic|1=/sys/kernel/mm/transparent_hugepage/enabled}} set to {{ic|1=madvise}} mode.<br />
<br />
==== Static huge pages ====<br />
<br />
While transparent huge pages should work in the vast majority of cases, they can also be allocated statically during boot. This should only be needed to make use of 1 GiB huge pages on machines that support them, since transparent huge pages normally only go up to 2 MiB.<br />
<br />
{{Warning|Static huge pages lock down the allocated amount of memory, making it unavailable for applications that are not configured to use them. Allocating 4 GiBs worth of huge pages on a machine with 8 GiB of memory will only leave you with 4 GiB of available memory on the host '''even when the VM is not running'''.}}<br />
<br />
To allocate huge pages at boot, one must simply specify the desired amount on their kernel command line with {{ic|1=hugepages=''x''}}. For instance, reserving 1024 pages with {{ic|1=hugepages=1024}} and the default size of 2048 KiB per huge page creates 2 GiB worth of memory for the virtual machine to use.<br />
<br />
If supported by the CPU, the huge page size can be set manually. 1 GiB huge page support can be verified with {{ic|grep pdpe1gb /proc/cpuinfo}}. To set a 1 GiB huge page size, use the kernel parameters {{ic|1=default_hugepagesz=1G hugepagesz=1G hugepages=X}}.<br />
<br />
Also, since static huge pages can only be used by applications that specifically request them, you must add this section to your libvirt domain configuration to allow KVM to benefit from them:<br />
<br />
{{hc|$ virsh edit [vmname]|<nowiki><br />
...<br />
<memoryBacking><br />
<hugepages/><br />
</memoryBacking><br />
...<br />
</nowiki>}}<br />
<br />
==== Dynamic huge pages ====<br />
<br />
{{Accuracy|Further testing is needed to determine whether this variant is as effective as the static one.}}<br />
<br />
Huge pages can be allocated manually via the {{ic|vm.nr_overcommit_hugepages}} [[sysctl]] parameter.<br />
<br />
{{hc|/etc/sysctl.d/10-kvm.conf|2=<br />
vm.nr_hugepages = 0<br />
vm.nr_overcommit_hugepages = ''num''<br />
}}<br />
<br />
Where {{ic|''num''}} is the number of huge pages, whose default size is 2 MiB.<br />
The pages will be allocated automatically, and freed after the VM stops.<br />
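As a minimal sketch, the value of ''num'' for a VM of a given size follows directly from the 2 MiB default page size:

```shell
# Number of 2 MiB huge pages needed to back a guest's memory.
# vm_mem_mib is an example value; substitute your VM's memory size.
vm_mem_mib=8192
num=$((vm_mem_mib / 2))
echo "$num"   # an 8 GiB guest needs 4096 pages
```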
<br />
A more manual way:<br />
<br />
# echo ''num'' > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages<br />
# echo ''num'' > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages<br />
<br />
For 2 MiB and 1 GiB page sizes respectively.<br />
They should be freed manually in the same way.<br />
<br />
It is highly recommended to drop caches, compact memory and wait a couple of seconds before starting the VM, as otherwise there may not be enough free contiguous memory for the required huge page blocks, especially after the host system has been up for a while.<br />
<br />
# echo 3 > /proc/sys/vm/drop_caches<br />
# echo 1 > /proc/sys/vm/compact_memory<br />
<br />
In theory, 1 GiB pages work the same way as 2 MiB ones. In practice, however, no guaranteed way was found to obtain contiguous 1 GiB memory blocks: each consecutive request for 1 GiB blocks resulted in fewer and fewer pages being dynamically allocated.<br />
<br />
=== CPU frequency governor ===<br />
<br />
Depending on the way your [[CPU frequency scaling|CPU governor]] is configured, the VM threads may not hit the CPU load thresholds for the frequency to ramp up. Indeed, KVM cannot change the CPU frequency on its own, which can be a problem if it does not scale up with vCPU usage, as this results in underwhelming performance. An easy way to see if it behaves correctly is to check whether the frequency reported by {{ic|watch lscpu}} goes up when running a CPU-intensive task on the guest. If you are experiencing stutter and the frequency does not reach its reported maximum, it may be due to [https://lime-technology.com/forum/index.php?topic=46664.msg447678#msg447678 CPU scaling being controlled by the host OS]. In this case, try setting all cores to maximum frequency to see if this improves performance. Note that if you are using a modern Intel chip with the default pstate driver, cpupower commands will be [[CPU frequency scaling#CPU frequency driver|ineffective]], so monitor {{ic|/proc/cpuinfo}} to make sure your CPU is actually at its maximum frequency.<br />
<br />
=== Isolating pinned CPUs ===<br />
<br />
CPU pinning by itself won't prevent other host processes from running on the pinned CPUs. Properly isolating the pinned CPUs can reduce latency in the guest VM.<br />
<br />
==== With isolcpus kernel parameter ====<br />
<br />
In this example, let us assume you are using CPUs 4-7.<br />
Use the [[kernel parameters]] {{ic|isolcpus nohz_full rcu_nocbs}} to completely isolate the CPUs from the kernel. For example:<br />
<br />
isolcpus=4-7 nohz_full=4-7 rcu_nocbs=4-7<br />
<br />
Then, run {{ic|qemu-system-x86_64}} with taskset and chrt:<br />
<br />
# chrt -r 1 taskset -c 4-7 qemu-system-x86_64 ...<br />
<br />
The {{ic|chrt}} command will ensure that the task scheduler will round-robin distribute work (otherwise it will all stay on the first cpu). For {{ic|taskset}}, the CPU numbers can be comma- and/or dash-separated, like "0,1,2,3" or "0-4" or "1,7-8,10" etc.<br />
<br />
See [https://www.removeddit.com/r/VFIO/comments/6vgtpx/high_dpc_latency_and_audio_stuttering_on_windows/dm0sfto/ this Removeddit mirror of a Reddit thread] for more info. ([https://www.reddit.com/r/VFIO/comments/6vgtpx/high_dpc_latency_and_audio_stuttering_on_windows/dm0sfto/ The original thread] is worthless because of deleted comments.)<br />
<br />
==== Dynamically isolating CPUs ====<br />
<br />
The isolcpus kernel parameter will permanently reserve CPU cores, even when the guest isn't running. A more flexible alternative is to use the cset tool from {{AUR|cpuset-git}} to dynamically isolate CPUs when starting the guest.<br />
<br />
See this [https://www.redhat.com/archives/vfio-users/2016-September/msg00072.html vfio-users post] for more info, as well as this [https://rokups.github.io/#!pages/gaming-vm-performance.md blog post] and this [https://github.com/PassthroughPOST/VFIO-Tools/blob/master/libvirt_hooks/hooks/cset.sh script] for working examples.<br />
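As a rough sketch, assuming the {{ic|cset}} tool from the package above is installed, the guest's cores can be shielded before the VM starts and released once it stops (using CPUs 4-7 as in the previous example):

 # cset shield --cpu 4-7 --kthread on
 (start and use the VM)
 # cset shield --reset

The exact invocation may differ between cpuset versions; see the linked posts for complete libvirt hook scripts.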
<br />
=== Improving performance on AMD CPUs ===<br />
<br />
Previously, Nested Page Tables (NPT) had to be disabled on AMD systems running KVM to improve GPU performance because of a [https://sourceforge.net/p/kvm/bugs/230/ very old bug], but the trade off was decreased CPU performance, including stuttering.<br />
<br />
There is a [https://patchwork.kernel.org/patch/10027525/ kernel patch] that resolves this issue, which was accepted into kernel 4.14-stable and 4.9-stable. If you are running the official {{Pkg|linux}} or {{Pkg|linux-lts}} kernel the patch has already been applied (make sure you are on the latest). If you are running another kernel you might need to manually patch yourself.<br />
<br />
{{Note|Several Ryzen users (see [https://www.reddit.com/r/VFIO/comments/78i3jx/possible_fix_for_the_npt_issue_discussed_on_iommu/ this Reddit thread]) have tested the patch, and can confirm that it works, bringing GPU passthrough performance up to near native quality.}}<br />
<br />
Starting with QEMU 3.1, the TOPOEXT cpuid flag is disabled by default. In order to use hyper-threading (SMT) on AMD CPUs, you need to enable it manually:<br />
<br />
<cpu mode='host-passthrough' check='none'><br />
<topology sockets='1' cores='4' threads='2'/><br />
<feature policy='require' name='topoext'/><br />
</cpu><br />
<br />
commit: https://git.qemu.org/?p=qemu.git;a=commit;h=7210a02c58572b2686a3a8d610c6628f87864aed<br />
<br />
=== Further tuning ===<br />
<br />
More specialized VM tuning tips are available at [https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html-single/Virtualization_Tuning_and_Optimization_Guide/index.html Red Hat's Virtualization Tuning and Optimization Guide]. This guide may be especially helpful if you are experiencing:<br />
<br />
* Moderate CPU load on the host during downloads/uploads from within the guest: See ''Bridge Zero Copy Transmit'' for a potential fix.<br />
* Guests capping out at certain network speeds during downloads/uploads despite virtio-net being used: See ''Multi-queue virtio-net'' for a potential fix.<br />
* Guests "stuttering" under high I/O, despite the same workload not affecting hosts to the same degree: See ''Multi-queue virtio-scsi'' for a potential fix.<br />
<br />
== Special procedures ==<br />
<br />
Certain setups require specific configuration tweaks in order to work properly. If you are having problems getting your host or your VM to work properly, see if your system matches one of the cases below and try adjusting your configuration accordingly.<br />
<br />
=== Using identical guest and host GPUs ===<br />
<br />
{{Expansion|A number of users have been having issues with this, it should probably be adressed by the article.|Talk:PCI passthrough via OVMF#Additionnal sections}}<br />
<br />
Since vfio-pci uses your vendor and device ID pair to identify which device it needs to bind to at boot, if you have two GPUs sharing such an ID pair you will not be able to get your passthrough driver to bind to just one of them. This sort of setup makes it necessary to use a script, so that whichever driver you are using is instead assigned by PCI bus address using the {{ic|driver_override}} mechanism.<br />
<br />
==== Script variants ====<br />
<br />
===== Passthrough all GPUs but the boot GPU =====<br />
<br />
Here, we will make a script to bind vfio-pci to all GPUs except the boot GPU. Create the script {{ic|/usr/local/bin/vfio-pci-override.sh}}:<br />
<br />
{{bc|<nowiki><br />
#!/bin/sh<br />
<br />
for i in /sys/bus/pci/devices/*/boot_vga; do<br />
if [ $(cat "$i") -eq 0 ]; then<br />
GPU="${i%/boot_vga}"<br />
AUDIO="$(echo "$GPU" | sed -e "s/0$/1/")"<br />
echo "vfio-pci" > "$GPU/driver_override"<br />
if [ -d "$AUDIO" ]; then<br />
echo "vfio-pci" > "$AUDIO/driver_override"<br />
fi<br />
fi<br />
done<br />
<br />
modprobe -i vfio-pci<br />
</nowiki>}}<br />
<br />
===== Passthrough selected GPU =====<br />
<br />
In this case we manually specify the GPU to bind.<br />
<br />
{{bc|<nowiki><br />
#!/bin/sh<br />
<br />
DEVS="0000:03:00.0 0000:03:00.1"<br />
<br />
if [ ! -z "$(ls -A /sys/class/iommu)" ]; then<br />
for DEV in $DEVS; do<br />
echo "vfio-pci" > /sys/bus/pci/devices/$DEV/driver_override<br />
done<br />
fi<br />
</nowiki>}}<br />
<br />
==== Script installation ====<br />
<br />
Edit {{ic|/etc/mkinitcpio.conf}}:<br />
<br />
# Add {{ic|modconf}} to the [[mkinitcpio#HOOKS|HOOKS]] array and {{ic|/usr/local/bin/vfio-pci-override.sh}} to the [[mkinitcpio#BINARIES and FILES|FILES]] array.<br />
<br />
Edit {{ic|/etc/modprobe.d/vfio.conf}}:<br />
<br />
# Add the following line: {{ic|install vfio-pci /usr/local/bin/vfio-pci-override.sh}}<br />
# [[Regenerate the initramfs]] and reboot.<br />
<br />
=== Passing the boot GPU to the guest ===<br />
<br />
{{Expansion|This is related to VBIOS issues and should be moved into a separate section regarding VBIOS compatibility.|section=UEFI (OVMF) Compatibility in VBIOS}}<br />
<br />
The GPU marked as {{ic|boot_vga}} is a special case when it comes to doing PCI passthroughs, since the BIOS needs to use it in order to display things like boot messages or the BIOS configuration menu. To do that, it makes [https://www.redhat.com/archives/vfio-users/2016-May/msg00224.html a copy of the VGA boot ROM which can then be freely modified]. This modified copy is the version the system gets to see, which the passthrough driver may reject as invalid. As such, it is generally recommended to change the boot GPU in the BIOS configuration so the host GPU is used instead or, if that is not possible, to swap the host and guest cards in the machine itself.<br />
<br />
=== Using Looking Glass to stream guest screen to the host ===<br />
<br />
It is possible to make a VM share the monitor, and optionally a keyboard and a mouse, with the help of [https://looking-glass.hostfission.com/ Looking Glass].<br />
<br />
==== Adding IVSHMEM Device to VM ====<br />
<br />
Looking Glass works by creating a shared memory buffer between the host and the guest. This is a lot faster than streaming frames via localhost, but requires additional setup.<br />
<br />
With your VM turned off, open the machine configuration:<br />
<br />
{{hc|$ virsh edit [vmname]|<nowiki><br />
...<br />
<devices><br />
...<br />
<shmem name='looking-glass'><br />
<model type='ivshmem-plain'/><br />
<size unit='M'>32</size><br />
</shmem><br />
</devices><br />
...<br />
</nowiki>}}<br />
<br />
You should replace 32 with your own value, calculated from the resolution you are going to pass through:<br />
<br />
 width x height x 4 x 2 = frame bytes<br />
 frame bytes / 1024 / 1024 + 2 = total mebibytes<br />
<br />
For example, in the case of 1920x1080:<br />
<br />
 1920 x 1080 x 4 x 2 = 16,588,800 bytes<br />
 16,588,800 / 1024 / 1024 = 15.82 MiB; 15.82 + 2 = 17.82 MiB<br />
<br />
The result must be '''rounded up''' to the nearest power of two, and since 17.82 is bigger than 16 we should choose 32.<br />
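This calculation can be scripted; the following is a small sketch (the {{ic|ivshmem_size}} helper is made up for illustration):

```shell
#!/bin/sh
# Hypothetical helper: compute the IVSHMEM size in MiB for a given
# guest resolution, following the formula above and rounding up to
# the next power of two.
ivshmem_size() {
    width=$1 height=$2
    bytes=$((width * height * 4 * 2))
    # integer ceiling of bytes / 1 MiB, plus the 2 MiB overhead
    mib=$(( (bytes + 1048575) / 1048576 + 2 ))
    size=1
    while [ "$size" -lt "$mib" ]; do
        size=$((size * 2))
    done
    echo "$size"
}

ivshmem_size 1920 1080   # prints 32
```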
<br />
Next, create a configuration file that recreates the shared memory file on each boot:<br />
<br />
{{hc|/etc/tmpfiles.d/10-looking-glass.conf|2=<br />
f /dev/shm/looking-glass 0660 '''user''' kvm -<br />
}}<br />
<br />
Replace user with your username.<br />
<br />
Ask systemd-tmpfiles to create the shared memory file now, without waiting for the next boot:<br />
<br />
# systemd-tmpfiles --create /etc/tmpfiles.d/10-looking-glass.conf<br />
<br />
==== Installing the IVSHMEM Host to Windows guest ====<br />
<br />
Currently, Windows will not notify users about a new IVSHMEM device; it silently installs a dummy driver instead. To actually enable the device, go into the Device Manager and update the driver for the device listed as '''"PCI standard RAM Controller"''' under the '''"System devices"''' node. Download the signed driver [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/upstream-virtio/ from Red Hat].<br />
<br />
Once the driver is installed, download the [https://github.com/gnif/LookingGlass/releases looking-glass-host] binary that matches the client version you installed from the AUR and start it on your guest. To run it, you will also need to install the Microsoft Visual C++ Redistributable from [https://www.visualstudio.com/downloads/ Microsoft].<br />
You can also make it start automatically on VM boot by adding the path to the downloaded executable to the {{ic|HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Run}} registry key.<br />
<br />
==== Getting a client ====<br />
<br />
The Looking Glass client can be installed from the AUR using the {{AUR|looking-glass}} or {{AUR|looking-glass-git}} packages.<br />
<br />
You can start it once the VM is set up and running:<br />
<br />
$ looking-glass-client<br />
<br />
If you do not want to use Spice to control the guest mouse and keyboard, you can disable the Spice server:<br />
<br />
$ looking-glass-client -s<br />
<br />
Additionally, you may want to start the Looking Glass client in full screen; otherwise the image may be scaled down, resulting in poor image fidelity.<br />
<br />
$ looking-glass-client -F<br />
<br />
Launch with the {{ic|--help}} option for further information.<br />
<br />
=== Swap peripherals to and from the Host ===<br />
<br />
Looking Glass includes a Spice client to control mouse movement on the Windows guest. However, this may have too much latency for certain applications, such as gaming. An alternative is passing through specific USB devices for minimal latency. This also allows switching the devices between host and guest.<br />
<br />
First, create an .xml file for the device(s) you wish to pass through, which libvirt will use to identify the device:<br />
<br />
{{hc|~/.VFIOinput/input_1.xml|2=<br />
<hostdev mode='subsystem' type='usb' managed='no'><br />
<source><br />
<vendor id='0x[Before Colon]'/><br />
<product id='0x[After Colon]'/><br />
</source><br />
</hostdev><br />
}}<br />
<br />
Replace [Before/After Colon] with the respective halves of the ID shown in the {{ic|lsusb}} output for the device you want to pass through.<br />
<br />
For instance, my mouse is {{ic|Bus 005 Device 002: ID 1532:0037 Razer USA, Ltd}}, so I would replace {{ic|vendor id}} with 1532 and {{ic|product id}} with 0037.<br />
<br />
Repeat this process for any additional USB devices you want to pass-through. If your mouse / keyboard has multiple entries in {{ic|lsusb}}, perhaps if it is wireless, then create additional xml files for each.<br />
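If you have several devices, generating these fragments can be scripted; the following is a sketch (the {{ic|usb_hostdev_xml}} helper is hypothetical) that prints a fragment for a vendor:product pair as reported by {{ic|lsusb}}:

```shell
#!/bin/sh
# Hypothetical helper: print a libvirt <hostdev> fragment for a
# vendor:product ID pair such as 1532:0037.
usb_hostdev_xml() {
    vendor=${1%%:*}
    product=${1##*:}
    cat <<EOF
<hostdev mode='subsystem' type='usb' managed='no'>
  <source>
    <vendor id='0x$vendor'/>
    <product id='0x$product'/>
  </source>
</hostdev>
EOF
}

usb_hostdev_xml 1532:0037
```

Redirect the output to a file such as {{ic|~/.VFIOinput/input_mouse.xml}} for each device.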
<br />
{{Note|Do not forget to change the path & name of the script(s) above and below to match your user and specific system.}}<br />
<br />
Next, a bash script is needed to tell libvirt to attach or detach the USB devices to and from the guest:<br />
<br />
{{hc|~/.VFIOinput/input_attach.sh|2=<br />
#!/bin/bash<br />
<br />
virsh attach-device [VM-Name] [USBdevice]<br />
}}<br />
<br />
Replace [VM-Name] with the name of your virtual machine, which can be seen under virt-manager. Additionally replace [USBdevice] with the '''full''' path to the .xml file for the device you wish to pass-through. Add additional lines for more than 1 device. For example here is my script:<br />
<br />
{{hc|~/.VFIOinput/input_attach.sh|2=<br />
#!/bin/bash<br />
<br />
virsh attach-device win10 /home/$USER/.VFIOinput/input_mouse.xml<br />
virsh attach-device win10 /home/$USER/.VFIOinput/input_keyboard.xml<br />
}}<br />
<br />
Next, duplicate the script file and replace {{ic|attach-device}} with {{ic|detach-device}}. Ensure both scripts are executable with {{ic|chmod +x}}.<br />
<br />
These two scripts can now be executed to attach or detach your USB devices from the host to the guest VM. Note that they may need to be executed as root. To run a script from the Windows VM, one possibility is using [[PuTTY]] to [[SSH]] into the host and execute it. On Windows, PuTTY comes with plink.exe, which can execute a single command over SSH and then log out, instead of opening an interactive SSH terminal, all in the background.<br />
<br />
{{hc|detach_devices.bat|2=<br />
"C:\Program Files\PuTTY\plink.exe" root@$HOST_IP -pw $ROOTPASSWORD /home/$USER/.VFIOinput/input_detach.sh<br />
}}<br />
<br />
Replace {{ic|$HOST_IP}} with the host's [[Network configuration#IP addresses|IP address]] and {{ic|$ROOTPASSWORD}} with the root password.<br />
<br />
{{warning|This method is insecure if somebody has access to your VM, since they could open the file and read your password. It is advisable to use [[SSH keys]] instead!}}<br />
<br />
You may also want to execute the scripts using key bindings. On Windows, one option is [https://autohotkey.com/ AutoHotkey], and on the host, [[Xbindkeys]]. Because the scripts need to run as root, you may also need to use [[Polkit]] or [[Sudo]], both of which can authorize specific executables to run as root without requiring a password.<br />
<br />
=== Bypassing the IOMMU groups (ACS override patch) ===<br />
<br />
If you find your PCI devices grouped among others that you do not wish to pass through, you may be able to separate them using Alex Williamson's ACS override patch. Make sure you understand [https://vfio.blogspot.com/2014/08/iommu-groups-inside-and-out.html the potential risks] of doing so.<br />
<br />
You will need a kernel with the patch applied. The easiest way to acquire one is through the {{AUR|linux-vfio}} package.<br />
<br />
In addition, the ACS override patch needs to be enabled with kernel command line options. The patch file adds the following documentation:<br />
<br />
pcie_acs_override =<br />
[PCIE] Override missing PCIe ACS support for:<br />
downstream<br />
All downstream ports - full ACS capabilties<br />
multifunction<br />
All multifunction devices - multifunction ACS subset<br />
id:nnnn:nnnn<br />
Specfic device - full ACS capabilities<br />
Specified as vid:did (vendor/device ID) in hex<br />
<br />
The option {{ic|1=pcie_acs_override=downstream,multifunction}} should break up as many devices as possible.<br />
<br />
After installation and configuration, reconfigure your [[Kernel parameters|bootloader kernel parameters]] to load the new kernel with the {{ic|1=pcie_acs_override=}} option enabled.<br />
<br />
== Plain QEMU without libvirt ==<br />
<br />
Instead of setting up a virtual machine with the help of libvirt, plain QEMU commands with custom parameters can be used for running the VM intended to be used with PCI passthrough. This is desirable for some use cases like scripted setups, where the flexibility for usage with other scripts is needed.<br />
<br />
To achieve this after [[#Setting up IOMMU]] and [[#Isolating the GPU]], follow the [[QEMU]] article to setup the virtualized environment, [[QEMU#Enabling KVM|enable KVM]] on it and use the flag {{ic|1=-device vfio-pci,host=07:00.0}} replacing the identifier (07:00.0) with your actual device's ID that you used for the GPU isolation earlier.<br />
<br />
For utilizing the OVMF firmware, make sure the {{Pkg|ovmf}} package is installed, copy the UEFI variables from {{ic|/usr/share/ovmf/x64/OVMF_VARS.fd}} to a temporary location like {{ic|/tmp/MY_VARS.fd}}, and finally specify the OVMF paths by appending the following parameters to the QEMU command (order matters):<br />
<br />
* {{ic|1=-drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/OVMF_CODE.fd}} for the actual OVMF firmware binary, note the readonly option<br />
* {{ic|1=-drive if=pflash,format=raw,file=/tmp/MY_VARS.fd}} for the variables<br />
<br />
{{Note|QEMU's default SeaBIOS can be used instead of OVMF, but it is not recommended as it can cause issues with passthrough setups.}}<br />
<br />
It is recommended to study the QEMU article for ways to enhance the performance by using the [[QEMU#Installing virtio drivers|virtio drivers]] and other further configurations for the setup.<br />
<br />
You might also have to use the {{ic|1=-cpu host,kvm=off}} parameter to forward the host's CPU model info to the VM and fool the virtualization detection used by Nvidia's (and possibly other manufacturers') device drivers that try to block full hardware usage inside a virtualized system.<br />
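Putting the pieces together, a complete invocation might look like the following sketch; the device address, memory size and paths are examples and must match your own setup:

 $ cp /usr/share/ovmf/x64/OVMF_VARS.fd /tmp/MY_VARS.fd
 $ qemu-system-x86_64 \
     -enable-kvm \
     -m 8G \
     -cpu host,kvm=off \
     -drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/x64/OVMF_CODE.fd \
     -drive if=pflash,format=raw,file=/tmp/MY_VARS.fd \
     -device vfio-pci,host=07:00.0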
<br />
== Passing through other devices ==<br />
<br />
=== USB controller ===<br />
<br />
If your motherboard has multiple USB controllers mapped to separate IOMMU groups, it is possible to pass those through instead of individual USB devices. Passing an entire controller rather than an individual USB device provides the following advantages:<br />
<br />
* If a device disconnects or changes ID over the course of a given operation (such as a phone undergoing an update), the VM will not suddenly stop seeing it.<br />
* Any USB port managed by this controller is directly handled by the VM and can have its devices unplugged, replugged and changed without having to notify the hypervisor.<br />
* Libvirt will not complain if one of the USB devices you usually pass to the guest is missing when starting the VM.<br />
<br />
Unlike with GPUs, drivers for most USB controllers do not require any specific configuration to work on a VM and control can normally be passed back and forth between the host and guest systems with no side effects.<br />
<br />
{{Warning|Make sure your USB controller supports resetting: [[#Passing through a device that does not support resetting]]}}<br />
<br />
You can find out which PCI devices correspond to which controller, and how various ports and devices are assigned to each one of them, using this command:<br />
<br />
{{hc|$ <nowiki>for usb_ctrl in $(find /sys/bus/usb/devices/usb* -maxdepth 0 -type l); do pci_path="$(dirname "$(realpath "${usb_ctrl}")")"; echo "Bus $(cat "${usb_ctrl}/busnum") --> $(basename $pci_path) (IOMMU group $(basename $(realpath $pci_path/iommu_group)))"; lsusb -s "$(cat "${usb_ctrl}/busnum"):"; echo; done</nowiki>|<br />
Bus 1 --> 0000:00:1a.0 (IOMMU group 4)<br />
Bus 001 Device 004: ID 04f2:b217 Chicony Electronics Co., Ltd Lenovo Integrated Camera (0.3MP)<br />
Bus 001 Device 007: ID 0a5c:21e6 Broadcom Corp. BCM20702 Bluetooth 4.0 [ThinkPad]<br />
Bus 001 Device 008: ID 0781:5530 SanDisk Corp. Cruzer<br />
Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub<br />
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub<br />
<br />
Bus 2 --> 0000:00:1d.0 (IOMMU group 9)<br />
Bus 002 Device 006: ID 0451:e012 Texas Instruments, Inc. TI-Nspire Calculator<br />
Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub<br />
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub<br />
}}<br />
<br />
This laptop has 3 USB ports managed by 2 USB controllers, each with their own IOMMU group. In this example, Bus 001 manages a single USB port (with a SanDisk USB pendrive plugged into it so it appears on the list), but also a number of internal devices, such as the internal webcam and the bluetooth card. Bus 002, on the other hand, does not appear to manage anything except for the calculator that is plugged into it. The third port is empty, which is why it does not show up on the list, but is actually managed by Bus 002.<br />
<br />
Once you have identified which controller manages which ports by plugging various devices into them and decided which one you want to passthrough, simply add it to the list of PCI host devices controlled by the VM in your guest configuration. No other configuration should be needed.<br />
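In libvirt, passing a controller is a regular PCI host device entry. A sketch for the example controller at {{ic|0000:00:1a.0}} from above:<br />

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x1a' function='0x0'/>
  </source>
</hostdev>
```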
<br />
{{Note|If your USB controller does not support resetting, is not in an isolated group, or is otherwise unable to be passed through then it may still be possible to accomplish similar results through [[udev]] rules. See [https://github.com/olavmrk/usb-libvirt-hotplug] which allows any device connected to specified USB ports to be automatically attached to a virtual machine.}}<br />
<br />
=== Passing VM audio to host via PulseAudio ===<br />
<br />
It is possible to route the virtual machine's audio to the host as an application using libvirt. This has the advantage of multiple audio streams being routable to one host output, and working with audio output devices that do not support passthrough. [[PulseAudio]] is required for this to work.<br />
<br />
First, remove the comment from the {{ic|1=#user = ""}} line, then add your username in the quotations. This tells QEMU which user's PulseAudio stream to route through.<br />
<br />
{{hc|/etc/libvirt/qemu.conf|2=<br />
user = "example"<br />
}}<br />
<br />
Next, modify the libvirt configuration<br />
<br />
{{hc|$ virsh edit [vmname]|<br />
<domain type<nowiki>=</nowiki>'kvm'><br />
}}<br />
<br />
to<br />
<br />
{{hc|$ virsh edit [vmname]|<br />
<domain type<nowiki>=</nowiki>'kvm' xmlns:qemu<nowiki>='http://libvirt.org/schemas/domain/qemu/1.0'</nowiki>><br />
}}<br />
<br />
Then set the QEMU PulseAudio environment variables at the bottom of the libvirt xml file<br />
<br />
{{hc|$ virsh edit [vmname]|<br />
</devices><br />
</domain><br />
}}<br />
<br />
to<br />
<br />
{{hc|$ virsh edit [vmname]|<br />
</devices><br />
<qemu:commandline><br />
<qemu:env name<nowiki>=</nowiki>'QEMU_AUDIO_DRV' value<nowiki>=</nowiki>'pa'/><br />
<qemu:env name<nowiki>=</nowiki>'QEMU_PA_SERVER' value<nowiki>=</nowiki>'/run/user/1000/pulse/native'/><br />
</qemu:commandline><br />
</domain><br />
}}<br />
<br />
Change {{ic|1000}} in the path to your user's uid (which can be found by running the {{ic|id}} command). Remember to save the file and exit without ending the process before continuing, otherwise the changes will not register. If you get the message {{ic|<nowiki>Domain [vmname] XML configuration edited.</nowiki>}} after exiting, it means that your changes have been applied.<br />
<br />
[[Restart]] {{ic|libvirtd.service}} and [[systemd/User|user service]] {{ic|pulseaudio.service}}.<br />
<br />
Virtual Machine audio will now be routed through the host as an application. The application {{Pkg|pavucontrol}} can be used to control the output device. Be aware that on Windows guests, this can cause audio crackling without [[#Slowed down audio pumped through HDMI on the video card|using Message-Signaled Interrupts.]]<br />
<br />
==== QEMU 3.0 audio changes ====<br />
<br />
As of QEMU 3.0, part of the audio patches has been merged ([https://www.reddit.com/r/VFIO/comments/97iuov/qemu_30_released/e49wmyd/ reddit link]). The {{AUR|qemu-patched}}{{Broken package link|package not found}} package currently includes some additional audio patches, as some of them have not been officially upstreamed yet.<br />
<br />
You will need to change the machine type according to how your VM is set up, i.e. {{ic|pc-q35-3.0}} or {{ic|pc-i440fx-3.0}} (after installing QEMU 3.0), to use the new code paths:<br />
<br />
{{hc|$ virsh edit [vmname]|<br />
<domain type<nowiki>=</nowiki>'kvm'><br />
...<br />
<os><br />
<type arch<nowiki>=</nowiki>'x86_64' machine<nowiki>=</nowiki>'pc-q35-3.0'>hvm</type><br />
...<br />
</os><br />
}}<br />
<br />
{{hc|$ virsh edit [vmname]|<br />
<domain type<nowiki>=</nowiki>'kvm'><br />
...<br />
<os><br />
<type arch<nowiki>=</nowiki>'x86_64' machine<nowiki>=</nowiki>'pc-i440fx-3.0'>hvm</type><br />
...<br />
</os><br />
}}<br />
<br />
{{Note|<br />
* To speed up compilation time with {{AUR|qemu-patched}}{{Broken package link|package not found}} use {{ic|1=--target-list=x86_64-softmmu}} to compile qemu with only x86_64 guest support.<br />
* Since Qemu 3.0 the XML arguments {{ic|qemu:env}} above are ''not'' needed if you run PulseAudio as your user and you have {{ic|1= nographics_allow_host_audio = 1}} enabled in {{ic|1=/etc/libvirt/qemu.conf}}. If you use a different user with QEMU/Libvirt, you will need to keep the {{ic|QEMU_PA_SERVER}} variable otherwise permission errors will occur.<br />
}}<br />
<br />
=== Passing VM audio to host via Scream and IVSHMEM ===<br />
<br />
It is possible to pass VM audio through an IVSHMEM device to the host using [https://github.com/duncanthrax/scream scream].<br />
This guide will only cover using PulseAudio as a receiver on the host.<br />
See the project page for more details.<br />
<br />
==== Adding the IVSHMEM ====<br />
<br />
With the VM turned off, edit the machine configuration<br />
<br />
{{hc|$ virsh edit [vmname]|<nowiki><br />
...<br />
<devices><br />
...<br />
<shmem name='scream-ivshmem'><br />
<model type='ivshmem-plain'/><br />
<size unit='M'>2</size><br />
</shmem><br />
</devices><br />
...<br />
</nowiki>}}<br />
<br />
In the above configuration, the size of the IVSHMEM device is 2MB (the recommended amount). Change this as needed.<br />
<br />
Now refer to [[#Adding IVSHMEM Device to VM]] to configure the host to create the shared memory file on boot, replacing {{ic|looking-glass}} with {{ic|scream-ivshmem}}.<br />
<br />
==== Configuring the Windows guest ====<br />
<br />
The correct driver must be installed for the IVSHMEM device on the guest. <br />
See [[#Installing the IVSHMEM Host to Windows guest]]. Ignore the part about {{ic|looking-glass-host}}.<br />
<br />
Install the [https://github.com/duncanthrax/scream/releases scream] virtual audio driver on the guest. <br />
If you have secure boot enabled for your VM, you may need to disable it. <br />
<br />
Using the registry editor, set the DWORD {{ic|HKLM\SYSTEM\CurrentControlSet\Services\Scream\Options\UseIVSHMEM}} to the size of the IVSHMEM device in MB.<br />
Note that scream identifies its IVSHMEM device using its size, so make sure there is only one device of that size.<br />
<br />
==== Configuring the host ====<br />
<br />
Install {{AUR|scream-pulse}}.<br />
<br />
Create the systemd user service file to control the receiver<br />
<br />
{{hc|~/.config/systemd/user/scream-ivshmem-pulse.service|<nowiki><br />
[Unit]<br />
Description=Scream IVSHMEM pulse receiver<br />
After=pulseaudio.service<br />
Wants=pulseaudio.service<br />
<br />
[Service]<br />
Type=simple<br />
ExecStartPre=/usr/bin/truncate -s 0 /dev/shm/scream-ivshmem<br />
ExecStartPre=/usr/bin/dd if=/dev/zero of=/dev/shm/scream-ivshmem bs=1M count=2<br />
ExecStart=/usr/bin/scream-ivshmem-pulse /dev/shm/scream-ivshmem<br />
<br />
[Install]<br />
WantedBy=default.target<br />
<br />
</nowiki>}}<br />
<br />
Change {{ic|1=count=2}} to match the size of the IVSHMEM device in MB.<br />
<br />
Now start the service with <br />
$ systemctl start --user scream-ivshmem-pulse<br />
<br />
To have it automatically start on next login, enable the service<br />
$ systemctl enable --user scream-ivshmem-pulse<br />
<br />
=== Physical disk/partition ===<br />
<br />
A whole disk or a partition may be passed to the guest directly for improved I/O performance by adding an entry to the XML.<br />
<br />
Virtio-BLK Example:<br />
<br />
{{hc|$ virsh edit [vmname]| <nowiki><br />
<devices><br />
...<br />
<disk type='block' device='disk'><br />
<driver name='qemu' type='raw' cache='none' io='native'/><br />
<source dev='/dev/disk/by-id/xxxxxxxx'/><br />
<target dev='vda' bus='virtio'/><br />
</disk><br />
...<br />
</devices><br />
</nowiki><br />
}}<br />
<br />
Virtio-SCSI Example:<br />
<br />
{{hc|$ virsh edit [vmname]| <nowiki><br />
<devices><br />
...<br />
<disk type='block' device='disk'><br />
<driver name='qemu' type='raw' cache='none' io='native'/><br />
<source dev='/dev/disk/by-id/xxxxxxxx'/><br />
<target dev='sda' bus='scsi'/><br />
</disk><br />
...<br />
</devices><br />
</nowiki><br />
}}<br />
<br />
To find out the by-id path associated with the disk/partition you would like to pass:<br />
<br />
{{hc|$ ls -l /dev/disk/by-id|<br />
ata-ST1000LM002-9VQ14L_Z0501SZ9 -> ../../sdd<br />
ata-ST1000LM002-9VQ14L_Z0501SZ9-part1 -> ../../sdd1<br />
}}<br />
<br />
You can also add the disk with Virt-Manager's '''Add Hardware''' menu and then type the disk you want in the '''Select or create custom storage''' box, e.g. '''/dev/disk/by-id/ata-ST1000LM002-9VQ14L_Z0501SZ9'''<br />
<br />
Depending on which bus you use, the above step will require either the VIOSTOR(bus=virtio) or VIOSCSI(bus=scsi) driver on Windows guests, refer to [[#Setting up the guest OS]] for the driver ISO.<br />
<br />
=== Gotchas ===<br />
<br />
==== Passing through a device that does not support resetting ====<br />
<br />
When the VM shuts down, all devices used by the guest are deinitialized by its OS in preparation for shutdown. In this state, those devices are no longer functional and must be power-cycled before they can resume normal operation. Linux can handle this power-cycling on its own, but when a device has no known reset method, it remains in this disabled state and becomes unavailable. Since libvirt and QEMU both expect all host PCI devices to be ready to reattach to the host before completely stopping the VM, when encountering a device that will not reset, they will hang in a "Shutting down" state, where they cannot be restarted until the host system has been rebooted. It is therefore recommended to only pass through PCI devices which the kernel is able to reset, as evidenced by the presence of a {{ic|reset}} file in the PCI device sysfs node, such as {{ic|/sys/bus/pci/devices/0000:00:1a.0/reset}}.<br />
<br />
The following bash command shows which devices can and cannot be reset.<br />
<br />
{{hc|<nowiki>for iommu_group in $(find /sys/kernel/iommu_groups/ -maxdepth 1 -mindepth 1 -type d);do echo "IOMMU group $(basename "$iommu_group")"; for device in $(\ls -1 "$iommu_group"/devices/); do if [[ -e "$iommu_group"/devices/"$device"/reset ]]; then echo -n "[RESET]"; fi; echo -n $'\t';lspci -nns "$device"; done; done</nowiki>|<br />
IOMMU group 0<br />
00:00.0 Host bridge [0600]: Intel Corporation Xeon E3-1200 v2/Ivy Bridge DRAM Controller [8086:0158] (rev 09)<br />
IOMMU group 1<br />
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port [8086:0151] (rev 09)<br />
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK208 [GeForce GT 720] [10de:1288] (rev a1)<br />
01:00.1 Audio device [0403]: NVIDIA Corporation GK208 HDMI/DP Audio Controller [10de:0e0f] (rev a1)<br />
IOMMU group 2<br />
00:14.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller [8086:1e31] (rev 04)<br />
IOMMU group 4<br />
[RESET] 00:1a.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 [8086:1e2d] (rev 04)<br />
IOMMU group 5<br />
[RESET] 00:1b.0 Audio device [0403]: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller [8086:1e20] (rev 04)<br />
IOMMU group 10<br />
[RESET] 00:1d.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 [8086:1e26] (rev 04)<br />
IOMMU group 13<br />
06:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)<br />
06:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)<br />
}}<br />
<br />
This signals that the xHCI USB controller in 00:14.0 cannot be reset and will therefore stop the VM from shutting down properly, while the integrated sound card in 00:1b.0 and the other two controllers in 00:1a.0 and 00:1d.0 do not share this problem and can be passed without issue.<br />
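The presence check described above can also be sketched as a tiny helper for a single device. The {{ic|SYSFS_ROOT}} override is purely illustrative (it defaults to the real {{ic|/sys}}) so the logic can be exercised without hardware:<br />

```shell
#!/bin/bash
# Report whether the kernel exposes a reset method for a PCI device,
# i.e. whether a "reset" file exists in its sysfs node.
supports_reset() {
    local dev="$1"
    local sysfs="${SYSFS_ROOT:-/sys}"
    if [ -e "$sysfs/bus/pci/devices/$dev/reset" ]; then
        echo "$dev: resettable"
    else
        echo "$dev: no reset method"
    fi
}
```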
<br />
== Complete setups and examples ==<br />
<br />
For many reasons users may seek to see [[PCI_passthrough_via_OVMF/Examples|complete passthrough setup examples]].<br />
<br />
These examples offer a supplement to existing hardware compatibility lists. Additionally, if you have trouble configuring a certain mechanism in your setup, you might find these examples very valuable. Users there have described their setups in detail, and some have provided examples of their configuration files as well. <br />
<br />
We encourage those who successfully build their system from this resource to help improve it by contributing their builds. Due to the many different hardware manufacturers involved, the seemingly significant lack of sufficient documentation, as well as other issues due to the nature of this process, community contributions are necessary.<br />
<br />
== Troubleshooting ==<br />
<br />
If your issue is not mentioned below, you may want to browse [[QEMU#Troubleshooting]].<br />
<br />
=== QEMU 4.0: Unable to load graphics drivers/BSOD/Graphics stutter after driver install using Q35 ===<br />
<br />
Starting with QEMU 4.0, the Q35 machine type changes the default {{ic|<nowiki>kernel_irqchip</nowiki>}} from {{ic|off}} to {{ic|split}} which breaks some guest devices, such as nVidia graphics (the driver fails to load / black screen / code 43 / graphics stutters, usually when mouse moving). Switch to full KVM mode instead by adding {{ic|<nowiki><ioapic driver='kvm'/></nowiki>}} under libvirt's {{ic|<nowiki><features></nowiki>}} tag in your VM configuration or by adding {{ic|<nowiki>kernel_irqchip=on</nowiki>}} in the {{ic|-machine}} QEMU arg.<br />
<br />
=== "Error 43: Driver failed to load" on Nvidia GPUs passed to Windows VMs ===<br />
<br />
{{Note|<br />
* This may also fix SYSTEM_THREAD_EXCEPTION_NOT_HANDLED boot crashes related to Nvidia drivers.<br />
* This may also fix problems under linux guests.<br />
}}<br />
<br />
Since version 337.88, Nvidia drivers on Windows check if a hypervisor is running and fail if they detect one, which results in Error 43 in the Windows device manager. Starting with QEMU 2.5.0 and libvirt 1.3.3, the vendor_id for the hypervisor can be spoofed, which is enough to fool the Nvidia drivers into loading anyway. All one must do is add {{ic|<nowiki>hv_vendor_id=whatever</nowiki>}} to the hypervisor parameters in the QEMU command line, or add the following line to the libvirt domain configuration. The vendor_id can be [https://libvirt.org/formatdomain.html#elementsFeatures any string value up to 12 characters].<br />
<br />
{{hc|$ virsh edit [vmname]|<nowiki><br />
...<br />
<features><br />
<hyperv><br />
...<br />
<vendor_id state='on' value='whatever'/><br />
...<br />
</hyperv><br />
...<br />
<kvm><br />
<hidden state='on'/><br />
</kvm><br />
</features><br />
...<br />
</nowiki>}}<br />
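For users launching QEMU directly, the equivalent sketch on the command line combines the spoofed vendor ID with the KVM-hiding flag (the ID string here is arbitrary):<br />

```shell
qemu-system-x86_64 \
    -cpu host,kvm=off,hv_vendor_id=whatever \
    ...
```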
<br />
Users with older versions of QEMU and/or libvirt will instead have to disable a few hypervisor extensions, which can degrade performance substantially. If this is what you want to do, do the following replacement in your libvirt domain config file.<br />
<br />
{{hc|$ virsh edit [vmname]|<nowiki><br />
...<br />
<features><br />
<hyperv><br />
<relaxed state='on'/><br />
<vapic state='on'/><br />
<spinlocks state='on' retries='8191'/><br />
</hyperv><br />
...<br />
</features><br />
...<br />
<clock offset='localtime'><br />
<timer name='hypervclock' present='yes'/><br />
</clock><br />
...<br />
</nowiki>}}<br />
<br />
{{bc|<nowiki><br />
...<br />
<clock offset='localtime'><br />
<timer name='hypervclock' present='no'/><br />
</clock><br />
...<br />
<features><br />
<kvm><br />
<hidden state='on'/><br />
</kvm><br />
...<br />
<hyperv><br />
<relaxed state='off'/><br />
<vapic state='off'/><br />
<spinlocks state='off'/><br />
</hyperv><br />
...<br />
</features><br />
...<br />
</nowiki>}}<br />
<br />
=== "BAR 3: cannot reserve [mem]" error in dmesg after starting VM ===<br />
<br />
{{Expansion|This error is actually related to the boot_vgs issue and should be merged together with everything else concerning GPU ROMs.|section=UEFI (OVMF) Compatibility in VBIOS}}<br />
<br />
With respect to [https://www.linuxquestions.org/questions/linux-kernel-70/kernel-fails-to-assign-memory-to-pcie-device-4175487043/ this article]:<br />
<br />
If you still have code 43, check dmesg for memory reservation errors after starting up the VM; if you see something similar to the following, this could be the cause:<br />
<br />
vfio-pci 0000:09:00.0: BAR 3: cannot reserve [mem 0xf0000000-0xf1ffffff 64bit pref]<br />
<br />
Find out which PCI bridge your graphics card is connected to. The following command will give the actual hierarchy of devices:<br />
<br />
$ lspci -t<br />
<br />
Before starting the VM, run the following commands, replacing the IDs with the actual ones from the previous output:<br />
<br />
# echo 1 > /sys/bus/pci/devices/0000\:00\:03.1/remove<br />
# echo 1 > /sys/bus/pci/rescan<br />
<br />
{{Note|Probably setting [[kernel parameter]] {{ic|1=video=efifb:off}} is required as well. [https://pve.proxmox.com/wiki/Pci_passthrough#BAR_3:_can.27t_reserve_.5Bmem.5D_error Source]}}<br />
<br />
In addition try adding kernel parameter {{ic|1=pci=realloc}} which also [https://github.com/Dunedan/mbp-2016-linux/issues/60#issuecomment-396311301 helps with hotplugging issues].<br />
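The remove/rescan sequence above can be wrapped in a small helper, e.g. for use from a libvirt hook. The {{ic|SYSFS_ROOT}} override is purely illustrative (it defaults to {{ic|/sys}}) so the write sequence can be exercised without touching real hardware:<br />

```shell
#!/bin/bash
# Detach a PCI bridge and rescan the bus so BARs get reassigned
# before the VM starts. The bridge address comes from `lspci -t`.
pci_remove_rescan() {
    local bridge="$1"
    local sysfs="${SYSFS_ROOT:-/sys}"
    echo 1 > "$sysfs/bus/pci/devices/$bridge/remove"
    echo 1 > "$sysfs/bus/pci/rescan"
}
```

Run as root, e.g. {{ic|pci_remove_rescan 0000:00:03.1}}, before starting the VM.<br />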
<br />
=== UEFI (OVMF) compatibility in VBIOS ===<br />
<br />
{{Remove|Flashing you guest GPU for the purpose of a GPU passthrough is '''never''' good advice. A full section should be dedicated to VBIOS compatibility.|section= UEFI (OVMF) Compatibility in VBIOS}}<br />
<br />
With respect to [https://pve.proxmox.com/wiki/Pci_passthrough#How_to_known_if_card_is_UEFI_.28ovmf.29_compatible this article]:<br />
<br />
Error 43 can be caused by the GPU's VBIOS lacking UEFI support. To check whether your VBIOS supports it, you will have to use {{ic|rom-parser}}:<br />
<br />
$ git clone https://github.com/awilliam/rom-parser<br />
$ cd rom-parser && make<br />
<br />
Dump the GPU VBIOS:<br />
<br />
# echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom<br />
# cat /sys/bus/pci/devices/0000:01:00.0/rom > /tmp/image.rom<br />
# echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom<br />
<br />
And test it for compatibility:<br />
<br />
{{hc|$ ./rom-parser /tmp/image.rom|<br />
Valid ROM signature found @600h, PCIR offset 190h<br />
PCIR: type 0 (x86 PC-AT), vendor: 10de, device: 1184, class: 030000<br />
PCIR: revision 0, vendor revision: 1<br />
Valid ROM signature found @fa00h, PCIR offset 1ch<br />
PCIR: type 3 (EFI), vendor: 10de, device: 1184, class: 030000<br />
PCIR: revision 3, vendor revision: 0<br />
EFI: Signature Valid, Subsystem: Boot, Machine: X64<br />
Last image<br />
}}<br />
<br />
To be UEFI compatible, you need a "type 3 (EFI)" in the result. If it is not there, try updating your GPU VBIOS. GPU manufacturers often share VBIOS upgrades on their support pages. A large database of known compatible and working VBIOSes (along with their UEFI compatibility status!) is available on [https://www.techpowerup.com/vgabios/ TechPowerUp].<br />
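The dump steps above can be sketched as one helper (run as root). The {{ic|SYSFS_ROOT}} override is purely illustrative so the enable/read/disable sequence can be exercised without a real GPU:<br />

```shell
#!/bin/bash
# Dump a GPU VBIOS: enable reads from the ROM BAR, copy it out,
# then disable reads again.
dump_vbios() {
    local dev="$1" out="$2"
    local rom="${SYSFS_ROOT:-/sys}/bus/pci/devices/$dev/rom"
    echo 1 > "$rom"
    cat "$rom" > "$out"
    echo 0 > "$rom"
}
```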
<br />
Updated VBIOS can be used in the VM without flashing. To load it in QEMU:<br />
<br />
-device vfio-pci,host=07:00.0,......,romfile=/path/to/your/gpu/bios.bin \<br />
<br />
And in libvirt:<br />
<br />
{{bc|1=<br />
<hostdev><br />
...<br />
<rom file='/path/to/your/gpu/bios.bin'/><br />
...<br />
</hostdev><br />
}}<br />
<br />
One should compare VBIOS versions between host and guest systems using [https://www.techpowerup.com/download/nvidia-nvflash/ nvflash] (Linux versions under ''Show more versions'') or <br />
[https://www.techpowerup.com/download/techpowerup-gpu-z/ GPU-Z] (in Windows guest). To check the currently loaded VBIOS:<br />
<br />
{{hc|$ ./nvflash --version|<br />
...<br />
Version : 80.04.XX.00.97<br />
...<br />
UEFI Support : No<br />
UEFI Version : N/A<br />
UEFI Variant Id : N/A ( Unknown )<br />
UEFI Signer(s) : Unsigned<br />
...<br />
}}<br />
<br />
And to check a given VBIOS file:<br />
<br />
{{hc|$ ./nvflash --version NV299MH.rom|<br />
...<br />
Version : 80.04.XX.00.95<br />
...<br />
UEFI Support : Yes<br />
UEFI Version : 0x10022 (Jul 2 2013 @ 16377903 )<br />
UEFI Variant Id : 0x0000000000000004 ( GK1xx )<br />
UEFI Signer(s) : Microsoft Corporation UEFI CA 2011<br />
...<br />
}}<br />
<br />
If the external ROM did not work as it should in the guest, you will have to flash the newer VBIOS image to the GPU. In some cases it is possible to create your own VBIOS image with UEFI support using the [https://www.win-raid.com/t892f16-AMD-and-Nvidia-GOP-update-No-requests-DIY.html GOPUpd] tool, however this is risky and may result in bricking the GPU.<br />
<br />
{{Warning|Failure during flashing may "brick" your GPU - recovery may be possible, but rarely easy and often requires additional hardware. '''DO NOT''' flash VBIOS images for other GPU models (different boards may use different VBIOSes, clocks, fan configuration). If it breaks, you get to keep all the pieces.}}<br />
<br />
In order to avoid irreparable damage to your graphics adapter, it is necessary to unload the NVIDIA kernel driver first:<br />
<br />
# modprobe -r nvidia_modeset nvidia <br />
<br />
Flashing the VBIOS can be done with:<br />
<br />
# ./nvflash romfile.bin<br />
<br />
{{Warning|'''DO NOT''' interrupt the flashing process, even if it looks like it is stuck. Flashing should take about a minute on most GPUs, but may take longer.}}<br />
<br />
=== Slowed down audio pumped through HDMI on the video card ===<br />
<br />
For some users, the VM's audio slows down/starts stuttering/becomes demonic after a while when it is pumped through HDMI on the video card. This usually also slows down graphics.<br />
A possible solution consists of enabling MSI (Message Signaled Interrupts) instead of the default (Line-Based Interrupts).<br />
<br />
In order to check whether MSI is supported or enabled, run the following command as root:<br />
<br />
# lspci -vs $device | grep 'MSI:'<br />
<br />
where {{ic|$device}} is the card's address (e.g. {{ic|01:00.0}}).<br />
<br />
The output should be similar to:<br />
<br />
Capabilities: [60] MSI: Enable'''-''' Count=1/1 Maskable- 64bit+<br />
<br />
A {{ic|-}} after {{ic|Enable}} means MSI is supported, but not used by the VM, while a {{ic|+}} says that the VM is using it.<br />
<br />
The procedure to enable it is quite complex, instructions and an overview of the setting can be found [https://forums.guru3d.com/showthread.php?t=378044 here].<br />
<br />
On a Linux guest you can use modinfo to see if there is an option to enable MSI (for example: "modinfo snd_hda_intel | grep msi"). If there is, one can enable it by adding the relevant option to a custom modprobe file - e.g. inserting "options snd-hda-intel enable_msi=1" into "/etc/modprobe.d/snd-hda-intel.conf"<br />
<br />
Other hints can be found on the [https://lime-technology.com/wiki/index.php/UnRAID_6/VM_Guest_Support#Enable_MSI_for_Interrupts_to_Fix_HDMI_Audio_Support lime-technology's wiki], or on this article on [https://vfio.blogspot.it/2014/09/vfio-interrupts-and-how-to-coax-windows.html VFIO tips and tricks].<br />
<br />
A UI tool called [https://github.com/CHEF-KOCH/MSI-utility MSI Utility (FOSS Version 2)] works with Windows 10 64-bit and simplifies the process.<br />
<br />
For some users, enabling MSI on function 0 of an Nvidia card ({{ic|01:00.0 VGA compatible controller: NVIDIA Corporation GM206 [GeForce GTX 960] (rev a1) (prog-if 00 [VGA controller])}}) was not enough; enabling it on the audio function ({{ic|01:00.1 Audio device: NVIDIA Corporation Device 0fba (rev a1)}}) as well was required to fix the issue.<br />
<br />
=== No HDMI audio output on host when intel_iommu is enabled ===<br />
<br />
If after enabling {{ic|intel_iommu}} the HDMI output device of the Intel GPU becomes unusable on the host, setting the option {{ic|igfx_off}} (i.e. {{ic|1=intel_iommu=on,igfx_off}}) might bring the audio back; please read {{ic|Graphics Problems?}} in [https://www.kernel.org/doc/Documentation/Intel-IOMMU.txt Intel-IOMMU.txt] for details about setting {{ic|igfx_off}}.<br />
<br />
=== X does not start after enabling vfio_pci ===<br />
<br />
This is related to the host GPU being detected as a secondary GPU, which causes X to fail/crash when it tries to load a driver for the guest GPU. To circumvent this, a Xorg configuration file specifying the BusID for the host GPU is required. The correct BusID can be acquired from lspci or the Xorg log. [https://www.redhat.com/archives/vfio-users/2016-August/msg00025.html Source →]<br />
<br />
{{hc|/etc/X11/xorg.conf.d/10-intel.conf|<nowiki><br />
Section "Device"<br />
Identifier "Intel GPU"<br />
Driver "modesetting"<br />
BusID "PCI:0:2:0"<br />
EndSection<br />
</nowiki>}}<br />
<br />
=== Chromium ignores integrated graphics for acceleration ===<br />
<br />
Chromium and friends will try to detect as many GPUs as they can in the system and pick which one is preferred (usually the discrete NVIDIA/AMD graphics). They pick a GPU by looking at PCI devices, not the OpenGL renderers available in the system - the result is that Chromium may ignore the integrated GPU available for rendering and try to use the dedicated GPU bound to the {{ic|vfio-pci}} driver, which is unusable on the host system regardless of whether a guest VM is running or not. This results in software rendering being used (leading to higher CPU load, which may also result in choppy video playback, scrolling and general un-smoothness).<br />
<br />
This can be fixed by [[Chromium/Tips and tricks#Forcing specific GPU|explicitly telling Chromium which GPU you want to use]].<br />
<br />
=== VM only uses one core ===<br />
<br />
For some users, even if IOMMU is enabled and the core count is set to more than 1, the VM still only uses one CPU core and thread. To solve this, enable "Manually set CPU topology" in {{ic|virt-manager}} and set it to the desired amount of CPU sockets, cores and threads. Keep in mind that "Threads" refers to the thread count per core, not the total count.<br />
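The same topology can be set directly in the domain XML; a sketch (example values: one socket, 4 cores, 2 threads per core, 8 vCPUs total):<br />

```xml
<vcpu placement='static'>8</vcpu>
<cpu mode='host-passthrough'>
  <topology sockets='1' cores='4' threads='2'/>
</cpu>
```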
<br />
=== Passthrough seems to work but no output is displayed ===<br />
<br />
Make sure if you are using virt-manager that UEFI firmware is selected for your virtual machine. Also, make sure you have passed the correct device to the VM.<br />
<br />
=== virt-manager has permission issues ===<br />
<br />
If you are getting a permission error with virt-manager add the following to your {{ic|/etc/libvirt/qemu.conf}}:<br />
<br />
group="kvm"<br />
user="''user''"<br />
<br />
If that does not work make sure your user is added to the {{ic|kvm}} and {{ic|libvirt}} [[user group]]s.<br />
<br />
=== Host lockup after VM shutdown ===<br />
<br />
This issue seems to primarily affect users running a Windows 10 guest, usually after the VM has been run for a prolonged period of time: the host will experience multiple CPU core lockups (see [https://bbs.archlinux.org/viewtopic.php?id=206050&p=2]). To fix this, try enabling Message Signaled Interrupts on the GPU passed through to the guest. A good guide for how to do this can be found in [https://forums.guru3d.com/threads/windows-line-based-vs-message-signaled-based-interrupts.378044/]. You can also download [https://github.com/TechtonicSoftware/MSIInturruptEnabler this application] for Windows, which should make the process easier.<br />
<br />
=== Host lockup if guest is left running during sleep ===<br />
<br />
VFIO-enabled virtual machines tend to become unstable if left running through a sleep/wakeup cycle and have been known to cause the host machine to lockup when an attempt is then made to shut them down. In order to avoid this, one can simply prevent the host from going into sleep while the guest is running using the following libvirt hook script and systemd unit. The hook file needs executable permissions to work.<br />
<br />
{{hc|/etc/libvirt/hooks/qemu|<nowiki><br />
#!/bin/bash<br />
<br />
OBJECT="$1"<br />
OPERATION="$2"<br />
SUBOPERATION="$3"<br />
EXTRA_ARG="$4"<br />
<br />
case "$OPERATION" in<br />
"prepare")<br />
systemctl start libvirt-nosleep@"$OBJECT"<br />
;;<br />
"release")<br />
systemctl stop libvirt-nosleep@"$OBJECT"<br />
;;<br />
esac<br />
</nowiki>}}<br />
<br />
{{hc|/etc/systemd/system/libvirt-nosleep@.service|<nowiki><br />
[Unit]<br />
Description=Preventing sleep while libvirt domain "%i" is running<br />
<br />
[Service]<br />
Type=simple<br />
ExecStart=/usr/bin/systemd-inhibit --what=sleep --why="Libvirt domain \"%i\" is running" --who=%U --mode=block sleep infinity<br />
</nowiki>}}<br />
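After creating both files, remember to make the hook executable and reload/restart the daemons so they are picked up:<br />

```shell
chmod +x /etc/libvirt/hooks/qemu
systemctl daemon-reload              # register the new template unit
systemctl restart libvirtd.service   # libvirtd only discovers hooks on startup
```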
<br />
=== Cannot boot after upgrading ovmf ===<br />
<br />
If you cannot boot after upgrading from {{Pkg|ovmf}} version 1:r23112.018432f0ce-1 then you need to remove the old {{ic|*VARS.fd}} file in {{ic|/var/lib/libvirt/qemu/nvram/}}:<br />
<br />
# mv /var/lib/libvirt/qemu/nvram/vmname_VARS.fd /var/lib/libvirt/qemu/nvram/vmname_VARS.fd.old<br />
<br />
See {{Bug|57825}} for further details.<br />
<br />
=== QEMU via cli pulseaudio stuttering/delay ===<br />
<br />
Using the following flags for the audio device and chipset might help if you are running into stuttering/delayed audio issues when running QEMU via the command line:<br />
<br />
qemu-system-x86_64 \<br />
-machine pc-i440fx-3.0 \<br />
-device hda-micro \<br />
-soundhw hda \<br />
-...<br />
<br />
As noted in [[#QEMU 3.0 audio changes|QEMU 3.0 audio changes]] the specified chipset will include a series of audio patches.<br />
<br />
Setting {{ic|QEMU_AUDIO_TIMER_PERIOD}} to values higher than 100 might also help (values lower than 100 were not tested).<br />
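Since it is an environment variable, it can simply prefix the invocation (200 here is only an example value):<br />

```shell
QEMU_AUDIO_TIMER_PERIOD=200 qemu-system-x86_64 \
    -machine pc-i440fx-3.0 \
    ...
```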
<br />
=== Bluescreen at boot since Windows 10 1803 ===<br />
<br />
Since Windows 10 1803 there is a problem when using "host-passthrough" as the CPU model: the machine cannot boot and either boot loops or shows a bluescreen.<br />
You can work around this by running:<br />
<br />
# echo 1 > /sys/module/kvm/parameters/ignore_msrs<br />
<br />
To make this permanent, create a modprobe file {{ic|/etc/modprobe.d/kvm.conf}}:<br />
<br />
options kvm ignore_msrs=1<br />
<br />
=== AMD Ryzen / BIOS updates (AGESA) yield "Error: internal error: Unknown PCI header type ‘127’" ===<br />
<br />
AMD users have been experiencing breakage of their KVM setups after updating the BIOS on their motherboard. There is a kernel [https://clbin.com/VCiYJ patch] (see [[Kernel/Arch Build System]] for instructions on compiling kernels with custom patches) that can resolve the issue as of now (7/28/19), but this is not the first time AMD has made an error of this nature, so take this into account if you are considering updating your BIOS in the future as a VFIO user.<br />
<br />
=== Host crashes when hotplugging Nvidia card with USB ===<br />
<br />
If attempting to hotplug an Nvidia card with a USB port, you may have to blacklist the {{ic|i2c_nvidia_gpu}} driver. Do this by adding the line {{ic|blacklist i2c_nvidia_gpu}} to {{ic|/etc/modprobe.d/blacklist.conf}}.<br />
<br />
== See also ==<br />
<br />
* [https://bbs.archlinux.org/viewtopic.php?id=162768 Discussion on Arch Linux forums] | [https://archive.is/kZYMt Archived link]<br />
* [https://docs.google.com/spreadsheet/ccc?key=0Aryg5nO-kBebdFozaW9tUWdVd2VHM0lvck95TUlpMlE User contributed hardware compatibility list]<br />
* [https://pastebin.com/rcnUZCv7 Example script] from https://www.youtube.com/watch?v=37D2bRsthfI<br />
* [https://vfio.blogspot.com/ Complete tutorial for PCI passthrough]<br />
* [https://www.redhat.com/archives/vfio-users/ VFIO users mailing list]<br />
* [ircs://chat.freenode.net/vfio-users #vfio-users on freenode]<br />
* [https://www.youtube.com/watch?v=aLeWg11ZBn0 YouTube: Level1Linux - GPU Passthrough for Virtualization with Ryzen]<br />
* [https://www.reddit.com/r/VFIO /r/VFIO: A subreddit focused on vfio]<br />
* [https://github.com/intel/gvt-linux/wiki/GVTd_Setup_Guide GVT-d: passthrough of an entire integrated GPU]</div>
<hr />
<div><br />
=== mstrthealias: Intel 7800X / X299, GTX 1070 ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel(R) Core(TM) i7-7800X CPU <br />
* '''Motherboard''': ASRock X299 Taichi (Revision: A, BIOS/UEFI Version: 1.60A)<br />
* '''GPU''': Asus STRIX GTX 1070<br />
* '''RAM''': 32GB DDR4<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': Kernel version 4.14.8-1-skx (patched crystal_khz=24000).<br />
** Custom patches:<br />
*** skylakex-crystal_khz-24000.patch (see below)<br />
** Patches used from linux-ck:<br />
*** enable_additional_cpu_optimizations_for_gcc_v4.9+_kernel_v4.13+.patch<br />
*** 0001-add-sysctl-to-disallow-unprivileged-CLONE_NEWUSER-by.patch<br />
*** 0001-e1000e-Fix-e1000_check_for_copper_link_ich8lan-retur.patch<br />
*** 0002-dccp-CVE-2017-8824-use-after-free-in-DCCP-code.patch<br />
** Config:<br />
*** PREEMPT, NO_HZ_IDLE, 300HZ, MSKYLAKE<br />
* GitHub: Link TBD<br />
* Benchmarks: https://imgur.com/a/hIfQD<br />
* Using '''libvirt/QEMU''': libvirt 3.10.0 / QEMU 2.11.0<br />
* Issues you have encountered, special steps taken to make something work a bit better, etc.<br />
** Skylake-X default clock incorrect in 4.14.8 (https://bugzilla.kernel.org/show_bug.cgi?id=197299)<br />
*** Was unable to resolve timing issue using adjtimex<br />
*** Patching kernel source to '''crystal_khz = 24000''' resolved timing/performance issues<br />
** Enable 'Intel SpeedShift' in BIOS, installed '''cpupower''', set governor='performance'<br />
*** Verify: dmesg|grep HWP<br />
**** intel_pstate: HWP enabled<br />
** Enable HT in BIOS<br />
** Enable the 'deadline' I/O scheduler:<br />
*** echo 'ACTION=="add|change", KERNEL=="sd*[!0-9]|sr*", ATTR{queue/scheduler}="deadline"' >> /etc/udev/rules.d/60-schedulers.rules<br />
** Bypass x2apic opt-out:<br />
*** GRUB_CMDLINE_LINUX="... intremap=no_x2apic_optout ..."<br />
** Isolate cores for Windows VM:<br />
*** GRUB_CMDLINE_LINUX="... isolcpus=2-5,8-11 nohz_full=2-5,8-11 rcu_nocbs=2-5,8-11 ..."<br />
** Use hugepages (2MB) for all VM memory allocation<br />
** memoryBacking: <hugepages/><nosharepages/><locked/><access mode='private'/><allocation mode='immediate'/><br />
** Extracted rom from GPU; used for <rom file=../> config<br />
** Using MSI for GPU and GPU Audio (configured in Windows registry; FPS seems same as using line-based interrupts)<br />
* Hardware setup<br />
** PCIE1: NVIDIA GeForce GT 710B (for host)<br />
** Onboard: ASRock XHCI 3.1 USB (for host)<br />
** Onboard: Intel I219 NIC (bridged)<br />
** PCIE3: Asus Xonar STX (passthrough to Win10)<br />
** PCIE5: NVIDIA GeForce GTX 1070 (passthrough to Win10)<br />
** M2_1: Samsung 960 EVO 500GB (passthrough to Win10)<br />
** Onboard: Intel XHCI USB 3.0 (passthrough to Win10)<br />
** Onboard: Intel HDA (passthrough to Win10)<br />
** Onboard: Intel I211 NIC (passthrough to Win10)<br />
** Onboard: ASRock AHCI SATA A1/A2 (passthrough to Linux)<br />
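The memoryBacking line above corresponds to a libvirt domain XML fragment along these lines (a sketch reconstructed from the flags listed, not the author's exact file):<br />

```xml
<memoryBacking>
  <hugepages/>
  <nosharepages/>
  <locked/>
  <access mode='private'/>
  <allocation mode='immediate'/>
</memoryBacking>
```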
<br />
=== DragoonAethis: 6700K, GA-Z170X-UD3, GTX 1070 ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i7-6700K (using iGPU as the host GPU)<br />
* '''Motherboard''': Gigabyte GA-Z170X-UD3 (Revision 1.0, BIOS/UEFI Version: F23d)<br />
* '''GPU''': MSI GeForce 1070 Gaming X (10Gbps)<br />
* '''RAM''': 16GB DDR4 2400MHz<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': "Vanilla" Linux (no ACS patch needed).<br />
* Using '''libvirt''': XML domain, helper scripts, IOMMU groups, etc available in [https://github.com/DragoonAethis/VFIO my VFIO repository].<br />
* '''Guest OS''': Windows 8.1 Pro.<br />
* The entire HDD is passed to the VM as a raw device (formatted as a single NTFS partition).<br />
* USB keyboard and mouse are passed to the guest VM and shared with the host with Synergy.<br />
* Virtualized audio: PulseAudio -> local Unix socket. Previously, I had a slightly more complex setup in which PA on the host was configured to accept TCP connections, and the envvars required for QEMU to use PA were pointed at the PA server running on 127.0.0.1. This way it was not required to change the QEMU user (exact details in the repo), but it introduced other minor issues I have since resolved.<br />
* Bridged networking (with NetworkManager's and [https://www.happyassassin.net/2014/07/23/bridged-networking-for-libvirt-with-networkmanager-2014-fedora-21/ this tutorial's] help) is used. {{ic|bridge0}} is created, {{ic|eth0}} interface is bound to it. STP disabled, VirtIO NIC is configured in the VM and that VM is seen in the network just as any other computer (and is being assigned an IP address from the router itself, can communicate freely with other computers).<br />
* For some reason, enabling intel_iommu=on on the kernel cmdline without CSM support enabled in UEFI causes a black screen on boot. Enable it (the Windows 8/10 features option needs to be enabled to show "CSM Support"; selecting "Other OS" hides it).<br />
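The PulseAudio-over-Unix-socket approach above can be sketched with QEMU's legacy audio environment variables; the socket path here assumes a typical systemd user session and is not taken from the author's repo:<br />

```shell
# Point QEMU's legacy PulseAudio driver at the local Unix socket
# instead of a TCP server on 127.0.0.1.
export QEMU_AUDIO_DRV=pa
export QEMU_PA_SERVER="/run/user/$(id -u)/pulse/native"
echo "PA server: $QEMU_PA_SERVER"
```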
<br />
=== Manbearpig3130's Virtual Gaming Machine ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i7-6850K 3.6GHz<br />
* '''Motherboard''': Gigabyte x99-Ultra Gaming (Revision 1.0, BIOS/UEFI Version: F4)<br />
* '''Host GPU''': AMD Radeon HD6950 1GB<br />
* '''Guest GPU''': AMD R9 390 8GB<br />
* '''RAM''': 32GB G-Skill Ripjaws DDR4 running at 3200MHz<br />
<br />
Configuration:<br />
<br />
* '''Host Kernel''': Kernel version Linux 4.7.2-1.<br />
* Using '''libvirt QEMU/KVM with OVMF''': link to domain XMLs/scripts/notes: https://github.com/manbearpig3130/MBP-VT-d-gaming-machine<br />
* '''Host OS''': Arch Linux<br />
* '''Guest OS''': Windows 10 Pro<br />
* 2x 480GB SSDs set up in LVM striped mode (with mdadm), formatted to ext4 and mounted in Linux, contain the guest's qcow2 virtual VirtIO disk file.<br />
* USB Host controller is passed through, giving most USB ports to the VM, leaving my USB 3.1 controller with attached USB hub for the host.<br />
* Motherboard has two NICs, one is passed into VM (Works perfectly after installing Killer NIC Driver).<br />
* VM gets dedicated 16GB RAM via static hugepages.<br />
* CPU pinning increased performance considerably.<br />
* Windows boots straight into Steam big picture mode on primary display (43" Sony Bravia). Overall an awesome gaming machine that meets my gaming needs and lust for GNU/Linux at the same time.<br />
* '''Quirks''':<br />
* I sometimes have to reinstall the AMD drivers in Windows to get HDMI audio working properly, or roll back to the Windows HDMI driver. I normally use a USB headset, which works fine anyway.<br />
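The CPU pinning mentioned above is configured in libvirt with a {{ic|&lt;cputune&gt;}} block; a sketch (the vCPU-to-core mapping here is illustrative, not this setup's actual layout):<br />

```xml
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='3'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='5'/>
</cputune>
```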
<br />
=== Bretos' Virtual Gaming Setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i7-7700k<br />
* '''Motherboard''': Z270 GAMING M3 (MS-7A62)<br />
* '''GPU''': ASUS GeForce GTX960<br />
* '''RAM''': Kingston HyperX 3x8GB DDR4 2.4GHz<br />
* '''Storage''': 2x Corsair MP500 m.2 240G SSDs in mdadm RAID0, 1x WD Black 1TB for storage. 100GB LVM volume as writeback cache for HDD <br />
<br />
Configuration:<br />
<br />
* '''Kernel''': vanilla<br />
* '''Host OS''': Arch Linux<br />
* '''Guest OS''': Windows 10 Pro<br />
* Using '''libvirt/QEMU''': GitHub config repository: [https://github.com/Bretos/vfio]<br />
* Issues encountered: audio. Had to get a USB audio adapter and pass it through.<br />
* No issues other than audio. Works like a charm.<br />
<br />
=== Skeen's Virtual Gaming Rack Machine ===<br />
<br />
Still work in progress.<br />
<br />
Hardware:<br />
<br />
* '''CPU''': AMD FX(tm)-8350<br />
* '''Motherboard''': MSI 970A SLI Krait Edition (MS-7693) (Revision 5.0, BIOS/UEFI Version: 25.4)<br />
* '''Host GPU''': ASUS GeForce GTX 480 1536MB<br />
* '''Guest GPU''': ASUS GeForce GTX 480 1536MB<br />
* '''RAM''': 2x8GB Kingston HyperX Fury White DDR3 1866MHz<br />
* '''Storage''': 2x250GB Samsung EVO (MZ-75E250) set up in LVM striped mode (with mdadm), 2x1TB WD Blue (WDC_WD10SPCX) for storage. 250GB LVM volume as writeback cache for HDD.<br />
<br />
Configuration:<br />
<br />
* '''Host Kernel''': Linux 4.9.0-3 (No ACS)<br />
* '''Host OS''': Debian Stretch<br />
* '''Guest OS''': Windows 10 Home (10_1703_N, International Edition)<br />
* Using '''libvirt QEMU/KVM with OVMF''': See [https://github.com/Skeen/libvirt-gpu-passthrough Github]<br />
<br />
Issues you have encountered:<br />
<br />
* Identical GPUs; solved [[PCI_passthrough_via_OVMF#Using_identical_guest_and_host_GPUs|using this section on the wiki]], but with the script from the [[Talk:PCI_passthrough_via_OVMF#Using_identical_guest_and_host_GPUs_-_did_not_work_for_me.|corresponding discussion page]]. Several adaptations for Debian were required too, but they are out of scope here.<br />
* "Error 43: Driver failed to load";<br />
** Spoofing vendor_id caused Windows to crash during boot-up.<br />
** Linux VMs complained about being unable to find the GPU from GRUB2 and booted in blind mode, but would still pick up the graphics card during the boot process and remain functional until VM reboot.<br />
** Vendor_id spoofing turned out to work after solving the real problem (missing UEFI compatibility in the VBIOS).<br />
* Missing UEFI (OVMF) compatibility in VBIOS;<br />
** Requested a GOP/UEFI-compatible VBIOS upgrade from ASUS, but ASUS could neither understand the request nor provide the upgrade (the only thing supplied was standard support answers).<br />
** No compatible VBIOS was found at [https://www.techpowerup.com/vgabios/ TechPowerUp].<br />
** Finally solved by manually hacking GOP/UEFI support into the ROM, using [http://www.win-raid.com/t892f16-AMD-and-Nvidia-GOP-update-No-requests-DIY.html GOPupd]. Current rom was dumped within a Windows 10 VM using GPU-Z, then modified using GOPupd, pulled to Linux, and provided using the rom file parameter in the VM XML file.<br />
* VM only uses one core (even with mode=host-passthrough): solved [[PCI_passthrough_via_OVMF#VM_only_uses_one_core|using this section on the wiki]].<br />
<br />
Quirks:<br />
<br />
* The GPU that is being passed through, [[PCI_passthrough_via_OVMF#Passing_through_a_device_that_does_not_support_resetting|does not support resetting]], and thus doing a hard-reboot / shutdown of the VM locks the GPU.<br />
** The VM cannot be started again unless the Host machine is rebooted.<br />
*** A clean reboot / shutdown allows the VM to start up again as expected without a host reboot.<br />
** Removing and rescanning the PCI device does not change anything.<br />
** No further attempts at power-cycling the GPU from the host have been made (yet).<br />
* [[PCI_passthrough_via_OVMF#Passing_VM_audio_to_host_via_PulseAudio|Passing VM audio to host via PulseAudio]] results in heavy crackling.<br />
** Using [[PCI_passthrough_via_OVMF#Slowed_down_audio_pumped_through_HDMI_on_the_video_card|Message-Signaled Interrupts]] has not been attempted (yet).<br />
<br />
=== droserasprout poor man's setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i3-6100<br />
* '''Motherboard''': ASRock H110M2 D3 (BIOS version 0603)<br />
* '''Host GPU''': Intel HD 530<br />
* '''Guest GPU''': Sapphire Radeon R7 360<br />
* '''RAM''': Apacer 8GB 75.C93DE.G040C, Kingston 4GB 99U5401-011.A00LF<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': linux-lts 4.9.67-1 (vanilla)<br />
* '''Host OS''': Arch Linux<br />
* '''Guest OS''': Windows 10 Pro 1709 (build 16299.98)<br />
* Using '''libvirt/QEMU''': See my configs and IOMMU groups on [https://github.com/droserasprout/win10-vfio-configs Github]<br />
* HDD partition is passed to the VM as a raw virtio device.<br />
* HD Audio is passed too. Works fine with both playing and recording, no latency issues or glitches. After VM is powered off host audio works fine too.<br />
* Guest's latency is slightly better when CPU cores are isolated for VM.<br />
* i2c-dev module added to bypass 'EDID signature' error when switching HDMI. Without it I had to switch video output before starting VM for some reason.<br />
* intremap=no_x2apic_optout kernel option added to bypass motherboard firmware falsely reporting x2APIC method is not supported. Seems to have a strong influence on the guest's latency.<br />
* Overall performance is pretty close to the native OS setup.<br />
<br />
=== prauat: 2xIntel(R) Xeon(R) CPU E5-2609 v4, 2xGigabyte GeForce GTX 1060 6GB G1 Gaming, Intel S2600CWTR ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''':2xIntel(R) Xeon(R) CPU E5-2609 v4 <br />
* '''Motherboard''': Intel S2600CWTR(Revision ???, BIOS/UEFI Version: SE5C610.86B.01.01.0022.062820171903)<br />
* '''GPU''': 2xGigabyte GeForce GTX 1060 6GB G1 Gaming [GeForce GTX 1060 6GB] (rev a1)<br />
* '''RAM''': Samsung M393A2G40EB1-CPB 2133 MHz 64GB (4x16GB)<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': Linux 4.14.15-1-ARCH #1 SMP PREEMPT<br />
* Using '''libvirt/QEMU''': https://github.com/prauat/passvm/blob/master/generic.xml<br />
* Most important: when using the Nvidia driver, hide virtualization from the guest with <kvm><hidden state='on'/></kvm><br />
* Configuration works with an Arch Linux guest OS; still a work in progress.<br />
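In libvirt domain XML, the {{ic|&lt;kvm&gt;&lt;hidden state&#61;'on'/&gt;&lt;/kvm&gt;}} element noted above sits inside {{ic|&lt;features&gt;}}; a minimal sketch:<br />

```xml
<features>
  <acpi/>
  <apic/>
  <kvm>
    <hidden state='on'/>
  </kvm>
</features>
```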
<br />
=== Dinkonin's virtual gaming/work setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel(R) Core(TM) i7-7700K CPU @ 4.60GHz<br />
* '''Motherboard''': MSI Z270 GAMING PRO CARBON (MS-7A63) BIOS Version: 1.80<br />
* '''GPU''': 1x Gigabyte GeForce GTX 1050 2GB (host), 1x MSI GeForce 1080 AERO 8GB(guest)<br />
* '''RAM''': 32GB DDR4<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': Kernel version linux 4.15.2-2-ARCH.<br />
* Using '''libvirt/QEMU (patched from AUR) with OVMF'''<br />
* Installed qemu-patched from the AUR because of crackling/delayed sound with PulseAudio (still hear occasional pops/clicks while gaming).<br />
* Patched the video BIOS with https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher, because of the error: <br />
 vfio-pci 0000:01:00.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xffff<br />
* Single monitor setup, implemented full software KVM(for host and guest) described here: https://rokups.github.io/#!pages/full-software-kvm-switch.md<br />
<br />
=== pauledd's unexceptional setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i7 6700K<br />
* '''Motherboard''': Gigabyte GA-Z170N-WIFI Retail (Revision 1.0 , BIOS/UEFI Version: F20)<br />
* '''GPU''': 8GB Palit GeForce GTX 1070 Dual Aktiv PCIe 3.0 x16 (Retail)<br />
* '''RAM''': 16GB G.Skill RipJaws V DDR4-3200 DIMM CL16 Dual Kit<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': 4.15.2-gentoo<br />
* Using '''libvirt/QEMU''': libvirt-4.0.0, qemu-2.11.1, https://github.com/pauledd/GPU-Passthrough/blob/master/win10-2.xml , using vfio kernel module<br />
* Had to dump the VBIOS at the host while the GPU was normally attached (and drivers loaded) (see https://stackoverflow.com/a/42441234).<br />
* Had to set CPU settings manually according to my CPU (host-passthrough, sockets: 1, cores: 4, threads: 2) or some games would regularly crash. See my XML for how to insert the VBIOS.<br />
* Still have audio clicking/lag with PulseAudio, but that's OK for me. No further patching etc. needed - works out of the box without any issues.<br />
* 3DMark Results Time Spy Graphic Score: Native Windows 10: 5564 , GPU-Passthrough: 5541<br />
<br />
=== hkk's Windows gaming machine (6700K, 1070, 16GB) ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i7-6700K 4.5GHz<br />
* '''Motherboard''': AsRock Fatality Gaming K6 Z170 (rev. 1.05)<br />
* '''Host GPU''': Intel GPU HD530 with 1GB shared memory<br />
* '''Guest GPU''': Gigabyte GeForce GTX1070 G1 Gaming 8GB<br />
* '''RAM''': 16GB G.Skill RipjawsV @ 3333 MHz CL14-15-15-31-2T [DDR4]<br />
<br />
Configuration:<br />
<br />
* '''Host Kernel''': Kernel version Linux 4.15.7-1-vfio (with ACS patch included).<br />
* Using '''libvirt QEMU/KVM with OVMF'''<br />
* '''Host OS''': Arch Linux<br />
* '''Guest OS''': Windows 10 Pro<br />
* 128GB Intel 600p SSD split into 3 partitions: 512MB for EFI, 30GB for / in Btrfs, and the rest for Windows 10 installed straight on the SSD.<br />
* Two more HDDs for Windows: 1TB and 650GB.<br />
* Passed through specific devices like an X360 controller and some individual USB ports.<br />
* One NIC behind NAT on the VM.<br />
* VM gets dedicated 8GB RAM via static hugepages.<br />
* CPU pinning increased performance considerably, and the guest gets 4/4 cores of my 4/8 CPU.<br />
* Windows boots on the second screen with a simple script that shuts down the display with xrandr.<br />
* Using Synergy to share mouse and keyboard between systems.<br />
* '''Quirks''':<br />
* Synergy is not perfect and will not entirely work in some games.<br />
* No boot screen. Display is turning on only when Windows is up and ready to go.<br />
<br />
=== sitilge's treachery ===<br />
<br />
Full info: https://git.sitilge.id.lv/sitilge/dotfiles<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i5 6600K<br />
* '''Motherboard''': Asus Z170i<br />
* '''GPU''': Gigabyte Radeon RX460 OC 2GB<br />
* '''Storage''': Samsung 850 EVO 500GB<br />
* '''RAM''': Corsair 16GB DDR4<br />
* '''Mouse, Keyboard''': Logitech M90, Vortex Pok3r<br />
<br />
Host Configuration:<br />
<br />
* '''Kernel''': linux-vfio<br />
* '''Packages''': qemu-git, virtio-win, ovmf<br />
<br />
Guest Configuration:<br />
<br />
* '''OS''': Windows 10 Pro<br />
* '''CPU''': host<br />
* '''Motherboard''': host<br />
* '''GPU''': passthrough<br />
* '''Storage''': 64GB<br />
* '''RAM''': 8GB<br />
* '''Mouse, Keyboard''': passthrough<br />
<br />
Notes:<br />
<br />
* You can easily symlink the config files using {{ic|stow -t / boot mkinitcpio}} and then run {{ic|mkinitcpio -p linux-vfio}}.<br />
* {{ic|-smp cores&#61;4}} - guest might utilize only one core otherwise.<br />
* {{ic|-soundhw ac97}} - I'm passing motherboard audio, thus ac97. Download, unzip and install the Realtek AC97 drivers in the guest.<br />
* Use virtio drivers for both block devices and network. For example, the ping went down from 250 to 50.<br />
* Mouse and keyboard passthrough solved the terrible lag problem which was present in emulation mode.<br />
* Make sure virtualization is supported and enabled in your firmware (UEFI). The option was hidden in a submenu in my case.<br />
* As trivial as it sounds, check your cables.<br />
* Be patient - it took more than 10 minutes for the guest to recognize the GPU.<br />
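Putting the notes above together, the relevant part of the QEMU command line looks roughly like this (a sketch; storage, network and passthrough options are elided):<br />

```
qemu-system-x86_64 \
    -enable-kvm \
    -cpu host \
    -smp cores=4 \
    -soundhw ac97 \
    ...
```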
<br />
=== chestm007's hackery ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Ryzen 7 1800x<br />
* '''Motherboard''': Asus ROG Crosshair VI (Revision 1, BIOS/UEFI Version: 3502)<br />
* '''GPU''': Asus ROG RX480oc 8GB<br />
* '''RAM''': 32gb Ripjaws 2400mhz<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': 4.16.12-1-ARCH.<br />
* Using '''libvirt/QEMU''': libvirtd (libvirt) 4.3.0, QEMU emulator version 2.12.0, <br />
<br />
Notes: <br />
<br />
* Using ICH6 audio - works fine for me.<br />
* Have a working looking-glass setup; however, I cannot get spice to pass through keyboard and mouse. Currently using a mixture of Synergy and a dedicated screen as a workaround.<br />
<br />
=== Eduxstad's Infidelity ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Ryzen 2600X @ 3.7 GHZ <br />
* '''Motherboard''': ASUS PRIME B350-PLUS(BIOS/UEFI Version: 4011)<br />
* '''GPU1 (Guest)''': MSI 390 8GB @ Stock<br />
* '''GPU2 (Host)''': XFX 550 4GB @ Stock<br />
* '''RAM''': 2 x 8GB (16GB) @ 3000 MHz<br />
* '''Guest OS''': Windows 8.1 Embedded Pro<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': 4.17.3-1-ARCH (vanilla).<br />
* Using '''libvirt/QEMU''': libvirt/virt-manager (https://github.com/eduxstad/vfio-config).<br />
* Look in the repository for complete documentation of extra steps taken<br />
* Overview: VM managed using virt-manager, using Looking Glass for primary I/O and the built-in SPICE display server as backup. Passing VM audio back to PulseAudio. Using hugepages for RAM. SCSI drivers installed for hard drive support.<br />
<br />
=== Pi's vr-vm ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': i7-8700k @ 4.8 GHz<br />
* '''Motherboard''': MSI Gaming Pro Carbon (BIOS/UEFI Version: A.40/5.12)<br />
* '''GPU''': Palit RTX 2080 Ti<br />
* '''RAM''': 4x8GB G.Skill DDR4 @ 3000 MHz<br />
<br />
Configuration:<br />
<br />
* Kernel: latest mainline (rc if available)<br />
** custom built with ZFS, WireGuard<br />
** ''CONFIG_PREEMPT_VOLUNTARY=y'' to work around QEMU bug with long guest boot times<br />
* Startup scripts/additional info: https://github.com/PiMaker/Win10-VFIO<br />
* Issues encountered:<br />
** PUBG would not launch at all<br />
*** Solution: Enable the HyperV clock with <timer name='hypervclock' present='yes'/> and disable hpet with <timer name='hpet' present='no'/><br />
** VR would start to stutter badly after about 20-30 minutes of playtime (this one took me about 2 weeks to finally figure out :-))<br />
*** Solution:<br />
**** Enable invariant tsc passthrough with <feature policy='require' name='invtsc'/> (required even if using host-passthrough!)<br />
**** Enable MSI for the GPU (using tool from [https://forums.guru3d.com/threads/windows-line-based-vs-message-signaled-based-interrupts-msi-tool.378044/ here])<br />
**** Enable vAPIC and synic in the HyperV configuration<br />
**** Manually move all IRQs to host cores using qemu_fifo.sh script from my GitHub repo above<br />
* Overview: SteamVR-capable gaming and workstation rig, passing through NVIDIA GPU and onboard USB-controller (leaving an additional ASMedia USB port to the host). 22 GB hugepages memory, 10 of 12 cores (with SMT) passed through. Audio working via Scream (https://github.com/duncanthrax/scream) - with IVSHMEM, surprisingly low latency and no stutters.<br />
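The PUBG and VR-stutter fixes above live in the domain XML's {{ic|&lt;features&gt;}}, {{ic|&lt;cpu&gt;}} and {{ic|&lt;clock&gt;}} sections; a sketch combining them (element placement per libvirt's domain format; {{ic|offset&#61;'localtime'}} is a common choice for Windows guests and an assumption here, other settings elided):<br />

```xml
<features>
  <hyperv>
    <vapic state='on'/>
    <synic state='on'/>
  </hyperv>
</features>
<cpu mode='host-passthrough'>
  <feature policy='require' name='invtsc'/>
</cpu>
<clock offset='localtime'>
  <timer name='hypervclock' present='yes'/>
  <timer name='hpet' present='no'/>
</clock>
```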
<br />
=== coghex's gaming box ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': i7-8086k @ 5.0 GHZ (8086k is just a binned 8700k)<br />
* '''Motherboard''': GIGABYTE Z370 AORUS Gaming 7 rev1.0 (BIOS/UEFI Version: F15a)<br />
* '''GPU''': GIGABYTE GV-N108TAORUSX WB-11GD AORUS GeForce GTX 1080 Ti Waterforce WB Xtreme Edition 11G @ ~2Ghz<br />
* '''RAM''': 4 x 8GB (32GB) Corsair Dominator Platinum @ 3600 MHz (XMP)<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': linux-zen-5.5.8.zen1-1<br />
* '''Modules''': raid0 raid1 md_mod ext4 vfat ahci vfio_pci vfio vfio_iommu_type1 vfio_virqfd usbhid it87 (aur version is unmaintained and the support for the ITE8686E chip on this board is limited, replace it87 source with that which is found [https://github.com/andreychernyshev/it87-8613E here] for more comprehensive support)<br />
* '''Virsh''': virsh-5.10.0<br />
* '''Qemu''': qemu-system-x86_64-4.2.0 machine='pc-i440fx-4.2'<br />
* '''Performance Services''': [[Improving_performance#irqbalance|irqbalance-1.6.0]], [[Improving_performance#Ananicy|ananicy-git-2.1.0.r22]], [[CPU_frequency_scaling#cpupower|cpupower5.5-1]]<br />
* EDIT (2020): much has changed since this setup was posted years ago, and a custom kernel is no longer needed on this hardware; everything works perfectly...<br />
* scripts, libvirt XML, and personal configs can be found here: https://github.com/coghex/hoest<br />
* host boot options: intel_iommu=on iommu=pt rd.driver.pre=vfio-pci acpi_enforce_resources=lax<br />
* systemd modprobe.d options: kvm ignore_msrs=1 (avoids critical bugs), kvm report_ignored_msrs=N (cleans up journal logs)<br />
* libvirt features: acpi, apic, kvm hidden state='on', vmport state='off'<br />
* guest hyper-v options: hv-relaxed, hv-vapic, hv-spinlocks (retries='8191'), hv-vpindex, hv-runtime, hv-synic, hv-stimer, hv-stimer-direct, hv-reset, hv-vendor_id (value='1234567890ab'), hv-frequencies, hv-reenlightenment, hv-tlbflush, hv-ipi, (hv-evmcs and hv-no-nonarch-coresharing seemingly do not work yet in virsh)<br />
* Make sure to use the multifunction field for the GPU's HDMI audio controller and set both functions to the same slot, otherwise the audio interrupts will hang.<br />
* I'm running the clock at 100Hz; people running it at 1000 with the zen or ck kernel should know that the MuQSS scheduler works the same regardless of this speed, and 1000 will just add more useless interrupts.<br />
* CPU pinning works best for single-VM performance; default host-passthrough works best for multiple running VMs.<br />
* On Windows, the [https://github.com/CHEF-KOCH/MSI-utility MSI_util_v2] gets used after every update to reset MSI interrupts on the GPU.<br />
<br />
Hardware Specific:<br />
<br />
* '''Fully-Functional Passthrough Devices''': this motherboard has many PCI slots, all of these devices have been working flawlessly with little setup for years now:<br />
** Inatek USB Card: KT5001 [https://www.amazon.com/Inateck-Express-15-Pin-Connector-KT5001/dp/B00FPIMJEW]<br />
** Creative Sound Card: 70SB155000001 [https://www.amazon.com/Creative-Labs-70SB155000001-Blaster-PCI-Express/dp/B01LYT7U99]<br />
** EDUP WiFi Card: AC9636GS (must use virtio usb passthrough for bluetooth functionality) [https://www.amazon.com/EDUP-3000Mbps-802-11AX-Bluetooth-EP-AC9636GS/dp/B082F5D4SM]<br />
** Intel Optane SSD: SSDPED1D480GASX [https://www.amazon.com/Intel-Optane-900P-480GB-XPoint/dp/B0772T4BVZ]<br />
** Zotac GeForce GT 710: ZT-71304-20L (this one does not seem to be available on amazon anymore, a shame since its one of the few high performance PCIEx1 cards...) [https://www.amazon.com/ZOTAC-GeForce-Profile-Graphic-ZT-71304-20L/dp/B01E9Z2D60]<br />
* None of the proprietary Gigabyte software works; in fact, it blue-screens Windows and installs itself as a startup program, forever locking you out.<br />
* If anyone else uses this exact motherboard: there are two internal USB IOMMU groups, even with the ACS patch. One includes the USB ports labeled "USB 3.1", and the other includes all the other USB ports. This means that if you want more than just a keyboard and mouse, you will need either a USB hub plugged into the 3.1 slot and passed through, or a PCIe USB card.<br />
* The two Ethernet ports are in different IOMMU groups, making this a perfect motherboard for VFIO.<br />
* The ACS patch is needed on this motherboard if you want to use two graphics cards at once in separate IOMMU groups. This sets the main GPU to PCIe x8 instead of x16.<br />
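The multifunction note above (GPU and its HDMI audio controller on the same guest slot) corresponds to hostdev entries like these in the domain XML (all PCI addresses here are illustrative, not this board's actual topology):<br />

```xml
<!-- GPU (function 0x0), marked multifunction on the guest side -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
</hostdev>
<!-- HDMI audio (function 0x1), same guest slot -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
</hostdev>
```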
<br />
=== Roobre's VFIO setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel(R) Core(TM) i5-6600K CPU @ 3.50GHz (OC'ed to 4.50)<br />
* '''Motherboard''': ASUS ROG MAXIMUS VIII GENE, v3801<br />
* '''GPU''': EVGA GTX 1080Ti<br />
* '''RAM''': 32GB DDR4 2400 (2x Ballistix)<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': Latest -ARCH or -zen (4.17.10-1-zen at the time of writing)<br />
* Using '''libvirt/QEMU''': libvirt 4.5.0-1, qemu 2.12.0-2. Config: https://gist.github.com/roobre/d2d20cc638c5030f360b500000da0f88{{Dead link|2020|02|25}}<br />
* '''ZFS''' volumes passed as raw devices for hard drives.<br />
* '''VirtIO all the things!''' Download drivers from https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/<br />
<br />
Issues: <br />
<br />
* PulseAudio never worked well (too much crackling), so I ended up passing through a USB 3.1 PCI controller and connecting a USB audio card to it. That card is then connected to one of my motherboard's inputs and echoed using PulseAudio's `loopback` module.<br />
<br />
* Synergy works really well. In some games (ones that take control of the mouse pointer, e.g. first-person), you need to lock the mouse cursor to the VM window to avoid issues (camera moving too fast).<br />
<br />
* Do not forget to add the needed snippet for the nvidia driver to run ([[PCI passthrough via OVMF#"Error 43: Driver failed to load" on Nvidia GPUs passed to Windows VMs]])<br />
<br />
=== laenco's VFIO setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Ryzen 9 3950X @ 4.15Ghz all-cores via PBO<br />
* '''Motherboard''': Asus ROG STRIX X470-F GAMING (BIOS/UEFI Version: 5406)<br />
* '''GPU1 (Guest)''': Palit GeForce GTX 1080 8GB @ Stock<br />
* '''GPU2 (Host)''': MSI RX 570 8GB @ Stock<br />
* '''RAM''': 4 x 16GB (64GB) @ 3333 MHz<br />
<br />
Configuration:<br />
<br />
* '''Guest OS''': Windows 10 Pro<br />
* '''Kernel''': 5.4.13-arch1-1-gc (-ck is also good). No ACS patch.<br />
* Using vanilla '''QEMU 4.2.0'''<br />
* AMD Ryzen is currently (2020-01-20) bugged with the SMP threads option - the VM gets stuck on start.<br />
* Got the classic Nvidia error 43 - fixed the classic way. Also added some CPU flags which are set automatically with kvm=on, found here https://github.com/qemu/qemu/blob/master/target/i386/cpu.c#L4008<br />
* As pure QEMU has no option to pin CPU cores and its own threads, I am using the Python script "cpu_affinity" - credits to https://github.com/zegelin/qemu-affinity/ (there is also a copy in my repo). Requires debug-threads=on.<br />
* Using dynamically allocated 2MB hugepages<br />
* Heavily using VirtIO<br />
* Using a hardware USB switch (Aten US224-AT) and a many-to-one HDMI switch, which allow me to use one monitor, mouse, keyboard and some USB devices and switch them between host and guest at the press of a button.<br />
* Repo with current major system config and script for VM could be found here https://github.com/laenco/vfio-config<br />
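The dynamically allocated 2MB hugepages mentioned above can be requested at runtime via sysctl; a sketch for 16GB of guest memory (the count is illustrative - each page is 2MB, so 8192 pages):<br />

```
# /etc/sysctl.d/40-hugepages.conf (path illustrative)
vm.nr_hugepages = 8192
```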
<br />
=== Poncho's VFIO setup ===<br />
<br />
'''Hardware:'''<br />
<br />
* '''CPU''': Ryzen 7 2700x @ stock (PBO)<br />
* '''Motherboard''': MSI B450-A PRO MAX (BIOS/UEFI version: 7B86vM5)<br />
* '''GPU1 (Guest)''': MSI GeForce GTX 1660 Ti Gaming X 6GB @ Stock<br />
* '''GPU2 (Host)''': AsRock RX 570 8GB @ Stock<br />
* '''RAM''': 2 x 16GB @ (currently) 2666MHz<br />
<br />
'''Configuration:'''<br />
<br />
* '''Guest OS''': Windows 10 Home<br />
* '''Kernel''': 5.4.17-1-MANJARO vanilla, no ACS patch<br />
* '''libvirt 5.10.0/QEMU 4.2.0''': [https://gist.github.com/jp1995/7427b00eae14aba91a6ee2ab0d17df0a/ win10.xml gist]<br />
<br />
'''Issues I have encountered:'''<br />
<br />
The main issue that plagued me for a while was stuttering / heavy performance loss while simultaneously running processes (read: 30 Firefox tabs and a Twitch stream) on the host. I also had crashes, occurring more often in demanding games and less often when the host was as idle as possible. I finally solved this by lowering my RAM speed from 3466MHz to 2666MHz. I have had no crashes in 2 days of gaming, and the performance loss when using the host is also less significant. I will try bumping the RAM speed back up step by step to find the point of instability and will edit this once I have found it.<br />
<br />
'''Describing setup loosely:''' <br />
<br />
* On the hardware side, my 620 Watt PSU is perfectly adequate, despite some early concerns. <br />
* 16 PCIe lanes for the guest card, 4 for the host card. 8+8 is also an option, but I have not had the need to try it.<br />
* Regarding the VM setup, I pinned and isolated 12 logical processors, leaving 4 to the host. The isolation was achieved using [https://rokups.github.io/#!pages/gaming-vm-performance.md/ these scripts.] I needed the git version of cpuset for it to work. The pinning alone didn't change performance at all.<br />
* Audio passthrough is done through the usual PulseAudio solution with the ICH9 device; I have no demonic interference and it works almost perfectly. I do have to plug my headset directly into the VM when I want my mic to not sound like garbage.<br />
* I did try enabling MSI on the GPU in an attempt to fix the crashes described above, but all I got was a small yet noticeable reduction in performance. <br />
* Regarding input, I got a bit lucky: my motherboard has two USB 3 ports all alone in a single IOMMU group. I got a 4-port USB switch, and my only complaint with it is that it sometimes does not pick up my mouse when switching back to the host.<br />
* No trouble at all getting the NVIDIA gpu to run in a VM, used the general solution in the wiki, including <kvm><hidden state='on'/></kvm><br />
* As for storage, I just gave the VM a whole raw SATA SSD. Benchmarking shows about a 50% performance drop, but I haven't really noticed significantly longer loading times in games. In the future I might try reinstalling windows on a virtual image for cloning purposes and use the SSD as a game drive.<br />
* All in all, there is about a 10% performance loss in CPU intensive games, compared to bare metal. This is acceptable and I'm pretty happy with the system :)<br />
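The pinning described above was done with cpuset scripts, but the same mapping can be applied at runtime through virsh's vcpupin. The following is only a sketch - the domain name win10 and the exact 2700X thread split (logical CPUs 2-7 and 10-15 to the guest, leaving 0, 1, 8, 9 to the host) are illustrative assumptions, not this setup's actual layout:

```shell
#!/bin/sh
# Sketch: pin each guest vCPU to one host logical CPU via virsh vcpupin.
# DOMAIN and the CPU list below are illustrative assumptions.
DOMAIN=win10
i=0
for cpu in 2 3 4 5 6 7 10 11 12 13 14 15; do
    # Real invocation: virsh vcpupin "$DOMAIN" "$i" "$cpu"
    echo "virsh vcpupin $DOMAIN $i $cpu"   # echoed so the script is safe to dry-run
    i=$((i + 1))
done
```

Note that on an SMT CPU the sibling threads of a core should both go to the same side (host or guest), otherwise the guest and host fight over shared core resources.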
<br />
=== zane's not working box ===<br />
<br />
Hardware:<br />
<br />
* '''MacBook Pro 11,x''' (2014 Model)<br />
* '''CPU''': Intel Core i7-4770HQ<br />
* '''Motherboard''': Apple<br />
* '''GPU''': Iris Pro 5200 for host, GTX 1660 eGPU over Thunderbolt 2 for guest<br />
* '''RAM''': 16GB<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': linux-vfio 5.5.8 from the AUR<br />
* '''qemu''': 4.2.0<br />
* '''libvirt''': 5.10.0 <br />
* '''ovmf''': 1:r26976.bd85bf54c2<br />
* '''libvirt/QEMU''': [https://gist.github.com/xzn/ef338049c91d21e9c1900982b21d9d32 libvirt setup]; [https://gist.github.com/xzn/06760e0e7df6ca325d0f05979aeff3bd qemu setup]<br />
<br />
Description:<br />
* The qemu script includes lines for setting up a device-mapped file for raw disk access. 3D performance is about 40% to 80% of native depending on the application, with periodic lag spikes/stutter.<br />
<br />
Issues:<br />
* Use [https://github.com/0xbb/apple_set_os.efi apple_set_os.efi] or {{ic|spoof_osx_version}} with [https://www.rodsbooks.com/refind/configfile.html rEFInd] to avoid a black screen on start. This prevents the Apple firmware from shutting down the host iGPU when booting Linux/Windows.<br />
* CPU pinning for the guest is mandatory, as it removes the majority of stutters. After that, isolate host CPU cores and pin the emulator/IO threads as well. [https://github.com/PiMaker/Win10-VFIO/blob/master/qemu_fifo.sh Pi's script] for pinning IRQ handlers also helps, as do hugepages for memory.<br />
* Kernel parameters: {{ic|1=intel_iommu=on iommu=pt pcie_acs_override=downstream pci=realloc vfio-pci.ids=10de:2184,10de:1aeb,10de:1aec,10de:1aed,8086:0d01,8086:156d,8086:156c isolcpus=0-5 nohz_full=0-5 rcu_nocbs=0-5 default_hugepagesz=1G hugepagesz=1G hugepages=12 mitigations=off pcie_aspm=off module_blacklist=nvidia audit=0 loglevel=3 quiet}}. Everything from {{ic|1=mitigations=off}} onward is optional. {{ic|1=pci=realloc}} is mandatory, or you will get the {{ic|NVRM: This PCI I/O region assigned to your NVIDIA device is invalid: NVRM: BAR1 is 0M @ 0x0 (PCI:0000:0a:00.0)}} error in dmesg and Error 43 for the Nvidia driver in the guest.<br />
* Add {{ic|vfio_pci vfio vfio_iommu_type1 vfio_virqfd}} to your {{ic|mkinitcpio.conf}} as normal. Add {{ic|1=options kvm ignore_msrs=1}} and {{ic|1=options kvm report_ignored_msrs=N}} to your {{ic|/etc/modprobe.d/kvm.conf}} as well.<br />
* For me, the ACS override patch is mandatory; it is included in linux-vfio from the AUR.<br />
* Enabling MSI for the guest GPU seemingly helps. Using an {{ic|ioh3420}} device and passing through the GPU on top of it does NOT seem to help, while making PulseAudio crackle badly. Setting {{ic|1=mixing-engine=off}} for PulseAudio also makes it crackle badly, so consider a USB sound card if needed (I personally use the sound output on my monitor from the guest). While I am not sure what this option does, setting {{ic|in.buffer-length}} on the PulseAudio audiodev reduces crackling.<br />
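The modprobe options mentioned above can be persisted with a couple of lines. This sketch writes to a local file so it is safe to dry-run; on a real system, point OUT at /etc/modprobe.d/kvm.conf and run as root:

```shell
#!/bin/sh
# Sketch: persist the KVM MSR options from the list above.
OUT=${OUT:-./kvm.conf}   # on a real system: OUT=/etc/modprobe.d/kvm.conf
cat > "$OUT" <<'EOF'
options kvm ignore_msrs=1
options kvm report_ignored_msrs=N
EOF
grep -c '^options kvm' "$OUT"   # should print 2
```

ignore_msrs=1 makes KVM silently ignore guest accesses to unhandled MSRs instead of injecting a fault, and report_ignored_msrs=N keeps those ignored accesses out of the kernel log.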
<br />
=== Muata's VFIO setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': i7 4790<br />
* '''Motherboard''': MSI B85M-G43 BIOS/UEFI Version: V3.9 (03/30/2015)<br />
* '''GPU''': NVIDIA GeForce GTX 1060 6GB (MSI Gaming+)<br />
* '''RAM''': 16GB<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': linux-ck 5.5.11-1<br />
* Using '''libvirt/QEMU''': [https://github.com/Muata/VFIO VFIO setup];<br />
* '''qemu''': 4.2.0<br />
* '''libvirt''': 5.10.0 <br />
* No issues at the moment of writing this.<br />
<br />
> I had some issues with the network - for example, I could not connect to Activision game servers (CoD: MW, Overwatch) - but I changed the firewall profile from public to private and everything is good for now.<br />
<br />
> At first I had Windows on a .raw image and the disk was throttling a lot, so I set up RAID0 on my 2 HDDs, then created 3 partitions with LVM - 120GB for Windows, 700GB for data (games), 700GB for Linux data - and passed two of the partitions through as VirtIO-BLK. [https://wiki.archlinux.org/index.php/Software_RAID_and_LVM RAID&LVM]<br />
<br />
> Audio passthrough is done through the usual PulseAudio solution, works nicely.<br />
<br />
> For those wondering how to pass through the GPU itself - it is not obvious when you are doing it for the first time, and it is not spelled out on the wiki: once you have bound the correct group of vfio-pci.ids, the easiest way is to add the card in Virtual Machine Manager via Add Hardware - PCI Host Device - your graphics card (for me it was 0000:01:00:0 NVIDIA Corporation GP106 [GeForce GTX 1060 6GB]).<br />
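To find the PCI address to select in Virtual Machine Manager (and to check that the card is alone in its IOMMU group), a small shell sketch along the lines of common VFIO guides:

```shell
#!/bin/sh
# Sketch: print every PCI device together with its IOMMU group.
# Find your GPU's own address first with: lspci -nn | grep -i nvidia
list_iommu_groups() {
    for g in /sys/kernel/iommu_groups/*/devices/*; do
        [ -e "$g" ] || continue   # glob did not match: IOMMU disabled or unsupported
        group=$(echo "$g" | cut -d/ -f5)   # field 5 of /sys/kernel/iommu_groups/<N>/devices/<addr>
        echo "IOMMU group $group: ${g##*/}"
    done
}
list_iommu_groups | sort -V
```

Every device in the GPU's group (including its HDMI audio function) must be bound to vfio-pci and passed through together, which is why checking the group layout first saves time.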
<br />
== Adding your own setup ==<br />
<br />
Add a new section with your nickname, CPU, motherboard and GPU models, then copy and paste this template to your section:<br />
<br />
{{bc|<nowiki><br />
Hardware:<br />
<br />
* '''CPU''': <br />
* '''Motherboard''': (Revision , BIOS/UEFI Version: )<br />
* '''GPU''': <br />
* '''RAM''': <br />
<br />
Configuration:<br />
<br />
* '''Kernel''': Kernel version (vanilla/CK/Zen/ACS-patched or not).<br />
* Using '''libvirt/QEMU''': link to domain XMLs/scripts/notes (Git repo preferred).<br />
* Issues you have encountered, special steps taken to make something work a bit better, etc.<br />
* Describe your setup loosely here, so that when other wiki users are looking for something, they can easily skim through available setups.<br />
</nowiki>}}<br />
<br />
Replace proper sections with your own data. Make sure to provide the exact motherboard model, revision (if possible - should be on both the motherboard itself and the box it came in) and BIOS/UEFI version you are using. Describe your exact software setup and add a link to your configuration files. (GitHub, GitLab, BitBucket, etc can host a public repository which you may update once in a while, but uploading them to pastebins is fine, too. '''Do not''' post the entire config file contents here.)</div>
<hr />
<div>[[Category:Virtualization]]<br />
As PCI passthrough is quite tricky to get right (both on the hardware and software configuration sides), this page presents '''working, complete''' VFIO setups. Feel free to look up users' scripts, BIOS/UEFI configuration, configuration files and specific hardware. If you have a problem, it might have been stumbled upon by other VFIO users and fixed in the examples below.<br />
<br />
{{note|If you have got VFIO working properly, please post your own setup according to the template on the bottom.}}<br />
<br />
== Users' setups ==<br />
<br />
=== mstrthealias: Intel 7800X / X299, GTX 1070 ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel(R) Core(TM) i7-7800X CPU <br />
* '''Motherboard''': ASRock X299 Taichi (Revision: A, BIOS/UEFI Version: 1.60A)<br />
* '''GPU''': Asus STRIX GTX 1070<br />
* '''RAM''': 32GB DDR4<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': Kernel version 4.14.8-1-skx (patched crystal_khz=24000).<br />
** Custom patches:<br />
*** skylakex-crystal_khz-24000.patch (see below)<br />
** Patches used from linux-ck:<br />
*** enable_additional_cpu_optimizations_for_gcc_v4.9+_kernel_v4.13+.patch<br />
*** 0001-add-sysctl-to-disallow-unprivileged-CLONE_NEWUSER-by.patch<br />
*** 0001-e1000e-Fix-e1000_check_for_copper_link_ich8lan-retur.patch<br />
*** 0002-dccp-CVE-2017-8824-use-after-free-in-DCCP-code.patch<br />
** Config:<br />
*** PREEMPT, NO_HZ_IDLE, 300HZ, MSKYLAKE<br />
* GitHub: Link TBD<br />
* Benchmarks: https://imgur.com/a/hIfQD<br />
* Using '''libvirt/QEMU''': libvirt 3.10.0 / QEMU 2.11.0<br />
* Issues you have encountered, special steps taken to make something work a bit better, etc.<br />
** Skylake-X default clock incorrect in 4.14.8 (https://bugzilla.kernel.org/show_bug.cgi?id=197299)<br />
*** Was unable to resolve timing issue using adjtimex<br />
*** Patching kernel source to '''crystal_khz = 24000''' resolved timing/performance issues<br />
** Enable 'Intel SpeedShift' in BIOS, installed '''cpupower'''', set governor='performance'<br />
*** Verify: dmesg|grep HWP<br />
**** intel_pstate: HWP enabled<br />
** Enable HT in BIOS<br />
** Enable 'deadline' IO sceduler:<br />
*** echo 'ACTION=="add|change", KERNEL=="sd*[!0-9]|sr*", ATTR{queue/scheduler}="deadline"' >> /etc/udev/rules.d/60-schedulers.rules<br />
** Bypass x2apic opt-out:<br />
*** GRUB_CMDLINE_LINUX="... intremap=no_x2apic_optout ..."<br />
** Isolate cores for Windows VM:<br />
*** GRUB_CMDLINE_LINUX="... isolcpus=2-5,8-11 nohz_full=2-5,8-11 rcu_nocbs=2-5,8-11 ..."<br />
** Use hugepages (2MB) for all VM memory allocation<br />
** memoryBacking: <hugepages/><nosharepages/><locked/><access mode='private'/><allocation mode='immediate'/><br />
** Extracted rom from GPU; used for <rom file=../> config<br />
** Using MSI for GPU and GPU Audio (configured in Windows registry; FPS seems same as using line-based interrupts)<br />
* Hardware setup<br />
** PCIE1: NVIDIA GeForce GT 710B (for host)<br />
** Onboard: ASRock XHCI 3.1 USB (for host)<br />
** Onboard: Intel I219 NIC (bridged)<br />
** PCIE3: Asus Xonar STX (passthrough to Win10)<br />
** PCIE5: NVIDIA GeForce GTX 1070 (passthrough to Win10)<br />
** M2_1: Samsung 960 EVO 500GB (passthrough to Win10)<br />
** Onboard: Intel XHCI USB 3.0 (passthrough to Win10)<br />
** Onboard: Intel HDA (passthrough to Win10)<br />
** Onboard: Intel I211 NIC (passthrough to Win10)<br />
** Onboard: ASRock AHCI SATA A1/A2 (passthrough to Linux)<br />
<br />
=== DragoonAethis: 6700K, GA-Z170X-UD3, GTX 1070 ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i7-6700K (using iGPU as the host GPU)<br />
* '''Motherboard''': Gigabyte GA-Z170X-UD3 (Revision 1.0, BIOS/UEFI Version: F23d)<br />
* '''GPU''': MSI GeForce 1070 Gaming X (10Gbps)<br />
* '''RAM''': 16GB DDR4 2400MHz<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': "Vanilla" Linux (no ACS patch needed).<br />
* Using '''libvirt''': XML domain, helper scripts, IOMMU groups, etc available in [https://github.com/DragoonAethis/VFIO my VFIO repository].<br />
* '''Guest OS''': Windows 8.1 Pro.<br />
* The entire HDD is passed to the VM as a raw device (formatted as a single NTFS partition).<br />
* USB keyboard and mouse are passed to the guest VM and shared with the host with Synergy.<br />
* Virtualized audio: PulseAudio -> local Unix socket. Previously, I've had a bit more complex setup in which PA on the host was configured to accept TCP connections, and the envvars required for QEMU to use PA were pointed at the PA server running on 127.0.0.1. This way it was not required to change the QEMU user (exact details in the repo), but introduced other minor issues I've resolved later.<br />
* Bridged networking (with NetworkManager's and [https://www.happyassassin.net/2014/07/23/bridged-networking-for-libvirt-with-networkmanager-2014-fedora-21/ this tutorial's] help) is used. {{ic|bridge0}} is created, {{ic|eth0}} interface is bound to it. STP disabled, VirtIO NIC is configured in the VM and that VM is seen in the network just as any other computer (and is being assigned an IP address from the router itself, can communicate freely with other computers).<br />
* For some reason, enabling intel_iommu=on on the kernel cmdline without CSM support enabled in UEFI causes a black screen on boot. Enable it (Windows 8/10 features need to be enabled to show "CSM Support", selecting "Other OS" hides that).<br />
<br />
=== Manbearpig3130's Virtual Gaming Machine ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i7-6850K 3.6GHz<br />
* '''Motherboard''': Gigabyte x99-Ultra Gaming (Revision 1.0, BIOS/UEFI Version: F4)<br />
* '''Host GPU''': AMD Radeon HD6950 1GB<br />
* '''Guest GPU''': AMD R9 390 8GB<br />
* '''RAM''': 32GB G-Skill Ripjaws DDR4 runing at 3200MHz<br />
<br />
Configuration:<br />
<br />
* '''Host Kernel''': Kernel version Linux 4.7.2-1.<br />
* Using '''libvirt QEMU/KVM with OVMF''': link to domain XMLs/scripts/notes: https://github.com/manbearpig3130/MBP-VT-d-gaming-machine<br />
* '''Host OS''': Arch Linux<br />
* '''Guest OS''': Windows 10 Pro<br />
* 2x 480GB SSDs set up in LVM striped mode (with mdadm) formatted to ext4 are mounted in linux which contains the guest's qcow2 virtual VirtIO disk file.<br />
* USB Host controller is passed through, giving most USB ports to the VM, leaving my USB 3.1 controller with attached USB hub for the host.<br />
* Motherboard has two NICs, one is passed into VM (Works perfectly after installing Killer NIC Driver).<br />
* VM gets dedicated 16GB RAM via static hugepages.<br />
* CPU pinning increased performance considerably.<br />
* Windows boots straight into Steam big picture mode on primary display (43" Sony Bravia). Overall an awesome gaming machine that meets my gaming needs and lust for GNU/Linux at the same time.<br />
* '''Quirks''':<br />
* I sometimes have to reinstall the AMD drivers in Windows to get HDMI audio working properly, or roll back to Windows HDMI driver. I normally use a USB headset which works fine anyway.<br />
<br />
=== Bretos' Virtual Gaming Setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i7-7700k<br />
* '''Motherboard''': Z270 GAMING M3 (MS-7A62)<br />
* '''GPU''': ASUS GeForce GTX960<br />
* '''RAM''': Kingston HyperX 3x8GB DDR4 2.4GHz<br />
* '''Storage''': 2x Corsair MP500 m.2 240G SSDs in mdadm RAID0, 1x WD Black 1TB for storage. 100GB LVM volume as writeback cache for HDD <br />
<br />
Configuration:<br />
<br />
* '''Kernel''': vanilla<br />
* '''Host OS''': Arch Linux<br />
* '''Guest OS''': Windows 10 Pro<br />
* Using '''libvirt/QEMU''': GitHub config repository: [https://github.com/Bretos/vfio]<br />
* Issues you have encountered: AUDIO. Had to get USB audio adapter and pass it through.<br />
* No issues other than audio. Works like a charm.<br />
<br />
=== Skeen's Virtual Gaming Rack Machine ===<br />
<br />
Still work in progress.<br />
<br />
Hardware:<br />
<br />
* '''CPU''': AMD FX(tm)-8350<br />
* '''Motherboard''': MSI 970A SLI Krait Edition (MS-7693) (Revision 5.0, BIOS/UEFI Version: 25.4)<br />
* '''Host GPU''': ASUS GeForce GTX 480 1536MB<br />
* '''Guest GPU''': ASUS GeForce GTX 480 1536MB<br />
* '''RAM''': 2x8GB Kingston HyperX Fury White DDR3 1866MHz<br />
* '''Storage''': 2x250GB Samsung EVO (MZ-75E250) set up in LVM striped mode (with mdadm), 2x1TB WD Blue (WDC_WD10SPCX) for storage. 250GB LVM volume as writeback cache for HDD.<br />
<br />
Configuration:<br />
<br />
* '''Host Kernel''': Linux 4.9.0-3 (No ACS)<br />
* '''Host OS''': Debian Stretch<br />
* '''Guest OS''': Windows 10 Home (10_1703_N, International Edition)<br />
* Using '''libvirt QEMU/KVM with OVMF''': See [https://github.com/Skeen/libvirt-gpu-passthrough Github]<br />
<br />
Issues you have encountered:<br />
<br />
* Identifical GPUs; solved [[PCI_passthrough_via_OVMF#Using_identical_guest_and_host_GPUs|using this section on the wiki]], but with the script from the [[Talk:PCI_passthrough_via_OVMF#Using_identical_guest_and_host_GPUs_-_did_not_work_for_me.|corresponding discussion page]]. Several adaptations for Debian were required too, but are not applicable for this forum.<br />
* "Error 43: Driver failed to load";<br />
** Spoofing vendor_id caused Windows to crash during boot-up.<br />
** Linux VMs complain unable to find GPU from Grub2, and booted into blind-mode, but would still pick up the graphics card during the boot process, and would remain functional until VM reboot.<br />
** Vendor_id spoofing turned out to work after solving the real problem (Missing UEFI compatability in VBIOS).<br />
* Missing UEFI (OVMF) compatability in VBIOS;<br />
** Requested a GOP/UEFI compatible VBIOS upgrade from ASUS, but ASUS could neither understand the request, or provide the upgrade (The only thing supplied was standard support answers).<br />
** No compatible VBIOS was found at [https://www.techpowerup.com/vgabios/ TechPowerUp].<br />
** Finally solved by manually hacking GOP/UEFI support into the ROM, using [http://www.win-raid.com/t892f16-AMD-and-Nvidia-GOP-update-No-requests-DIY.html GOPupd]. Current rom was dumped within a Windows 10 VM using GPU-Z, then modified using GOPupd, pulled to Linux, and provided using the rom file parameter in the VM XML file.<br />
* VM only uses one core (even with mode=host-passthrough): solved [[PCI_passthrough_via_OVMF#VM_only_uses_one_core|using this section on the wiki]].<br />
<br />
Quirks:<br />
<br />
* The GPU that is being passed through, [[PCI_passthrough_via_OVMF#Passing_through_a_device_that_does_not_support_resetting|does not support resetting]], and thus doing a hard-reboot / shutdown of the VM locks the GPU.<br />
** The VM cannot be started again unless the Host machine is rebooted.<br />
*** When doing a clean reboot / shutdown, allows the VM to start up as expected without reboot..<br />
** Removing and rescanning the PCI device, does not change anything.<br />
** No further attempts at powercycling the GPU from the host has been done (Yet).<br />
* [[PCI_passthrough_via_OVMF#Passing_VM_audio_to_host_via_PulseAudio|Passing VM audio to host via PulseAudio]] results in heavy crackling.<br />
** Using [[PCI_passthrough_via_OVMF#Slowed_down_audio_pumped_through_HDMI_on_the_video_card|Message-Signaled Interrupts]] have not been attempted (Yet).<br />
<br />
=== droserasprout poor man's setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i3-6100<br />
* '''Motherboard''': ASRock H110M2 D3 (BIOS version 0603)<br />
* '''Host GPU''': Intel HD 530<br />
* '''Guest GPU''': Sapphire Radeon R7 360<br />
* '''RAM''': Apacer 8Gb 75.C93DE.G040C, Kingston 4Gb 99U5401-011.A00LF<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': linux-lts 4.9.67-1 (vanilla)<br />
* '''Host OS''': Arch Linux<br />
* '''Guest OS''': Windows 10 Pro 1709 (build 16299.98)<br />
* Using '''libvirt/QEMU''': See my configs and IOMMU groups on [https://github.com/droserasprout/win10-vfio-configs Github]<br />
* HDD partition is passed to the VM as a raw virtio device.<br />
* HD Audio is passed too. Works fine with both playing and recording, no latency issues or glitches. After VM is powered off host audio works fine too.<br />
* Guest's latency is slightly better when CPU cores are isolated for VM.<br />
* i2c-dev module added to bypass 'EDID signature' error when switching HDMI. Without it I had to switch video output before starting VM for some reason.<br />
* intremap=no_x2apic_optout kernel option added to bypass motherboard firmware falsely reporting x2APIC method is not supported. Seems to have a strong influence on the guest's latency.<br />
* Overall performance is pretty close to the native OS setup.<br />
<br />
=== prauat: 2xIntel(R) Xeon(R) CPU E5-2609 v4, 2xGigabyte GeForce GTX 1060 6GB G1 Gaming, Intel S2600CWTR ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''':2xIntel(R) Xeon(R) CPU E5-2609 v4 <br />
* '''Motherboard''': Intel S2600CWTR(Revision ???, BIOS/UEFI Version: SE5C610.86B.01.01.0022.062820171903)<br />
* '''GPU''': 2xGigabyte GeForce GTX 1060 6GB G1 Gaming [GeForce GTX 1060 6GB] (rev a1)<br />
* '''RAM''': Samsung M393A2G40EB1-CPB 2133 MHz 64GB (4x16GB)<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': Linux 4.14.15-1-ARCH #1 SMP PREEMPT<br />
* Using '''libvirt/QEMU''': https://github.com/prauat/passvm/blob/master/generic.xml<br />
* Most important:<br />
* When using nvidia driver hide virtualization to guest <kvm><hidden state='on'/></kvm><br />
* Configuration works with Arch Linux guest os, still work in progress.<br />
<br />
=== Dinkonin's virtual gaming/work setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel(R) Core(TM) i7-7700K CPU @ 4.60GHz<br />
* '''Motherboard''': MSI Z270 GAMING PRO CARBON (MS-7A63) BIOS Version: 1.80<br />
* '''GPU''': 1x Gigabyte GeForce GTX 1050 2GB (host), 1x MSI GeForce 1080 AERO 8GB(guest)<br />
* '''RAM''': 32GB DDR4<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': Kernel version linux 4.15.2-2-ARCH.<br />
* Using '''libvirt/QEMU (patched from AUR) with OVMF'''<br />
* Installed qemu-patched from AUR because of crackling/delayed sound with pulseaduio (still hear ocasional pops/clicks while gaming.<br />
* Patched video bios with https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher, because of error: <br />
vfio-pci 0000:01:00.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0xffff<br />
* Single monitor setup, implemented full software KVM(for host and guest) described here: https://rokups.github.io/#!pages/full-software-kvm-switch.md<br />
<br />
=== pauledd's unexeptional setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i7 6700K<br />
* '''Motherboard''': Gigabyte GA-Z170N-WIFI Retail (Revision 1.0 , BIOS/UEFI Version: F20)<br />
* '''GPU''': 8GB Palit GeForce GTX 1070 Dual Aktiv PCIe 3.0 x16 (Retail)<br />
* '''RAM''': 16GB G.Skill RipJaws V DDR4-3200 DIMM CL16 Dual Kit<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': 4.15.2-gentoo<br />
* Using '''libvirt/QEMU''': libvirt-4.0.0, qemu-2.11.1, https://github.com/pauledd/GPU-Passthrough/blob/master/win10-2.xml , using vfio kernel module<br />
* Had to dump VBIOS in at the host while GPU was normally attached (and drivers loaded) (see https://stackoverflow.com/a/42441234), had to set CPU settings manually according to my cpu (host-passthrough, sockets 1, cores: 4, threads: 2 ) or some games will regularly crash, see my xml how to insert vbios, still have audio clicking/lag with pulseaudio but thats ok for me, no further patching etc.. works out of the box without any issues.<br />
* 3DMark Results Time Spy Graphic Score: Native Windows 10: 5564 , GPU-Passthrough: 5541<br />
<br />
=== hkk's Windows gaming machine (6700K, 1070, 16GB) ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i7-6700K 4.5GHz<br />
* '''Motherboard''': AsRock Fatality Gaming K6 Z170 (rev. 1.05)<br />
* '''Host GPU''': Intel GPU HD530 with 1GB shared memory<br />
* '''Guest GPU''': Gigabyte GeForce GTX1070 G1 Gaming 8GB<br />
* '''RAM''': 16GB G.Skill RipjawsV @ 3333 MHz CL14-15-15-31-2T [DDR4]<br />
<br />
Configuration:<br />
<br />
* '''Host Kernel''': Kernel version Linux 4.15.7-1-vfio (with ACS patch included).<br />
* Using '''libvirt QEMU/KVM with OVMF'''<br />
* '''Host OS''': Arch Linux<br />
* '''Guest OS''': Windows 10 Pro<br />
* 128GB Intel 600p SSD splited into 3 partitions: 512MB for EFI, 30GB for / in Btrfs and other gigs for Windows 10 installed straight on SSD.<br />
* Two more HDDs for Windows. 1TB and 650GB<br />
* Passed specific devices like X360 and some of single USB ports.<br />
* One NIC behind NAT on VM machine. <br />
* VM gets dedicated 8GB RAM via static hugepages.<br />
* CPU pinning increased performance considerably and machine gets 4/4 cores of my 4/8 CPU<br />
* Windows boots on second screen with simple script which shutting down display with xrandr.<br />
* Using Synergy to share mouse and keyboard between systems.<br />
* '''Quirks''':<br />
* Synergy is not perfect and will not entirely work in some games.<br />
* No boot screen. Display is turning on only when Windows is up and ready to go.<br />
<br />
=== sitilge's treachery ===<br />
<br />
Full info: https://git.sitilge.id.lv/sitilge/dotfiles<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel Core i5 6600K<br />
* '''Motherboard''': Asus Z170i<br />
* '''GPU''': Gigabyte Radeon RX460 OC 2GB<br />
* '''Storage''': Samsung 850 EVO 500GB<br />
* '''RAM''': Corsair 16GB DDR4<br />
* '''Mouse, Keyboard''': Logitech M90, Vortex Pok3r<br />
<br />
Host Configuration:<br />
<br />
* '''Kernel''': linux-vfio<br />
* '''Packages''': qemu-git, virtio-win, ovmf<br />
<br />
Guest Configuration:<br />
<br />
* '''OS''': Windows 10 Pro<br />
* '''CPU''': host<br />
* '''Motherboard''': host<br />
* '''GPU''': passthrough<br />
* '''Storage''': 64GB<br />
* '''RAM''': 8GB<br />
* '''Mouse, Keyboard''': passthrough<br />
<br />
Notes:<br />
<br />
* You can easy simlink the config files using {{ic|stow -t / boot mkinitcpio}} and then {{ic|mkinitcpio -p linux-vfio}}.<br />
* {{ic|-smp cores&#61;4}} - guest might utilize only one core otherwise.<br />
* {{ic|-soundhw ac97}} - I'm passing mobo audio thus ac97. Download, unzip and install the Realtek AC97 drivers within a guest.<br />
* Use virtio drivers for both block devices and network. For example, the ping went down from 250 to 50.<br />
* Mouse and keyboard passthrough solved the terrible lag problem which was present in emulation mode.<br />
* Make sure virtualization is supported and enabled in your firmware (UEFI). The option was hidden in a submenu in my case.<br />
* As trivial as it sounds, check your cables.<br />
* Be patient - it took more than 10 minutes for the guest to recognize the GPU.<br />
<br />
=== chestm007's hackery ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Ryzen 7 1800x<br />
* '''Motherboard''': Asus ROG Crosshair VI (Revision 1, BIOS/UEFI Version: 3502)<br />
* '''GPU''': Asus ROG RX480oc 8GB<br />
* '''RAM''': 32gb Ripjaws 2400mhz<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': 4.16.12-1-ARCH.<br />
* Using '''libvirt/QEMU''': libvirtd (libvirt) 4.3.0, QEMU emulator version 2.12.0, <br />
<br />
Notes: <br />
<br />
* using ic6 audio - works fine for me.<br />
* have a working looking-glass setup, however cant get spice to pass through keyboard and mouse, currently using a mixture of synergy and a dedicated screen as a workaround<br />
<br />
=== Eduxstad's Infidelity ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Ryzen 2600X @ 3.7 GHZ <br />
* '''Motherboard''': ASUS PRIME B350-PLUS(BIOS/UEFI Version: 4011)<br />
* '''GPU1 (Guest)''': MSI 390 8GB @ Stock<br />
* '''GPU2 (Host)''': XFX 550 4GB @ Stock<br />
* '''RAM''': 2 x 8GB (16GB) @ 3000 HZ<br />
* '''Guest OS''': Windows 8.1 Embedded Pro<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': 4.17.3-1-ARCH (vanilla).<br />
* Using '''libvirt/QEMU''': libvirt/virt-manager (https://github.com/eduxstad/vfio-config).<br />
* Look in the repository for complete documentation of extra steps taken<br />
* Overview: VM managed using virt-manager, using looking glass for primary io and built in spice display server as backup. Passing vm audio back to pulseaudio. Using hugepages for RAM. SCSI Drivers installed for hardware drive support.<br />
<br />
=== Pi's vr-vm ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': i7-8700k @ 4.8 GHz<br />
* '''Motherboard''': MSI Gaming Pro Carbon (BIOS/UEFI Version: A.40/5.12)<br />
* '''GPU''': Palit RTX 2080 Ti<br />
* '''RAM''': 4x8GB G.Skill DDR4 @ 3000 MHz<br />
<br />
Configuration:<br />
<br />
* Kernel: latest mainline (rc if available)<br />
** custom built with ZFS, WireGuard<br />
** ''CONFIG_PREEMPT_VOLUNTARY=y'' to work around QEMU bug with long guest boot times<br />
* Startup scripts/additional info: https://github.com/PiMaker/Win10-VFIO<br />
* Issues encountered:<br />
** PUBG would not launch at all<br />
*** Solution: Enable the HyperV clock with <timer name='hypervclock' present='yes'/> and disable hpet with <timer name='hpet' present='no'/><br />
** VR would start to stutter badly after about 20-30 minutes of playtime (this one took me about 2 weeks to finally figure out :-)<br />
*** Solution:<br />
**** Enable invariant tsc passthrough with <feature policy='require' name='invtsc'/> (required even if using host-passthrough!)<br />
**** Enable MSI for the GPU (using tool from [https://forums.guru3d.com/threads/windows-line-based-vs-message-signaled-based-interrupts-msi-tool.378044/ here])<br />
**** Enable vAPIC and synic in the HyperV configuration<br />
**** Manually move all IRQs to host cores using qemu_fifo.sh script from my GitHub repo above<br />
* Overview: SteamVR-capable gaming and workstation rig, passing through NVIDIA GPU and onboard USB-controller (leaving an additional ASMedia USB port to the host). 22 GB hugepages memory, 10 of 12 cores (with SMT) passed through. Audio working via Scream (https://github.com/duncanthrax/scream) - with IVSHMEM, surprisingly low latency and no stutters.<br />
<br />
=== coghex's gaming box ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': i7-8086k @ 5.0 GHZ (8086k is just a binned 8700k)<br />
* '''Motherboard''': GIGABYTE Z370 AORUS Gaming 7 rev1.0 (BIOS/UEFI Version: F15a)<br />
* '''GPU''': GIGABYTE GV-N108TAORUSX WB-11GD AORUS GeForce GTX 1080 Ti Waterforce WB Xtreme Edition 11G @ ~2Ghz<br />
* '''RAM''': 4 x 8GB (32GB) Corsair Dominator Platinum @ 3600 HZ (XMP)<br />
Configuration:<br />
<br />
* '''Kernel''': linux-zen-5.5.8.zen1-1<br />
* '''Modules''': raid0 raid1 md_mod ext4 vfat ahci vfio_pci vfio vfio_iommu_type1 vfio_virqfd usbhid it87 (aur version is unmaintained and the support for the ITE8686E chip on this board is limited, replace it87 source with that which is found [https://github.com/andreychernyshev/it87-8613E here] for more comprehensive support)<br />
* '''Virsh''': virsh-5.10.0<br />
* '''Qemu''': qemu-system-x86_64-4.2.0 machine='pc-i440fx-4.2'<br />
* '''Performance Services''': [[Improving_performance#irqbalance|irqbalance-1.6.0]], [[Improving_performance#Ananicy|ananicy-git-2.1.0.r22]], [[CPU_frequency_scaling#cpupower|cpupower 5.5-1]]<br />
* EDIT (2020): much has changed since this setup was first posted; a custom kernel is no longer needed on this hardware and everything works perfectly...<br />
* scripts, libvirt XML, and personal configs can be found here: https://github.com/coghex/hoest<br />
* host boot options: intel_iommu=on iommu=pt rd.driver.pre=vfio-pci acpi_enforce_resources=lax<br />
* systemd modprobe.d options: kvm ignore_msrs=1 (avoids critical bugs), kvm report_ignored_msrs=N (cleans up journal logs)<br />
* libvirt features: acpi, apic, kvm hidden state='on', vmport state='off'<br />
* guest hyper-v options: hv-relaxed, hv-vapic, hv-spinlocks (retries='8191'), hv-vpindex, hv-runtime, hv-synic, hv-stimer, hv-stimer-direct, hv-reset, hv-vendor_id (value='1234567890ab'), hv-frequencies, hv-reenlightenment, hv-tlbflush, hv-ipi, (hv-evmcs and hv-no-nonarch-coresharing seemingly do not work yet in virsh)<br />
* make sure to use the multifunction attribute for the GPU's HDMI audio controller and set both functions to the same slot, otherwise the audio interrupts will hang. Someone should probably add that to the guide...<br />
* I'm running the clock at 100 Hz; people running it at 1000 Hz with the zen or ck kernel should know that the MuQSS scheduler works the same regardless of this setting, and 1000 Hz just adds more useless interrupts.<br />
* CPU pinning works best for single-VM performance; the default host-passthrough works best for multiple running VMs.<br />
* on Windows, [https://github.com/CHEF-KOCH/MSI-utility MSI_util_v2] has to be re-run after every driver update to restore MSI interrupts on the GPU.<br />
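The multifunction point above can be sketched as a pair of libvirt hostdev entries. The host address 01:00.x and the guest bus/slot used here are hypothetical placeholders; what matters is that both functions share one guest bus/slot and that function 0 carries multifunction='on':<br />

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- the GPU itself, host function 0 -->
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- the HDMI audio controller, host function 1 -->
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x1'/>
  </source>
  <!-- same guest bus/slot as the GPU, function 1 -->
  <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x1'/>
</hostdev>
```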
Hardware Specific:<br />
<br />
* '''Fully-Functional Passthrough Devices''': this motherboard has many PCIe slots; all of the following devices have been working flawlessly with little setup for years now:<br />
** Inatek USB Card: KT5001 [https://www.amazon.com/Inateck-Express-15-Pin-Connector-KT5001/dp/B00FPIMJEW]<br />
** Creative Sound Card: 70SB155000001 [https://www.amazon.com/Creative-Labs-70SB155000001-Blaster-PCI-Express/dp/B01LYT7U99]<br />
** EDUP WiFi Card: AC9636GS (must use virtio USB passthrough for Bluetooth functionality) [https://www.amazon.com/EDUP-3000Mbps-802-11AX-Bluetooth-EP-AC9636GS/dp/B082F5D4SM]<br />
** Intel Optane SSD: SSDPED1D480GASX [https://www.amazon.com/Intel-Optane-900P-480GB-XPoint/dp/B0772T4BVZ]<br />
** Zotac GeForce GT 710: ZT-71304-20L (this one does not seem to be available on Amazon anymore, a shame since it's one of the few high-performance PCIe x1 cards...) [https://www.amazon.com/ZOTAC-GeForce-Profile-Graphic-ZT-71304-20L/dp/B01E9Z2D60]<br />
* none of the proprietary Gigabyte software works; in fact, it blue-screens Windows and installs itself as a startup program, locking you out for good.<br />
* if anyone else uses this exact motherboard: the onboard USB controllers fall into two IOMMU groups, even with the ACS patch. One contains the port labeled "USB 3.1" and the other contains all the remaining USB ports, so if you want to pass through more than a keyboard and mouse, you will need either a USB hub plugged into the 3.1 port (and passed through) or a PCIe USB card.<br />
* the two Ethernet ports are in different IOMMU groups, making this a perfect motherboard for VFIO.<br />
* the ACS patch is needed on this motherboard if you want to use two graphics cards at once in separate IOMMU groups; note that this drops the main GPU to PCIe x8 instead of x16.<br />
<br />
=== Roobre's VFIO setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Intel(R) Core(TM) i5-6600K CPU @ 3.50GHz (OC'd to 4.50 GHz)<br />
* '''Motherboard''': ASUS ROG MAXIMUS VIII GENE, v3801<br />
* '''GPU''': EVGA GTX 1080Ti<br />
* '''RAM''': 32GB DDR4 2400 (2x Ballistix)<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': Latest -ARCH or -zen (4.17.10-1-zen at the time of writing)<br />
* Using '''libvirt/QEMU''': libvirt 4.5.0-1, qemu 2.12.0-2. Config: https://gist.github.com/roobre/d2d20cc638c5030f360b500000da0f88{{Dead link|2020|02|25}}<br />
* '''ZFS''' volumes passed as raw devices for hard drives.<br />
* '''VirtIO all the things!''' Download drivers from https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/<br />
<br />
Issues: <br />
<br />
* PulseAudio never worked well (too much crackling), so I ended up passing through a USB 3.1 PCIe controller and connecting a USB audio card to it. That card is then connected to one of my motherboard's inputs and echoed using PulseAudio's `loopback` module.<br />
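The loopback echo described above amounts to one line of PulseAudio configuration. The source and sink names below are hypothetical examples; list your actual device names with pactl list short sources and pactl list short sinks:<br />

```
# /etc/pulse/default.pa, or loaded once per session with:
#   pactl load-module module-loopback ...
# Echo the motherboard input (fed by the USB card) back out the host's main sink.
load-module module-loopback source=alsa_input.pci-0000_00_1f.3.analog-stereo sink=alsa_output.pci-0000_00_1f.3.analog-stereo latency_msec=20
```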
<br />
* Synergy works really well. In some games (ones that take control of the mouse pointer, e.g. first-person games), you need to lock the mouse cursor to the VM window to avoid issues (the camera moving too fast).<br />
<br />
* Do not forget to add the needed snippet for the nvidia driver to run ([[PCI passthrough via OVMF#"Error 43: Driver failed to load" on Nvidia GPUs passed to Windows VMs]])<br />
<br />
=== laenco's VFIO setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': Ryzen 9 3950X @ 4.15 GHz all-core via PBO<br />
* '''Motherboard''': Asus ROG STRIX X470-F GAMING (BIOS/UEFI Version: 5406)<br />
* '''GPU1 (Guest)''': Palit GeForce GTX 1080 8GB @ Stock<br />
* '''GPU2 (Host)''': MSI RX 570 8GB @ Stock<br />
* '''RAM''': 4 x 16GB (64GB) @ 3333 MHz<br />
<br />
Configuration:<br />
<br />
* '''Guest OS''': Windows 10 Pro<br />
* '''Kernel''': 5.4.13-arch1-1-gc (-ck is also good). No ACS patch.<br />
* Using vanilla '''QEMU 4.2.0'''<br />
* As of 2020-01-20, AMD Ryzen is bugged with the SMP threads option: the VM gets stuck on start.<br />
* Got the classic Nvidia Error 43; fixed it the classic way. Also added some CPU flags that are normally set automatically with kvm=on, found here: https://github.com/qemu/qemu/blob/master/target/i386/cpu.c#L4008<br />
* As pure QEMU has no option to pin vCPU cores and its own threads, I use the Python script "cpu_affinity" (credits to https://github.com/zegelin/qemu-affinity/; a copy is also in my repo). Requires debug-threads=on.<br />
* Using dynamically allocated 2 MB hugepages<br />
* Hardly using VirtIO<br />
* Using a hardware USB switch (an ATEN US224-AT) and a many-to-one HDMI switch, which let me share one monitor, mouse, keyboard and some USB devices, switching them between host and guest at the press of a button.<br />
* The repo with the current major system config and the VM script can be found here: https://github.com/laenco/vfio-config<br />
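Dynamically allocated 2 MB hugepages, as used above, only need a sysctl entry on the host. The page count below is a hypothetical example sized for a 16 GB guest (16 GB / 2 MB = 8192 pages); size it to your guest's memory:<br />

```
# /etc/sysctl.d/40-hugepages.conf
vm.nr_hugepages = 8192
```

QEMU then uses them via -mem-path /dev/hugepages (or a memoryBacking element when running under libvirt).<br />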
<br />
=== Poncho's VFIO setup ===<br />
<br />
'''Hardware:'''<br />
<br />
* '''CPU''': Ryzen 7 2700x @ stock (PBO)<br />
* '''Motherboard''': MSI B450-A PRO MAX (BIOS/UEFI version: 7B86vM5)<br />
* '''GPU1 (Guest)''': MSI GeForce GTX 1660 Ti Gaming X 6GB @ Stock<br />
* '''GPU2 (Host)''': AsRock RX 570 8GB @ Stock<br />
* '''RAM''': 2 x 16GB @ (currently) 2666MHz<br />
<br />
'''Configuration:'''<br />
<br />
* '''Guest OS''': Windows 10 Home<br />
* '''Kernel''': 5.4.17-1-MANJARO vanilla, no ACS patch<br />
* '''libvirt 5.10.0/QEMU 4.2.0''': [https://gist.github.com/jp1995/7427b00eae14aba91a6ee2ab0d17df0a/ win10.xml gist]<br />
<br />
'''Issues I have encountered:'''<br />
<br />
The main issue that plagued me for a while was stuttering / heavy performance loss while simultaneously running processes (read: 30 Firefox tabs and a Twitch stream) on the host. I also had crashes, which occurred more often in demanding games and less often when the host was as idle as possible. I finally solved this by dropping my RAM speed from 3466 MHz to 2666 MHz. I have had no crashes in two days of gaming, and the performance loss when using the host is also less significant. I'll slowly bump the RAM speed back up step by step to find the point of instability and edit this once I've found it.<br />
<br />
'''Describing setup loosely:''' <br />
<br />
* On the hardware side, my 620 Watt PSU is perfectly adequate, despite some early concerns. <br />
* 16 PCIe lanes for the guest card, 4 for the host card. 8+8 is also an option, but I haven't needed to try it.<br />
* Regarding the VM setup, I pinned and isolated 12 logical processors, leaving 4 to the host. The isolation was achieved using [https://rokups.github.io/#!pages/gaming-vm-performance.md/ these scripts.] I needed the git version of cpuset for it to work. The pinning alone didn't change performance at all.<br />
* Audio passthrough is done through the usual PulseAudio solution (ICH9); no demonic interference, works almost perfectly. I do have to plug my headset directly into the VM when I want my mic not to sound like garbage.<br />
* I did try enabling MSI on the GPU in an attempt to fix the crashes described above, but all I got was a small but noticeable reduction in performance.<br />
* Regarding input, I got a bit lucky: my motherboard has two USB 3 ports alone in a single IOMMU group. I got a 4-port USB switch, and my only complaint is that it sometimes doesn't pick up my mouse when switching back to the host.<br />
* No trouble at all getting the NVIDIA GPU to run in a VM; used the general solution in the wiki, including <kvm><hidden state='on'/></kvm><br />
* As for storage, I just gave the VM a whole raw SATA SSD. Benchmarking shows about a 50% performance drop, but I haven't really noticed significantly longer loading times in games. In the future I might try reinstalling windows on a virtual image for cloning purposes and use the SSD as a game drive.<br />
* All in all, there is about a 10% performance loss in CPU intensive games, compared to bare metal. This is acceptable and I'm pretty happy with the system :)<br />
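The 12/4 split described above would look something like this in libvirt cputune form. The host CPU numbers here are hypothetical, since the correct pairs depend on your topology (check lscpu -e for core/thread siblings before pinning):<br />

```xml
<vcpu placement='static'>12</vcpu>
<cputune>
  <!-- pin each guest vCPU to one host thread; in this sketch,
       threads 0,1,8,9 (two physical cores) stay with the host -->
  <vcpupin vcpu='0' cpuset='2'/>
  <vcpupin vcpu='1' cpuset='10'/>
  <vcpupin vcpu='2' cpuset='3'/>
  <vcpupin vcpu='3' cpuset='11'/>
  <!-- ...continue through vcpu='11'... -->
  <emulatorpin cpuset='0-1,8-9'/>
</cputune>
```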
<br />
=== zane's not working box ===<br />
<br />
Hardware:<br />
<br />
* '''MacBook Pro 11,x''' (2014 Model)<br />
* '''CPU''': Intel Core i7-4770HQ<br />
* '''Motherboard''': Apple<br />
* '''GPU''': Iris Pro 5200 for host, GTX 1660 eGPU over Thunderbolt 2 for guest<br />
* '''RAM''': 16GB<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': linux-vfio from aur 5.5.8<br />
* '''qemu''': 4.2.0<br />
* '''libvirt''': 5.10.0 <br />
* '''ovmf''': 1:r26976.bd85bf54c2<br />
* '''libvirt/QEMU''': [https://gist.github.com/xzn/ef338049c91d21e9c1900982b21d9d32 libvirt setup]; [https://gist.github.com/xzn/06760e0e7df6ca325d0f05979aeff3bd qemu setup]<br />
<br />
Description:<br />
* The QEMU script includes lines for setting up a device-mapped file for raw disk access. 3D performance is about 40% to 80% of native depending on the application, with periodic lag spikes/stutter.<br />
<br />
Issues:<br />
* Use [https://github.com/0xbb/apple_set_os.efi apple_set_os.efi] or {{ic|spoof_osx_version}} with [https://www.rodsbooks.com/refind/configfile.html refind] to avoid black screen on start. This prevents Apple firmware from shutting down host iGPU when booting Linux/Windows.<br />
* CPU pinning for the guest is mandatory, as it removes the majority of stutters. After that, isolate host CPU cores and pin emulator/IO threads as well. [https://github.com/PiMaker/Win10-VFIO/blob/master/qemu_fifo.sh Pi's script] for pinning IRQ handlers also helps, as do hugepages for memory.<br />
* Kernel parameters: {{ic|1=intel_iommu=on iommu=pt pcie_acs_override=downstream pci=realloc vfio-pci.ids=10de:2184,10de:1aeb,10de:1aec,10de:1aed,8086:0d01,8086:156d,8086:156c isolcpus=0-5 nohz_full=0-5 rcu_nocbs=0-5 default_hugepagesz=1G hugepagesz=1G hugepages=12 mitigations=off pcie_aspm=off module_blacklist=nvidia audit=0 loglevel=3 quiet}}. Everything starting with {{ic|1=mitigations=off}} is optional. {{ic|1=pci=realloc}} is mandatory, or you will get {{ic|NVRM: This PCI I/O region assigned to your NVIDIA device is invalid: NVRM: BAR1 is 0M @ 0x0 (PCI:0000:0a:00.0)}} errors in dmesg and Error 43 from the Nvidia driver in the guest.<br />
* Add {{ic|vfio_pci vfio vfio_iommu_type1 vfio_virqfd}} to your {{ic|mkinitcpio.conf}} as normal. Add {{ic|1=options kvm ignore_msrs=1}} and {{ic|1=options kvm report_ignored_msrs=N}} to your {{ic|/etc/modprobe.d/kvm.conf}} as well.<br />
* For me, the ACS override patch is mandatory; it is available in linux-vfio from the AUR.<br />
* Enabling MSI for the guest GPU seemingly helps. Using an {{ic|ioh3420}} device and passing the GPU through on top of that does NOT seem to help, and makes PulseAudio output crackle badly. Setting {{ic|1=mixing-engine=off}} for PulseAudio also makes it crackle badly, so consider a USB sound card if needed (I personally use the sound output on my monitor from the guest). While I'm not sure what this option does, setting {{ic|in.buffer-length}} on the PulseAudio audiodev reduces crackling.<br />
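The module settings from the list above end up in two small files, reproduced here for convenience (paths as described in the list):<br />

```
# /etc/modprobe.d/kvm.conf
options kvm ignore_msrs=1
options kvm report_ignored_msrs=N

# /etc/mkinitcpio.conf (MODULES line only)
MODULES=(vfio_pci vfio vfio_iommu_type1 vfio_virqfd)
```

Rebuild the initramfs with mkinitcpio -P after editing mkinitcpio.conf.<br />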
<br />
=== Muata's VFIO setup ===<br />
<br />
Hardware:<br />
<br />
* '''CPU''': i7 4790<br />
* '''Motherboard''': MSI B85M-G43 BIOS/UEFI Version: V3.9 (03/30/2015)<br />
* '''GPU''': NVIDIA GeForce GTX 1060 6GB (MSI Gaming+)<br />
* '''RAM''': 16GB<br />
<br />
Configuration:<br />
<br />
* '''Kernel''': linux-ck 5.5.11-1<br />
* Using '''libvirt/QEMU''': [https://github.com/Muata/VFIO VFIO setup];<br />
* '''qemu''': 4.2.0<br />
* '''libvirt''': 5.10.0 <br />
* No issues at the moment of writing this.<br />
<br />
> I had some network issues; for example, I could not connect to Activision game servers (CoD: MW, Overwatch), but changing the Windows firewall profile from public to private fixed it for now.<br />
<br />
> At first I had Windows on a .raw image and the disk was throttling a lot, so I set up RAID 0 on my two HDDs and created three partitions with LVM: 120 GB for Windows, 700 GB for data (games), and 700 GB for Linux data, passing two of the partitions through as VirtIO-BLK. [https://wiki.archlinux.org/index.php/Software_RAID_and_LVM RAID&LVM]<br />
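Passing one of those LVM partitions through as VirtIO-BLK looks roughly like this in the domain XML; the volume group and logical volume names below are hypothetical:<br />

```xml
<disk type='block' device='disk'>
  <!-- raw block device, no image format overhead -->
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/vg0/win10'/>
  <target dev='vda' bus='virtio'/>
</disk>
```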
<br />
> Audio passthrough is done through the usual PulseAudio solution, works nicely.<br />
<br />
> For anyone wondering how to pass through the GPU (it is not obvious the first time, and it is not spelled out on the wiki): once you have bound the correct group of vfio-pci.ids, the easiest way is to add the card in Virtual Machine Manager: ''Add Hardware'' - ''PCI Host Device'' - your graphics card (for me it was 0000:01:00.0 NVIDIA Corporation GP106 [GeForce GTX 1060 6GB]).<br />
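The virt-manager steps above end up as a hostdev entry in the domain XML, roughly like this (the address matches the example card above):<br />

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- 0000:01:00.0, the GTX 1060 from the example above -->
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
```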
<br />
<br />
== Adding your own setup ==<br />
<br />
Add a new section with your nickname, CPU, motherboard and GPU models, then copy and paste this template to your section:<br />
<br />
{{bc|<nowiki><br />
Hardware:<br />
<br />
* '''CPU''': <br />
* '''Motherboard''': (Revision , BIOS/UEFI Version: )<br />
* '''GPU''': <br />
* '''RAM''': <br />
<br />
Configuration:<br />
<br />
* '''Kernel''': Kernel version (vanilla/CK/Zen/ACS-patched or not).<br />
* Using '''libvirt/QEMU''': link to domain XMLs/scripts/notes (Git repo preferred).<br />
* Issues you have encountered, special steps taken to make something work a bit better, etc.<br />
* Describe your setup loosely here, so that when other wiki users are looking for something, they can easily skim through available setups.<br />
</nowiki>}}<br />
<br />
Replace the proper sections with your own data. Make sure to provide the exact motherboard model, revision (if possible; it should be on both the motherboard itself and the box it came in) and the BIOS/UEFI version you are using. Describe your exact software setup and add a link to your configuration files. (GitHub, GitLab, Bitbucket, etc. can host a public repository which you may update once in a while, but uploading them to pastebins is fine, too. '''Do not''' post the entire config file contents here.)</div>Muata