Talk:PCI passthrough via OVMF

From ArchWiki

Supported Hardware Table

I think it would be a good idea to add a table of working / not working hardware like this:

Motherboard                   | CPU            | Host GPU       | Client GPU            | uname -r     | ACS Override needed | Notes
ASUS Z170-A                   | Intel i7-6700K | Integrated     | ASUS Strix GTX 980 Ti | 4.3.3-3-ARCH | no                  | No problems
Asrock Fatal1ty X370 Gaming X | Ryzen 5 1600X  | Nvidia GTX 480 | Nvidia GTX 960        | 4.14.14-1-ck | no                  | Couldn't get it to work with GTX 480 as guest GPU

--Phiresky (talk) 12:28, 29 January 2016 (UTC)

Well, there's https://docs.google.com/spreadsheets/d/1LnGpTrXalwGVNy0PWJDURhyxa3sgqkGXmvNCIvIMenk/edit#gid=2 and it has (if I read it properly) 208 unique motherboards. I'm not sure if the wiki can handle such a long list. Annisar (talk)
Huh, looks like I missed that. Yeah, that list is probably too long. It's kind of unclean and hard to read, and I can't edit it, though. Phiresky (talk) 13:20, 29 January 2016 (UTC)

Page rewrite

I've fiddled a lot with vfio in the last few months and I've been thinking about restructuring this page based on the information I've gathered over time. Considering that large chunks of the page date all the way back to when it was first written, and that the structure of the page isn't as well researched as what you'd find on the rest of the wiki, I think a restructuring could greatly improve the overall understandability of the page. Here's the overall structure I had in mind:

Prerequisites
Setting up IOMMU
Enabling IOMMU
Ensuring that the groups are valid
Common mistakes/Gotchas
Isolating the GPU
Using vfio-pci (Recommended, Linux 4.1+)
Using pci-stub (Legacy)
Common mistakes/Gotchas
Setting up QEMU-kvm
Using libvirt
Using the qemu command line
Troubleshooting
Error 43 in Windows on Nvidia cards
"Invalid ROM content" in dmesg when trying to claim the guest GPU
...
Extras
ACS override patch
Improving performance on the VM

I've already written a good chunk of what's left, but I'd like some feedback on the proposed structure and what's already there before I proceed. Victoire (talk) 20:55, 8 April 2016 (UTC)
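For context, the "Ensuring that the groups are valid" step in the proposed structure usually amounts to walking /sys/kernel/iommu_groups with a small shell loop. A minimal sketch (the sysfs base path is parameterized here purely for illustration, so the logic can be exercised against a test directory; it defaults to the real path):

```shell
#!/bin/sh
# List every IOMMU group and the PCI devices it contains.
list_iommu_groups() {
    base="${1:-/sys/kernel/iommu_groups}"
    for group in "$base"/*/; do
        [ -d "$group" ] || continue
        group="${group%/}"
        echo "IOMMU group ${group##*/}:"
        for dev in "$group"/devices/*; do
            [ -e "$dev" ] || continue
            # On a real system, `lspci -nns ${dev##*/}` would print a
            # human-readable device name instead of the bare address.
            echo "  ${dev##*/}"
        done
    done
}

list_iommu_groups "$@"
```

Devices that share a group with the GPU generally have to be passed through together, which is what makes this check worth doing before anything else.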

Due to a lack of technical background, I can't really comment on the completeness of your draft TOC compared to what the article covers, but it reads as very well structured. I had a look over the last edits you already committed to the article. Well done; I only want to give a couple of hints:
  1. Please take care when moving sections and editing; it is best to split the move into its own commit. Otherwise it is very difficult for someone else (or yourself later) to follow up on what has been done (see ArchWiki:Contributing#Do not make complex edits at once).
  2. Point 1 is particularly important if you decide content is outdated and remove it during the restructuring. Usually we put Template:Deletion on the part first; maybe you can do that as well. However, during a restructure the content might be in the way. You can also use a temporary section, e.g. "Obsoleted sections", to move the sections to for the time being. If you delete parts, please do so only per subsection (edit button next to the header). This way it is easier to figure out via the history what went where.
  3. For the language, a simple one: we avoid contractions (Help:Style#Language register)
  4. Looking at the existing article, there are some sections (e.g. PCI passthrough via OVMF#Complete example for QEMU .28CLI-based.29 without libvirtd) with very very long code blocks. Too long for good reading. Two alternatives for those (if you need them in the anticipated structure): Either move the long example code blocks to the end of the article (e.g. the Extras section) and crosslink them from above, quoting only the required excerpts. Or quote as well, but move the actual full code to a non-ad-spoiled gist (e.g. gist.github.com). If you move them outside the wiki, please do the removal/replacement with the gist link in one edit (same reason as for deletions, thanks).
  5. In the other talk items there are a few suggestions, e.g. #Article_Specificity and #Supported_Hardware_Table that might be useful to consider going forward.
That said, I hope your push to restructure and refresh the article gets input and help from other contributors to this article. --Indigo (talk) 18:49, 18 April 2016 (UTC)
Thanks for all of the contributions; please remember to keep the scope of the page inside its original design: to achieve PCI passthrough via OVMF. I would like to have the long blocks of code removed/relocated, especially those that do not necessarily explain the actual process of getting passthrough to work. Thanks!
—This unsigned comment is by Naruni (talk) 20:42, 24 April 2016‎. Please sign your posts with ~~~~!
I think it may have lost some focus because OVMF was not explicitly pointed to in the intro. As suggested in #Article_Specificity, it may be useful to split content unrelated to OVMF into a separate article, e.g. PCI passthrough, and this article may stay standalone or become a subpage of it (e.g. PCI passthrough/OVMF), whatever is best to keep focus, while not duplicating instructions or losing contributions which are not directly OVMF related.
If you look at the above structure Victoire has worked out, does it contain sections you would consider out-of-scope for OVMF?
--Indigo (talk) 13:04, 26 April 2016 (UTC)
So I have reached the point where most of the structure is now in place, and most of the work left is either related to addressing specific issues people are likely to encounter or rephrasing some parts to make them more readable, such as the part on setting up the guest OS, which could use some fleshing out.
I have also taken the liberty of adding a Performance Tuning section, which may or may not be out of place considering the scope of the article. I would appreciate some feedback on whether or not this belongs here.
Also, both for the sake of readability and because I couldn't test the instructions myself, I removed most of the QEMU-related (without libvirt) instructions. While it seemed like a good decision at the time, I would like to get some feedback on this, as it is still a removal of potentially useful instructions (although some of them did seem dubious).
Victoire (talk) 14:07, 9 May 2016 (UTC)
I was hoping others with topic experience chime in to feedback, perhaps that happens still. Arguably the whole article is about performance tuning, so I would see the section you added in scope. It is an interesting read even for someone like me, who has not used any of this. For the QEMU point I'm unable to give feedback. --Indigo (talk) 08:02, 30 June 2016 (UTC)
I agree that some of the QEMU scripts removed from the article were too detailed and still not explained well enough. However, when I originally set up my passthrough system with the help of this article, I preferred the scripted way over libvirt, and I still do after a long and good experience with the method; at the time, the scripts in the article pointed me in the right direction. I'd really like to contribute smaller and simpler script examples with decent explanations for every core parameter needed for a VM started with QEMU commands, so anyone with an opinion about this, please comment before I start working on it. The scripted way is great for many use cases and I really think it needs to be addressed better in this article. Nd43 (talk) 07:28, 20 April 2017 (UTC)

Additional sections

In case I forget, I made a list of things I wanted to add to this article some time ago; I just haven't found the time to write those parts yet.

  • Performance tuning
    • Kernel config (Voluntary preemption, 1000 Hz timer frequency, dynticks, halt_poll_ns, etc.)
    • Advanced CPU features
      • 1GB hugepages
      • NUMA node assignment
      • Hardware virtual interrupt management (AVIC/APICv)

  • Special Procedures
    • Using identical guest and host GPUs
    • Passing the boot GPU (see here)
    • Bypassing the IOMMU groups (ACS override patch)
  • Additional devices
    • GPU soundcard (MSI interrupts)

As of now, I don't have the sort of hardware that would allow me to test these special cases, so it's a bit hard for me to justify writing those sections, but it might be interesting to add them someday.
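For the 1GB hugepages item, the usual approach (a sketch only; the values are illustrative and depend on the machine) is to reserve the pages on the kernel command line, since 1 GiB hugepages generally cannot be allocated after boot:

```
# Appended to the kernel command line, e.g. via GRUB_CMDLINE_LINUX_DEFAULT
# in /etc/default/grub (eight 1 GiB pages is an illustrative value):
default_hugepagesz=1G hugepagesz=1G hugepages=8
```

The VM then needs to be told to back its memory with hugepages (in libvirt, via a memoryBacking element in the domain XML).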

Victoire (talk) 02:16, 5 August 2016 (UTC)

EDIT1 : Victoire (talk) 03:10, 9 August 2016 (UTC)

EDIT2 (Might need to revisit some of those now that Ryzen comes with most of these features) : Victoire (talk) 00:09, 7 August 2017 (UTC)

Hotplug GPU

Found something interesting for hotplugging the GPU without having to restart X:

http://arseniyshestakov.com/2016/03/31/how-to-pass-gpu-to-vm-and-back-without-x-restart/ repo: https://gist.github.com/ArseniyShestakov/dc152d080c65ebaa6781

It removes the card from the graphics driver module (here radeon) and adds it to the vfio-pci module.

I'm currently testing this. —This unsigned comment is by Xtf (talk) 22:25, 12 August 2016‎. Please sign your posts with ~~~~!
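The rebind described in that article boils down to unbinding the device from its current driver, setting driver_override, and re-running driver probing. A minimal sketch (the function name is mine, and the sysfs base is parameterized purely for illustration; it defaults to the real path):

```shell
#!/bin/sh
# Rebind a PCI device to vfio-pci via driver_override.
# $1: PCI address (e.g. 0000:01:00.0), $2: sysfs base (optional).
rebind_to_vfio() {
    dev="$1"
    sysfs="${2:-/sys/bus/pci}"
    # Detach from the current driver, if any (e.g. radeon/nouveau).
    if [ -e "$sysfs/devices/$dev/driver/unbind" ]; then
        echo "$dev" > "$sysfs/devices/$dev/driver/unbind"
    fi
    # Tell the PCI core which driver should claim this device next.
    echo vfio-pci > "$sysfs/devices/$dev/driver_override"
    # Ask the kernel to re-run driver matching for unbound devices.
    echo "$dev" > "$sysfs/drivers_probe"
}
```

Running this against real hardware requires root and that the vfio-pci module is already loaded; reversing it is the same dance with the original driver name in driver_override.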

Necessity of ovmf-git

Right now, the article recommends using a package from the AUR, ovmf-gitAUR. The only reason for that is because this version comes with the EFI var template files while the ovmf package from the Extra repos doesn't. I'm not entirely sure how complex those templates are and if there is a way to install OVMF properly without them, though, and it would definitely be best if we could do without the use of an AUR package. Victoire (talk) 17:40, 22 November 2016 (UTC)

I agree, but has the current maintainer of ovmf been made aware of this issue yet?
--Lineks (talk) 17:33, 9 April 2017 (UTC)
I think this was fixed in the latest update in March. See https://git.archlinux.org/svntogit/packages.git/commit/trunk?h=packages/ovmf&id=af39dbbc3037734379416016e55d41548a2c2968
Bug report closed as implemented: https://bugs.archlinux.org/task/47101
I'll test and confirm.
--WeissJT (talk) 23:15, 8 May 2017 (UTC)
Confirmed, ovmf works fine. ovmf-gitAUR is not needed anymore.
--WeissJT (talk) 23:17, 12 May 2017 (UTC)
Thanks for the confirmation, made the edit on the plain QEMU section too. Nd43 (talk) 07:30, 13 May 2017 (UTC)

Inaccurate hugepage advice

The static hugepage section says: "On a VM with a PCI passthrough, however, it is not possible to benefit from transparent huge pages, as IOMMU requires that the guest's memory be allocated and pinned as soon as the VM starts. It is therefore required to allocate huge pages statically in order to benefit from them."

This may have been true previously but in my experience (currently verified on an Ubuntu Trusty box with kernel 4.4) this is no longer accurate:

gpu-hypervisor:~$ uname -a
Linux rcgpudc1r54-07 4.4.0-59-generic #80~14.04.1-Ubuntu SMP Fri Jan 6 18:02:02 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
gpu-hypervisor:~$ ps auxww | grep qemu | grep -v qemu
libvirt+ 105915  297 95.3 325650548 251763980 ? SLl  Apr10 9725:30 /usr/bin/qemu-system-x86_64 -name instance-00000167 -S -machine pc-i440fx-xenial,accel=kvm,usb=off -cpu host -m 245760 -realtime mlock=off -smp 24,sockets=2,cores=12,threads=1 -object memory-backend-ram,id=ram-node0,size=128849018880,host-nodes=0,policy=bind -numa node,nodeid=0,cpus=0-11,memdev=ram-node0 -object memory-backend-ram,id=ram-node1,size=128849018880,host-nodes=1,policy=bind -numa node,nodeid=1,cpus=12-23,memdev=ram-node1 -uuid 87681aae-2bc7-4b2e-b17b-f407cf23701e -smbios type=1,manufacturer=OpenStack Foundation,product=OpenStack Nova,version=12.0.4,serial=4c4c4544-0059-4710-8036-c3c04f483832,uuid=87681aae-2bc7-4b2e-b17b-f407cf23701e,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-instance-00000167/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=/var/lib/nova/instances/87681aae-2bc7-4b2e-b17b-f407cf23701e/disk,format=raw,if=none,id=drive-virtio-disk0,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -drive file=/var/lib/nova/instances/87681aae-2bc7-4b2e-b17b-f407cf23701e/disk.eph0,format=raw,if=none,id=drive-virtio-disk1,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk1 -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=31 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:5e:21:1c,bus=pci.0,addr=0x3 -chardev file,id=charserial0,path=/var/lib/nova/instances/87681aae-2bc7-4b2e-b17b-f407cf23701e/console.log -device isa-serial,chardev=charserial0,id=serial0 -chardev pty,id=charserial1 -device isa-serial,chardev=charserial1,id=serial1 -device usb-tablet,id=input0 -vnc 0.0.0.0:0 -k en-us -device 
cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device vfio-pci,host=05:00.0,id=hostdev0,bus=pci.0,addr=0x6 -device vfio-pci,host=06:00.0,id=hostdev1,bus=pci.0,addr=0x7 -device vfio-pci,host=85:00.0,id=hostdev2,bus=pci.0,addr=0x8 -device vfio-pci,host=86:01.1,id=hostdev3,bus=pci.0,addr=0x9 -device vfio-pci,host=84:00.0,id=hostdev4,bus=pci.0,addr=0xa -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0xb -msg timestamp=on
gpu-hypervisor:~$ grep AnonHuge /proc/meminfo
AnonHugePages:  246108160 kB

I don't have an Arch system to verify this on, but I don't expect it to be distro specific. Any objection to removing this?

Have you tested memory performance inside the VM? As in, statically allocated hugepages vs transparent, with VFIO devices bound and actively used inside the VM? DragoonAethis (talk) 11:18, 13 April 2017 (UTC)

Using identical guest and host GPUs - did not work for me.

I had to create the script vfio-pci-override.sh in bin, not sbin, or it would not find the file (and update the modprobe and mkinitcpio configuration accordingly).

Furthermore, I had to change the /sys/devices path to search one directory deeper. The script /bin/vfio-pci-override.sh then looks like this:

#!/bin/sh

for i in /sys/devices/pci*/*/*/boot_vga; do
    if [ "$(cat "$i")" -eq 0 ]; then
        GPU="${i%/boot_vga}"
        AUDIO="$(echo "$GPU" | sed -e "s/0$/1/")"
        echo "vfio-pci" > "$GPU/driver_override"
        if [ -d "$AUDIO" ]; then
            echo "vfio-pci" > "$AUDIO/driver_override"
        fi
    fi
done

modprobe -i vfio-pci


This situation is the only one that worked for me. Eggz (talk) 19:12, 14 April 2017 (UTC)

Passthrough via OVMF without libvirt

I don't have a graphics card with UEFI firmware to test this, but it should work (with the ovmf package and not ovmf-git); please test and add it to the page if it does work.

qemu-system-x86_64 \
    -enable-kvm \
    -m 8192 \
    -M q35 \
    -cpu host \
    -smp 4,sockets=1,cores=4,threads=1 \
    -vga none \
    -drive if=pflash,format=raw,readonly,file=/usr/share/ovmf/ovmf_code_x64.bin \
    -drive if=pflash,format=raw,file=/tmp/ovmf/qemu-vm-01/ovmf_vars_x64.bin \
    -device vfio-pci,host=05:00.0,addr=09.0,multifunction=on,x-vga=on \
    -device vfio-pci,host=05:00.1,addr=09.1 \
    -drive file=diskimg.qcow2,format=qcow2

Dhead (talk) 15:58, 8 May 2017 (UTC)

And you should copy the vars first
mkdir -p /tmp/ovmf/qemu-vm-01
cp /usr/share/ovmf/ovmf_vars_x64.bin /tmp/ovmf/qemu-vm-01/ovmf_vars_x64.bin
Dhead (talk) 16:08, 8 May 2017 (UTC)
Oh, indeed the official ovmf package seems to have been updated after a long while to include the variable files; at least, I was told they were missing from it before. I'll test this out and do a writeup about it, as I'm actually writing a draft for adding the scripted method back to the article; check here: User:Nd43/Scripted QEMU setup without libvirt. It's still incomplete, but it is already getting pretty big and detailed; any comments welcomed. I'm also wondering whether it should be a subpage or not when I finally start merging it back into this actual article. Nd43 (talk) 08:19, 9 May 2017 (UTC)
Nice guide but I don't think the ArchWiki is the right place for it, it's too specific and too detailed to be incorporated into this article and adding it as a subpage will end with duplicated content which will never be in sync.
The starting point of this article is that the user (an Arch user, not a copy-paste monkey) has already gone through the QEMU article, so everything not directly related to PCI passthrough, like systemd services, audio, or putting it all in a script, should not be added to this article. Instead, we should just add the very basic command switches related to using OVMF.

I might be a little digressing, but I also think this page should be torn down and reworked. All the performance tuning should be in its own subpage of the main QEMU article. Using OVMF instead of the default SeaBIOS also should be in a subpage (of QEMU). So we will be left with just the PCI passthrough stuff here. Dhead (talk)
How about mentioning just the passthrough switches for QEMU here? The -device vfio-pci,host=... ones, even OVMF isn't *required* for the passthrough to work (although it is easier to work with). Other than that, passthrough can be thrown in for just about any QEMU VM, and it's up to the guest OS to handle these devices somehow. Other QEMU args are specific to that VM. DragoonAethis (talk) 10:09, 9 May 2017 (UTC)
I agree, just adding the related QEMU command switches should be more than enough. P.S. OVMF might be preferred for VGA PCI passthrough, as at least on my system SeaBIOS seems to have some issues with USB input devices on boot/in the bootloader (not an issue once the OS/kernel is running). Dhead (talk) 12:05, 9 May 2017 (UTC)
You're probably right, but I still think many details of my writeup should be easily available for Arch users looking just to deploy a QEMU VM with PCI passthrough. I also agree that the whole article needs a rewrite in some parts, but I really don't want to see the actually relevant information get removed temporarily or buried behind too many internal links to other articles. I've already seen how the article has suffered over the past few years, and currently its structure is partly very unclear. Does anyone have additional suggestions for the information to be added, or should I just start editing with small additions to see how they're received? Nd43 (talk) 18:44, 9 May 2017 (UTC)
Passing through the GPUs to a ready VM is just about adding those two switches and making sure you have some "extras" in if you're using a specific setup (GeForces requiring the spoofed vendor ID, etc). It might make more sense to let people configure their VM first (= installing OS under UEFI, installing VirtIO drivers, configuring their base environment etc) and THEN explain how to pass through the GPU to that ready VM - this might fit pretty well in this article.
Most of these copy-n-paste guides assume a very specific setup (memory, CPU cores, disks, etc), and there is value in those, since you can peek at complete solutions that are known to work under specific hardware combinations. However I'd rather create a separate VFIO "examples" page, where a brief description of that user's hardware, software (kernel + cmdline, additional VM configuration steps, etc) and the QEMU script/libvirt domain file would be posted. Exact specifics (additional scripts, configuration files for Synergy, PulseAudio and such) would have to be posted on GitHub/GitLab/somewhere else as to not clutter the wiki beyond reason (older examples would have to be removed or updated over time, since QEMU/libvirt break things once in a while). Does that sound fine? DragoonAethis (talk) 21:55, 10 May 2017 (UTC)
I like your thinking. I made an initial revision edit at PCI passthrough via OVMF#Plain QEMU without libvirt describing the bare minimum for achieving a practical QEMU setup with links to other relevant articles. I'm probably going to continue editing the section later as I want to pay attention to some details and terminology a bit more, but for now I think it will do for provoking people to make additions/changes to reach better guidance on the topic. So, comments and edits highly welcomed. I also removed the old links for the time being, even though I really liked Melvin's script examples, they indeed belong to the "See also" section or somewhere else totally. Feel free to relocate the links to a logical order. Nd43 (talk) 15:32, 13 May 2017 (UTC)
I've created the PCI_passthrough_via_OVMF/Examples page, feel free to contribute your working setups there. There's a template etc available to make adding new entries easier and more consistent. DragoonAethis (talk) 14:37, 19 May 2017 (UTC)

Moving the full tutorial elsewhere?

A number of people on this talk page have already mentioned how having a complete tutorial like this on the Arch Wiki makes some parts of the article somewhat redundant with other pages on the wiki, and that it would probably be better if the article were split into multiple parts and spread across a number of pages. At the same time, having a full tutorial like this here means it can't really cover other distros without breaking the contribution guidelines, which limits the article's reach somewhat.

However, I know a lot of people (myself included) have been convinced to try Arch after seeing a number of quality articles on the official wiki, and from what I've seen elsewhere, this very article has played that role for some people. I've seen people on /r/vfio and elsewhere use this article as their primary reference for people who want to set up their machine this way. That's somewhat what I was aiming for when I started reworking this article to try and adapt AW's blog into something less intimidating to read, and it's great to see it has managed to evolve into what it is now thanks to all the contributions that have happened since then.

I'd like to know, according to more experienced Arch Wiki contributors than me, whether or not this article actually belongs here, or if the current page shouldn't be moved to the VFIO subreddit and the Arch page torn down and its content split on other QEMU-related pages.

Victoire (talk) 14:56, 10 July 2017 (UTC)

IMO this article is fine here - it's widely linked, kept mostly up-to-date, clearly explained and doesn't require two days of research to even attempt this setup. I would agree making an exception and allowing other distro-specific information to be posted here would be great, but moving the entire post to /r/vfio, where the wiki pages don't work on mobile and limit contributors to the subreddit's moderators or manually added people will limit the article's reach even more and cause it to go out-of-date in a few months. (If the wiki is open to all registered redditors, most more-popular wiki pages *will* be defaced.) DragoonAethis (talk) 12:24, 11 July 2017 (UTC)
Victoire, this is the Arch Wiki, users of other distros will be better served with full tutorials elsewhere, /r/VFIO has a wiki.
Regarding your choice of words: the page shouldn't be moved anywhere. Copied somewhere else, yes; there you could just have a simple full tutorial from start to finish to set up host and guest QEMU machines with GPU passthrough, but the contents of this page should stay here so Arch Wiki users and maintainers can change it as needed, which I gather is what you meant by "moved". In fact, from the little I've seen of /r/VFIO, there are nice examples and tutorials that are buried and not being mentioned and linked in the /r/VFIO wiki.
Regarding the needed updates to this page, for a first step, I would say the Performance tuning section needs to go into Tips and tricks in the QEMU page (which deserves its own subpage), and the article name should be changed to PCI passthrough so it would reflect the actual content of the page, which isn't UEFI specific.
Also, the complete examples page seems superfluous. Why not have a wiki page in /r/VFIO with known compatible motherboards, or motherboard+GPU setups, as all the rest of the devices (recent CPU, RAM, storage) shouldn't matter? I wouldn't be surprised if the moderators would remove it. Dhead (talk) 14:05, 11 July 2017 (UTC)

Set firmware to UEFI

>"In the "Overview" section, set your firmware to "UEFI". If the option is grayed out, make sure that you have correctly specified the location of your firmware in /etc/libvirt/qemu.conf and restart libvirtd.service."

I see the Overview section, but I do not see "firmware" or "UEFI" anywhere on that section. There is nothing "grayed out". I only see:

  • Name
  • UUID
  • Status
  • Title
  • Description
  • Hypervisor
  • Architecture
  • Emulator
  • Chipset (i440FX or Q35 are the options)

This makes me think the information could be outdated. I am not able to proceed and I suspect this to be the cause. Henstepl (talk) 19:57, 3 August 2017 (UTC)

This can be set only during the VM creation - check "Customize installation" on the final creation wizard screen and the relevant option will show up. If you can't select this during the VM creation, install `ovmf-git` or `ovmf`. DragoonAethis (talk) 20:05, 3 August 2017 (UTC)
Yes, I am looking at exactly that, but there is no "Firmware" option at all. It just skips directly from "Emulator" to "Chipset". Nothing greyed out. Simply not there. Henstepl (talk) 23:07, 3 August 2017 (UTC)
Are you sure you've got ovmf or ovmf-git installed? If you've built it yourself, make sure /etc/libvirt/qemu.conf contains the proper OVMF binary path: nvram = [ "/usr/share/ovmf/x64/ovmf_x64.bin:/usr/share/ovmf/x64/ovmf_vars_x64.bin" ] - there should be a section like this in that file, but commented and possibly with different paths. Point it at your OVMF binaries (pacman -Ql ovmf or ovmf-git will show all files in that package). DragoonAethis (talk) 10:20, 4 August 2017 (UTC)
Yes, ovmf has been installed from the start. nvram = ["/usr/share/ovmf/ovmf_code_x64.bin:/usr/share/ovmf/ovmf_vars_x64.bin"], which contains the paths listed by pacman -Ql ovmf, has been in /etc/libvirt/qemu.conf. Thanks for your advice, but it hasn't elucidated anything I've done wrong, and there is still no Firmware option. Henstepl (talk) 04:38, 5 August 2017 (UTC)
Okay, not sure why this option doesn't show up, but there's also a way to manually manipulate the domain XML file to use OVMF. Here are the relevant lines: once you've created the VM, open your terminal, enter sudo EDITOR=your-editor-like-nano-or-vim-or-something virsh edit your-domain-name and make the <os> section there contain the <loader> and <nvram> entries pointing to the proper OVMF files. If you're missing the vars file, here it is from my VM. DragoonAethis (talk) 11:39, 5 August 2017 (UTC)
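For reference, the resulting <os> section would look roughly like this (a sketch only: the machine type and the nvram file name are placeholders that vary with the installed ovmf package and libvirt version):

```xml
<os>
  <type arch='x86_64' machine='pc-i440fx-2.9'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/ovmf/ovmf_code_x64.bin</loader>
  <nvram>/var/lib/libvirt/qemu/nvram/my-vm_VARS.bin</nvram>
</os>
```

The loader points at the read-only OVMF code image, while nvram is a per-VM writable copy of the variable store.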
I was hoping that a solution to this would help me avoid the error message I have been getting when I try to start my VM:
Unable to complete install: 'internal error: process exited while connecting to monitor: 2017-08-03T03:20:54.793536Z qemu-system-x86_64: -chardev pty,id=charserial0: char device redirected to /dev/pts/2 (label charserial0)

Could not access KVM kernel module: Permission denied failed to initialize KVM: Permission denied'

Traceback (most recent call last):

 File "/usr/share/virt-manager/virtManager/asyncjob.py", line 88, in cb_wrapper
   callback(asyncjob, *args, **kwargs)
 File "/usr/share/virt-manager/virtManager/create.py", line 2288, in _do_async_install
   guest.start_install(meter=meter)
 File "/usr/share/virt-manager/virtinst/guest.py", line 477, in start_install
   doboot, transient)
 File "/usr/share/virt-manager/virtinst/guest.py", line 405, in _create_guest
   self.domain.create()
 File "/usr/lib/python2.7/site-packages/libvirt.py", line 1062, in create
   if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)

libvirtError: internal error: process exited while connecting to monitor: 2017-08-03T03:20:54.793536Z qemu-system-x86_64: -chardev pty,id=charserial0: char device redirected to /dev/pts/2 (label charserial0) Could not access KVM kernel module: Permission denied

failed to initialize KVM: Permission denied
Which prevents me from creating a VM with those settings... Welp. Maybe I will just reinstall everything from the start. Henstepl (talk) 05:48, 7 August 2017 (UTC)

I had the same issue and fixed it by forcing the kvm gid to 78: an update of systemd gave the kvm group a dynamic id, while libvirtd expects it to be 78. groupmod -g 78 kvm and a reboot fixed it (https://bbs.archlinux.org/viewtopic.php?id=228936). Makz (talk) 22:15, 9 September 2017 (UTC)

Boot GPU Passthrough

I managed to pass through the boot GPU (the only one in my system) by:

  • setting the GRUB option GRUB_GFXPAYLOAD_LINUX=text (for example in /etc/default/grub)
  • and adding the kernel parameter earlymodules=vfio-pci (although I'm not sure if that is necessary at all)

That prevents the initramfs from initializing a graphical console on the boot GPU, allowing the VM to take it over directly after GRUB. However, you can't see your boot status and will have to type your LUKS password blind if you encrypted your root partition.
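Put together, the two changes described above would look something like this in /etc/default/grub (the "..." stands for whatever parameters are already there; as noted, earlymodules=vfio-pci may not be strictly necessary):

```
GRUB_GFXPAYLOAD_LINUX=text
GRUB_CMDLINE_LINUX_DEFAULT="... earlymodules=vfio-pci"
```

After editing, the GRUB configuration has to be regenerated for the change to take effect.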

The hardware I used: AMD Ryzen 7 1700 CPU, mainboard: Biostar X370GTN, passthrough GPU: Nvidia GeForce GTX 970

-- dasisdormax (talk) 11:33, 8 August 2017 (UTC)

I followed the steps in the guide and added video=efifb:off but then I saw messages like Invalid PCI ROM header signature: expecting 0xaa55, got 0xffff.

I resolved this by putting my main GPU in a secondary slot, putting a spare GPU in the primary slot, disabling all the vfio-pci stuff (i.e. letting nouveau load) and making a copy of the ROM from /sys/devices/pci0000:40/0000:40:01.3/0000:41:00.0/rom. It's worth noting that ROM images from places like TechPowerup don't work because they use a different format, the one used by nvflash.

Once I had the ROM image, I had to add <rom file='/path/to/rom'/> to my domain XML and all was well.

I used an AMD Threadripper 1920X, ASUS PRIME X399-A and MSI Gaming X GTX1080Ti.

-- bobobo1618 (talk) 15:43, 10 February 2018 (UTC+1)

Slow boot linked to the RAM amount

> Windows 10 VM with hugepages, 16 GB RAM = 1m40s from start to login screen; the same VM reduced to 1 GB RAM = 10 seconds from start to login screen (i7-6900K + 64 GB DDR4-2133 non-ECC), brand new installation

The same setup on a dual-Xeon + 64 GB DDR3-1600 ECC RAM: 16 GB RAM = 13 seconds, 1 GB = 13 seconds.

While the VM is booting, all assigned CPUs are working at 100%.

Does the ECC RAM help it boot faster? Is it an OVMF-related issue? Maybe the virtual BIOS is checking the RAM before starting the OS?

While it's loading, I have a black screen; I don't think the GPU is handled by the VM at this moment.

I saw some users on forums having this issue and tried the latest ovmf build, but no fix so far.

Makz (talk) 09:10, 10 September 2017 (UTC)

Question about virtlogd and iommu=pt

Is enabling virtlogd.socket optional? If so, that should be stated.

Is iommu=pt only for AMD? It looks that way on this page: https://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM

Their instructions say:

intel_iommu=on # Intel only

iommu=pt iommu=1 # AMD only

—This unsigned comment is by Limero (talk) 19:59, 16 January 2018‎. Please sign your posts with ~~~~!

virtlogd.socket must be enabled; libvirt will otherwise fail to start. iommu=pt is separate from intel_iommu=on and amd_iommu=on: iommu=pt enables the IOMMU hardware (on any platform) only for the passthrough devices and lets all the other devices (not passed to the VMs) do whatever they want. The "normal" IOMMU mode isolates all devices and makes sure they behave nicely (no writes out of their address space, DMA only to their assigned section of memory, etc.), but this may cause problems on some older platforms. DragoonAethis (talk) 20:27, 16 January 2018 (UTC)
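So the two lines quoted from the linked page are orthogonal options rather than per-vendor equivalents. As a rough sketch of how they are typically combined on the kernel command line (exact behavior varies by kernel version):

```
# Intel (VT-d): the IOMMU must be enabled explicitly,
# optionally with passthrough mode for non-VM devices
intel_iommu=on iommu=pt

# AMD (AMD-Vi): the linked page suggests
iommu=pt iommu=1
```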

Slowed down audio pumped through HDMI on the video card

In this section it is mentioned that the "MSI_util" program does not work on Windows 10 64-bit; however, on the page linked earlier[1] there is now a "MSI_util_v2" program available, and this program did work, for me at least. I would love it if other people could test it and report back so I can update the page.

Victorheld (talk) 11:21, 5 June 2018 (UTC)

I used it successfully on Windows 8.1 64-bit too, +1 for replacing it with v2. DragoonAethis (talk) 12:02, 5 June 2018 (UTC)

UEFI (OVMF) Compatibility in VBIOS

Sorry if I am using this wrong and for just butting in as a new user, but I spent quite a lot of time getting a VBIOS card without UEFI working, and I feel like this guide could be clearer. I have an HD5970 card currently passing through using OVMF / VFIO (not pci-stub) to Windows 10, without flashing the card, and it would appear that CrossFire is working as well in Windows.

It is simple enough: get the VBIOS file from the card itself, use GOPupD, and pass through the updated ROM file in the VM XML using rom file=. Flashing should be a last resort (IMHO) due to the danger of bricking. I was going to just use pci-stub, but I do not have that kernel module for one reason or another, so I had to make VFIO work (I'm using Antergos with the 4.17.2-1-ARCH #1 SMP PREEMPT kernel).
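The rom file= part mentioned above sits inside the passed-through device's <hostdev> entry in the libvirt domain XML. Roughly (the PCI address and file path here are placeholders):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <rom file='/path/to/patched_vbios.rom'/>
</hostdev>
```

This makes QEMU present the patched ROM image to the guest instead of reading the ROM from the card, which is what avoids flashing.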

There are always hidden dangers with flashing that can be avoided by using ROM files in the XML. In my case, the HD5970 is dual-GPU, and each GPU has a different device ID and, it seems, a slightly different VBIOS file. I did not know this going in (and this is what took me a while). Had I just flashed the same GOPupD VBIOS, I likely would have bricked my card (when I passed the wrong ROM files in the XML to each GPU, it crashed the whole system; I would hate to see what it would have done if I had flashed it to the card itself).

I know that 8 year old hardware isn't everybody's concern, but the miracle of Linux is the breath of life it provides to older hardware. I'm using an i7 980x on a P6X58D Premium motherboard with 24GB of RAM ... I really don't feel a need to upgrade my rig when it still kicks butt. Eventually I am putting in a new GPU so I can use newer games and this will all seem very trivial to me, but may help other people unable to upgrade.

Also, with my 5970 each GPU was in its own IOMMU group. I am not sure if all dual GPUs are like that, but perhaps telling people that dual GPUs may be split up may help them. I could not get the card to function until I separated the second GPU on that card, gave it to the VM, and gave it the correct GOPupD ROM file. I don't know where to ask this question, but I do wonder if this card could use one GPU for the host and one for the guest (same card, but it has a different IOMMU group for each GPU and has 2 x DVI and a mini-DP). That would be an interesting project. Dracodeus (talk) 09:53, 27 June 2018 (UTC)