ArchWiki - User contributions [en] (user FoXy), MediaWiki 1.41.0, feed retrieved 2024-03-29T10:06:47Z<br />
[https://wiki.archlinux.org/index.php?title=Gaming&diff=776635 Gaming], 2023-04-29T19:36:54Z, FoXy: Add Gamescope<br />
<hr />
<div>[[Category:Gaming]]<br />
[[de:Spiele]]<br />
[[ja:ゲーム]]<br />
[[lt:Games]]<br />
[[zh-hans:Gaming]]<br />
{{Related articles start}}<br />
{{Related|List of games}}<br />
{{Related|Video game platform emulators}}<br />
{{Related|Xorg}}<br />
{{Related|Gamepad}}<br />
{{Related|Wine}}<br />
{{Related articles end}}<br />
<br />
Linux has long been considered an "unofficial" gaming platform: supporting it has not been a priority for most gaming organizations. This has changed rapidly from 2021 onward, as big players like [[Wikipedia:Valve Corporation|Valve]], the [[Wikipedia:CodeWeavers|CodeWeavers]] group and the [[Wikipedia:Open-source software development|community]] have made tremendous improvements to the ecosystem, making Linux a genuinely viable gaming platform. Furthermore, more and more indie development teams use cross-platform engines so that their games can be built and run on Linux.<br />
<br />
When it comes to gaming, most users' thoughts turn to popular [[Wikipedia:AAA games|AAA games]], which are usually written for the [[Wikipedia:Microsoft Windows|Microsoft Windows]] platform. This is understandable, but it is far from the only option. Refer to [[#Game environments]] and [[#Getting games]] further down the page for software to run games from other platforms.<br />
<br />
If, however, you are set on getting games written for [[Wikipedia:Microsoft Windows|Microsoft Windows]] to work on Linux, a different mindset, set of tools and approach is required: understanding the internals and providing functional substitutes. Read [[#Game technicality]] below.<br />
<br />
== Game technicality ==<br />
<br />
There are ultimately ''two major problems'' that arise from attempting to play [[Wikipedia:AAA games|AAA games]] on Linux. They are:<br />
<br />
* '''Graphics SDK'''<br />
** Games written and compiled for an API that Linux does not recognize (such as [[Wikipedia:DirectX|DirectX]]).<br />
* '''Graphics Hardware'''<br />
** Drivers necessary to handle game rendering (such as the [[Wikipedia:NVIDIA|NVIDIA]] drivers).<br />
<br />
From these problems, ''two further complications'' arise, in particular:<br />
<br />
* '''General Library Dependencies'''<br />
** Libraries necessary for general-purpose operations during gameplay, such as saving in-game or loading configuration (e.g. Microsoft Visual C++, MFC, .NET).<br />
* '''Incompatible Interfaces'''<br />
** Aside from the frameworks mentioned above, there is a further problem with binary formats and compiled code generated by Windows which Linux does not recognize.<br />
** Lastly, lacking the appropriate driver to do the rendering leaves you with a cart but no horse: everything else may be in place, yet nothing can run.<br />
<br />
The APIs above forward their graphics calls to the underlying driver, which then talks to the GPU hardware. AMD users fortunately have open-source drivers released by AMD itself, which already resolves a huge part of the problem. NVIDIA users have to rely either on the proprietary driver, which ships as binary blobs, or on the reverse-engineered Nouveau driver, which typically performs poorly for gaming.<br />
<br />
A huge number of games use [[Wikipedia:DirectX|DirectX]] as their main graphics SDK. Linux natively supports only [[Wikipedia:OpenGL|OpenGL]] and [[Wikipedia:Vulkan (API)|Vulkan]]; it does not support [[Wikipedia:DirectX|DirectX]] or any of the aforementioned technologies (Visual C++, MFC, .NET) by itself.<br />
<br />
Instead, several open-source equivalents have been written which attempt to provide '''identical''' functionality, ultimately achieving the same result from a graphics point of view. These projects re-implement what the original SDK calls would achieve, treating the originals as a [[Wikipedia:black box|black box]]. Popular ones include:<br />
<br />
* [[Wine]] (Wine Is Not an Emulator) [provides a program loader, self-written dependencies, interop and more]<br />
* [[Proton]] (forked Wine project, optimized for Steam by Valve)<br />
* [[Wikipedia:Mono (software)|Mono]] (.NET alternative)<br />
* [[Wikipedia:Media Foundation|MF-Media]] (media foundation dependencies)<br />
<br />
For example, a DirectX call to load, transform and shade vertices may be re-written from scratch in a new ''.dll''/''.so'' owned by Wine, implementing the developers' best understanding of what the function does underneath and forwarding it to an [[Wikipedia:OpenGL|OpenGL]] equivalent, effectively achieving similar results. Since these calls are direct equivalents and are treated "as if" DirectX were running, performance is usually not significantly impacted (aside from the start-up overhead of the interop layer).<br />
<br />
These tools are usually installed together on the system. A '''prefix''' (Wine's terminology for a directory mimicking a Windows sandbox) is created and configured. Dependencies are installed '''inside the prefix''' (the "sandbox" still needs the game's [[Wikipedia:Library (computing)|redistributables]]), often with [https://wiki.winehq.org/Winetricks winetricks], followed by an attempt to run the game "as if" it were executed on Windows.<br />
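<br />
As a rough sketch of this workflow (the prefix path and the winetricks verbs below are only examples; which verbs a given game actually needs varies):<br />
<br />
$ WINEPREFIX=~/.wine-mygame WINEARCH=win64 wineboot<br />
$ WINEPREFIX=~/.wine-mygame winetricks corefonts vcrun2017<br />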
<br />
This, nowadays, fortunately works for most games (aside from [https://www.keengamer.com/articles/features/opinion-pieces/kernel-level-anti-cheat-and-7-games-or-programs-that-use-it/ anti-cheat protected ones], which require a kernel driver that Wine/Proton does not yet provide). If a game does not work, it is usually the result of incompatible packages, missing dependencies or functionality not yet implemented by Wine/Proton.<br />
<br />
If the above process sounds tedious or complicated, [[Lutris]] provides runners and sandboxes that handle dependencies for you when you install games.<br />
<br />
== Common game dependencies ==<br />
<br />
To gain a more in-depth understanding of what you will be doing if you decide to go the Wine/Proton route, it is worthwhile to cover the common dependencies that games require in order to run. The architecture also needs to be kept in mind, whether 32-bit (x86) or 64-bit (x86_64), preferably both.<br />
<br />
A prefix needs to be populated with the following in order to run '''most''' Windows games.<br />
<br />
=== Mandatory (for high coverage) ===<br />
<br />
{{Style|Links to wiki articles that contain package links (e.g. [[Fonts]]) should be used here}}<br />
<br />
* [[Microsoft fonts|Microsoft Core Fonts]]<br />
* Microsoft Visual C++ Redistributable (the 2017 version has the most coverage and is recommended) [2005, 2008, 2010, 2012, 2013, 2015, 2017, 2019]<br />
* DirectX 9.0 (11.0 has the most coverage, recommended) [June 2010 SDK update], which consists of, to name a few:<br />
** Direct3D<br />
** Direct2D<br />
** DirectShow<br />
** DirectInput<br />
** DirectPlay<br />
** DirectSound<br />
** DXGI<br />
** XAudio2<br />
* .NET Framework (3.5 has most coverage)<br />
* [[OpenGL]]<br />
** OpenAL<br />
** OpenCL<br />
* [[Vulkan]]<br />
<br />
=== Optional (but still common) ===<br />
<br />
* XNA<br />
* PhysX<br />
* Media Foundation<br />
* Quicktime<br />
* Adobe Reader 11<br />
* [[Java]] JRE (e.g. for [[Minecraft]])<br />
<br />
=== Rare (less common) ===<br />
<br />
* Gamespy<br />
* MIDI driver<br />
* ACDSee<br />
<br />
== Machine requirements ==<br />
<br />
It is not enough to just populate a prefix with the dependencies the game will need. The system itself must also be able to service the calls the game will make; as already mentioned, drivers and open alternatives are available for this.<br />
<br />
=== Drivers ===<br />
<br />
* AMD drivers: see [[AMDGPU]].<br />
* Intel drivers: see [[Intel graphics]].<br />
* NVIDIA drivers: see [[NVIDIA]].<br />
<br />
=== Dependency for the machine & substitutes ===<br />
<br />
{{Note|This is mostly informative. Some of these packages are pulled in automatically as dependencies of the major ones; see the example command below the list.}}<br />
<br />
* [[Wine]]<br />
* {{Pkg|wine-gecko}}<br />
* {{Pkg|wine-mono}}<br />
* [[Vulkan]]<br />
* [[OpenGL]]<br />
* [[Proton]] redistributables (optional, but they may help)<br />
* {{AUR|wine-ge-custom}} or [https://github.com/Tk-Glitch/PKGBUILDS/tree/master/wine-tkg-git TKG] (optional, worth trying if vanilla Wine is unsuccessful): specially compiled Wine builds containing patches for certain games.<br />
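<br />
For instance, assuming the [[multilib]] repository is enabled, the core pieces can be installed with the command below (the GPU driver and Vulkan packages depend on your hardware, see the driver links above):<br />
<br />
# pacman -S --needed wine wine-gecko wine-mono winetricks<br />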
<br />
== Game environments ==<br />
<br />
Wine/Proton is not the only approach to playing games. Several different environments exist for playing games on Linux, which together offer just as many (or more) games as Windows:<br />
<br />
* Native – games which have builds targeting the Linux platform, shipping with OpenGL and/or Vulkan graphics API support.<br />
* [[Wikipedia:Emulator|Emulator]]s – required for running software designed for other architectures and systems. Most games run out of the box once the ROM is fed into the emulator, and issues are rarely encountered. For options, see [[Video game platform emulators]].<br />
* [[Java]] – a "write once, run anywhere" platform. Examples of popular games that run on Linux are [[Minecraft]], [[RuneScape|Runescape]], [[Wikipedia:Wurm_Online|Wurm Online]] and [[Wikipedia:Puzzle_Pirates|Puzzle Pirates]].<br />
* Web – games running in a web browser.<br />
** HTML5 games use canvas and WebGL technologies and work in all modern browsers.<br />
** [[Flash]]-based – you need to install the plugin to play.<br />
* [[Wine]] – Windows compatibility layer which allows running Windows applications (and a lot of games) on Unix-like operating systems. Supports DirectX-to-Vulkan translation at runtime with the addition of [[Wine#DXVK]], which improves performance in games that only support DirectX.<br />
* [[Virtual machine]]s – can be used to install compatible operating systems (such as Windows). [[VirtualBox]] has good 3D support. As an extension of this, if you have compatible hardware you can consider VGA passthrough to a Windows KVM guest; the keyword is [https://docs.kernel.org/driver-api/vfio.html "virtual function I/O" (VFIO)], see [[PCI passthrough via OVMF]].<br />
* [https://steamcommunity.com/games/221410/announcements/detail/1696055855739350561 Proton/DXVK] – Fork of Wine designed for use in the proprietary {{Pkg|steam}} platform, enabling better support for games than Wine. See [[Steam#Proton Steam-Play]] for more information. <br />
<br />
== Game compatibility ==<br />
<br />
=== Increase vm.max_map_count ===<br />
<br />
The default {{ic|vm.max_map_count}} limit of 65530 memory maps can be too low for some games [https://www.phoronix.com/news/Fedora-39-VM-Max-Map-Count]. Therefore, increase the limit permanently by creating the [[sysctl]] configuration file:<br />
<br />
{{hc|/etc/sysctl.d/80-gamecompatibility.conf|2=<br />
vm.max_map_count = 2147483642<br />
}}<br />
<br />
Apply the changes without reboot by running:<br />
<br />
# sysctl --system<br />
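<br />
You can then confirm that the new limit is in effect:<br />
<br />
$ sysctl vm.max_map_count<br />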
<br />
{{Note|This can lead to incompatibility with older programs trying to read core dump files [https://github.com/torvalds/linux/blob/v5.18/include/linux/mm.h#L178].}}<br />
<br />
== Getting games ==<br />
<br />
Just because games are available for Linux does not mean that they are native; they might be pre-packaged with [[Wine]] or [[DOSBox]].<br />
<br />
For a list of games packaged for Arch in the [[official repositories]] or the [[AUR]], see [[List of games]].<br />
<br />
* {{App|Athenaeum|A libre replacement to Steam.|https://gitlab.com/librebob/athenaeum|{{AUR|athenaeum-git}}}}<br />
* {{App|Flathub|Central [[Flatpak]] repository, has small but growing game section.|https://flathub.org/apps/category/Game|{{Pkg|flatpak}}, {{Pkg|discover}}, {{Pkg|gnome-software}}}}<br />
* {{App|[[Wikipedia:GOG.com|GOG.com]]|DRM-free game store.|https://www.gog.com|{{AUR|lgogdownloader}}, {{AUR|wyvern}}, {{AUR|minigalaxy}}}}<br />
* {{App|Heroic Games Launcher|A GUI for GOG and legendary, an open-source alternative for the Epic Games Launcher.|https://github.com/Heroic-Games-Launcher/HeroicGamesLauncher|{{AUR|heroic-games-launcher-bin}}}}<br />
* {{App|[[Wikipedia:itch.io|itch.io]]|Indie game store.|https://itch.io|{{AUR|itch-setup-bin}}}}<br />
* {{App|Legendary|A free and open-source replacement for the Epic Games Launcher.|https://github.com/derrod/legendary|{{AUR|legendary}}}}<br />
* {{App|[[Wikipedia:Lutris|Lutris]]|Open gaming platform for Linux. Gets games from GOG, Steam, Battle.net, Origin, Uplay and many other sources. Lutris utilizes various [https://lutris.net/runners runners] to launch the games with fully customizable configuration options.|https://lutris.net|{{Pkg|lutris}}}}<br />
* {{App|Play.it|Automates the build of native packages. Also supports [[Wine]], [[DOSBox]] and ScummVM games.|https://www.dotslashplay.it/|{{AUR|play.it}}}}<br />
* {{App|Rare|Another GUI for legendary, based on PyQt5.|https://github.com/Dummerle/Rare|{{AUR|rare}}}}<br />
* {{App|[[Steam]]|Digital distribution and communications platform developed by Valve.|https://store.steampowered.com|{{Pkg|steam}}}}<br />
<br />
For Wine wrappers, see [[Wine#Third-party applications]].<br />
<br />
== Configuring games ==<br />
<br />
Certain games or game types may need special configuration to run or to run as expected. For the most part, games will work right out of the box in Arch Linux with possibly better performance than on other distributions due to compile time optimizations. However, some special setups may require a bit of configuration or scripting to make games run as smoothly as desired.<br />
<br />
=== Multi-screen setups ===<br />
<br />
Running a multi-screen setup may lead to problems with full-screen games. In such a case, [[#Starting games in a separate X server|running a second X server]] is one possible solution. For NVIDIA users, a solution may be found in [[NVIDIA#Gaming using TwinView]].<br />
<br />
=== Keyboard grabbing ===<br />
<br />
Many games grab the keyboard, notably preventing you from switching windows (also known as alt-tabbing).<br />
<br />
Some SDL games (e.g. Guacamelee) let you disable grabbing by pressing {{ic|Ctrl-g}}.<br />
<br />
{{Note|SDL is known to sometimes not be able to grab the input system. In such a case, it may succeed in grabbing it after a few seconds of waiting.}}<br />
<br />
=== Starting games in a separate X server ===<br />
<br />
In some cases like those mentioned above, it may be necessary or desirable to run a second X server. Running a second X server has multiple advantages, such as better performance, the ability to "tab" out of your game using {{ic|Ctrl+Alt+F7}}/{{ic|Ctrl+Alt+F8}}, and not crashing your primary X session (which may have open work in it) in case a game conflicts with the graphics driver. The new X server is treated like a remote login as far as ALSA is concerned, so your user needs to be part of the {{ic|audio}} group to be able to hear any sound.<br />
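<br />
For example, to add your user to that group (replace ''your_user'' with your username and re-login afterwards):<br />
<br />
# gpasswd -a your_user audio<br />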
<br />
To start a second X server (using the free first person shooter game [https://www.xonotic.org/ Xonotic] as an example) you can simply do: <br />
<br />
$ xinit /usr/bin/xonotic-glx -- :1 vt$XDG_VTNR<br />
<br />
This can further be spiced up by using a separate X configuration file:<br />
<br />
$ xinit /usr/bin/xonotic-glx -- :1 -xf86config xorg-game.conf vt$XDG_VTNR<br />
<br />
A good reason to provide an alternative ''xorg.conf'' here is that your primary configuration may use NVIDIA's TwinView, which would render 3D games like Xonotic in the middle of your multi-screen setup, spanned across all screens. This is undesirable, so starting the second X server with an alternative configuration in which the second screen is disabled is advised. Note that the X configuration file location is relative to the {{ic|/etc/X11}} directory.<br />
<br />
A game-starting script making use of Openbox, placed in your home directory or {{ic|/usr/local/bin}}, may look like this:<br />
<br />
{{hc|~/game.sh|<nowiki><br />
#!/bin/bash<br />
# Usage: ./game.sh game_executable_name<br />
if [ $# -ge 1 ]; then<br />
    game="$(which "$1")"<br />
    openbox="$(which openbox)"<br />
    tmpgame="/tmp/tmpgame.sh"<br />
    # Write a small client script that starts openbox and then the game<br />
    echo -e "#!/bin/bash\n${openbox} &\n${game}" > "${tmpgame}"<br />
    chmod +x "${tmpgame}"<br />
    echo "starting ${game}"<br />
    xinit "${tmpgame}" -- :1 -xf86config xorg-game.conf vt"$XDG_VTNR" || exit 1<br />
else<br />
    echo "not a valid argument"<br />
fi<br />
</nowiki>}}<br />
<br />
After making it [[executable]] you would be able to do:<br />
<br />
$ ~/game.sh xonotic-glx<br />
<br />
{{Note|If you want to avoid loading configs from {{ic|/etc/X11/xorg.conf.d}}, you should also use the {{ic|-configdir}} option, pointing to an empty directory.}}<br />
<br />
=== Adjusting mouse detections ===<br />
<br />
For games that require an exceptional amount of mouse skill, adjusting the [[mouse polling rate]] can help improve accuracy.<br />
<br />
=== Binaural audio with OpenAL ===<br />
<br />
For games using [[Wikipedia:OpenAL|OpenAL]], if you use headphones you may get much better positional audio using OpenAL's [[Wikipedia:Head-related transfer function|HRTF]] filters. To enable, [[create]]:<br />
<br />
{{hc|~/.alsoftrc|2=<br />
hrtf = true<br />
}}<br />
<br />
Alternatively, install {{AUR|openal-hrtf}} from the AUR, and edit the options in {{ic|/etc/openal/alsoftrc.conf}}.<br />
<br />
For Source games, the in-game setting {{ic|dsp_slow_cpu}} must be set to {{ic|1}} to enable HRTF, otherwise the game will use its own processing instead. You will also either need to set up Steam to use the native runtime, or link its copy of openal.so to your own local copy. For completeness, also use the following options:<br />
<br />
dsp_slow_cpu 1 # Disable in-game spatialization<br />
snd_spatialize_roundrobin 1 # Disable spatialization 1.0*100% of sounds<br />
dsp_enhance_stereo 0 # Disable DSP sound effects. You may want to leave this on, if you find it does not interfere with your perception of the sound effects.<br />
snd_pitchquality 1 # Use high quality sounds<br />
<br />
=== Tuning PulseAudio ===<br />
<br />
If you are using [[PulseAudio]], you may wish to tweak some default settings to make sure it is running optimally.<br />
<br />
==== Enabling realtime priority and negative nice level ====<br />
<br />
PulseAudio is built to be run with realtime priority, being an audio daemon. However, because of the security risk of it locking up the system, it is scheduled as a regular thread by default. To adjust this, first make sure you are in the {{ic|audio}} group. Then, uncomment and edit the following lines in {{ic|/etc/pulse/daemon.conf}}:<br />
<br />
{{hc|1=/etc/pulse/daemon.conf|2=<br />
high-priority = yes<br />
nice-level = -11<br />
<br />
realtime-scheduling = yes<br />
realtime-priority = 5}}<br />
<br />
and restart pulseaudio.<br />
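<br />
If PulseAudio is running as a systemd user service (the default on Arch), this can be done with:<br />
<br />
$ systemctl --user restart pulseaudio<br />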
<br />
==== Using higher quality remixing for better sound ====<br />
<br />
PulseAudio on Arch uses {{ic|speex-float-1}} by default to remix channels, which is considered 'medium-low' quality. If your system can handle the extra load, you may benefit from setting the following in {{ic|/etc/pulse/daemon.conf}} instead:<br />
<br />
resample-method = speex-float-10<br />
<br />
==== Matching hardware buffers to Pulse's buffering ====<br />
<br />
Matching the buffers can reduce stuttering and increase performance marginally. See [https://forums.linuxmint.com/viewtopic.php?f=42&t=44862 here] for more details.<br />
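<br />
The relevant options live in {{ic|/etc/pulse/daemon.conf}}. The values below are purely illustrative and should be matched to what your hardware actually reports:<br />
<br />
{{hc|/etc/pulse/daemon.conf|2=<br />
default-fragments = 2<br />
default-fragment-size-msec = 125<br />
}}<br />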
<br />
== Remote gaming ==<br />
<br />
[[Wikipedia:Cloud gaming|Cloud gaming]] has gained a lot of popularity in the last few years because of its low client-side hardware requirements. The only important requirement is a stable internet connection (Ethernet cable or 5 GHz Wi-Fi recommended) with a minimum speed of 5–10 Mbit/s (depending on the video quality and framerate).<br />
<br />
See [[Gamepad#Gamepad over network]] for using a gamepad over a network with services that do not normally support this.<br />
<br />
{{Note|Most of the services that work in a browser are officially compatible only with {{AUR|google-chrome}}.}}<br />
<br />
{| class="wikitable sortable" style="text-align: center;"<br />
! Service<br />
! class="unsortable" | Installer<br />
! In browser client<br />
! Use your own host<br />
! Offers host renting<br />
! Full desktop support<br />
! Controller support<br />
! class="unsortable" | Remarks<br />
|-<br />
| [https://dixper.gg/ Dixper] || {{-}} || {{Yes}} || {{Y|Windows-only}} || ? || ? || ? || {{-}}<br />
|-<br />
| [https://reemo.io/ Reemo] || {{AUR|reemod-bin}} || {{Y|Chromium based only}} || {{Yes}} || {{Yes}} || {{Yes}} || {{Y|Windows-only}} || You can also install the software with the official installation script from the download section of their website.<br />
|-<br />
| [https://xbox.com/play Xbox Cloud] || {{AUR|xbox-cloud-gaming}} || {{Yes}} || {{No}} || {{No}} || {{No}} || {{Yes}} || You need Game Pass Ultimate to be able to use XCloud.<br />
|-<br />
| [[GeForce Now]] || {{-}} || {{Yes}} || {{No}} || {{No}} || {{Yes}} || {{Yes}} || You must have games on Steam, Epic Client or GOG to use this service. Not all games are available.<br />
|-<br />
| [https://moonlight-stream.org/ Moonlight] || {{AUR|moonlight-qt}} || {{No}} || {{Yes}} || {{No}} || {{Yes}} || {{Yes}} || This is only a client. Host machine must use either GeForce Experience (windows only) or [https://github.com/SunshineStream/Sunshine Sunshine] (multiplatform).<br />
|-<br />
| [https://parsec.app/ Parsec] || {{AUR|parsec-bin}} || {{Yes}} (experimental) || {{Y|Windows-only}} || {{No}} || {{Yes}} || {{Yes}} || Cloud hosting [https://support.parsecgaming.com/hc/en-us/articles/360031038112-Cloud-Computer-Update no longer available]<br />
|-<br />
| [https://github.com/mbroemme/vdi-stream-client VDI Stream Client] || {{AUR|vdi-stream-client}} || {{No}} || {{Y|Windows-only}} || {{No}} || {{Yes}} || {{No}} || VDI client with 3D GPU acceleration and built-in USB redirection<br />
|-<br />
| [https://playkey.net/ Playkey] || {{AUR|playkey-linux}} || ? || ? || ? || ? || ? || {{-}}<br />
|-<br />
| style="white-space:nowrap" | [https://www.playstation.com/en-gb/ps-now/ps-now-on-pc/ PlayStation Now] || Runs under [[Wine]] or [[Steam]]'s proton || {{No}} || {{No}} || {{-}} || {{No}} || {{Yes}} || Play PS4, PS3 and PS2 games on PC. Alternatively, you can use [[Video game platform emulators|emulators]].<br />
|-<br />
| style="white-space:nowrap" | [https://www.playstation.com/en-us/remote-play/ PlayStation Remote Play] || {{AUR|chiaki}} || {{No}} || {{Yes}} || {{-}} || {{Yes}} || {{Yes}} || Play games from your PS4 and/or PS5 on PC.<br />
|-<br />
| [https://rainway.com/ Rainway] || Coming in 2019 Q3 || {{Yes}} || {{Y|Windows-only}} || {{No}} || {{Yes}} || ? || {{-}}<br />
|-<br />
| [https://shadow.tech/ Shadow] || '''Stable:''' {{AUR|shadow-tech}} <br> '''Beta''': {{AUR|shadow-beta}} || {{No}} || {{No}} || {{Yes}} || {{Yes}} || {{Yes}} || Controller support is dependent on USB over IP, and currently AVC only as HEVC is not supported<br />
|-<br />
| [[Steam#Steam Remote Play|Steam Remote Play]] || Part of {{pkg|steam}} || {{No}} || {{Yes}} || {{No}} || {{No}} || {{Yes}} || {{-}}<br />
|-<br />
| [https://stadia.google.com Stadia] || {{-}} || {{Yes}} || {{No}} || {{No}} || {{Yes}} || {{Yes}} || Service is shutting down on 18/01/2023<br />
|-<br />
| [https://vortex.gg/ Vortex] || {{-}} || {{Yes}} || {{No}} || {{-}} || {{No}} || ? || {{-}}<br />
|-<br />
| [[VNC]] || {{pkg|tigervnc}} or {{pkg|x11vnc}} || {{No}} || {{Yes}} || {{No}} || {{Yes}} || {{No}} || General purpose remote desktop protocol, but the latency should be low enough to use it for gaming over a LAN. See [[Gamepad#Gamepad over network]] for gamepad support.<br />
|-<br />
| [[xrdp]] || {{AUR|xrdp}} || {{No}} || {{Yes}} || {{No}} || {{Yes}} || {{No}} || Another general purpose remote desktop protocol, has both OpenGL and Vulkan support after configuring [[xrdp#Graphical acceleration|graphical acceleration]]. Recommended for gaming over a LAN. See [[Gamepad#Gamepad over network]] for gamepad support.<br />
|-<br />
| [[X11 forwarding]] || {{pkg|openssh}} || {{No}} || {{Yes}} || {{No}} || {{No}} || {{No}} || X forwarding over SSH with [[VirtualGL]] supports OpenGL and works for some but not all games. See [[Gamepad#Gamepad over network]] for gamepad support.<br />
|-<br />
| [https://boosteroid.com/ Boosteroid] || {{AUR|boosteroid}} || {{Yes}} || {{No}} || {{No}} || {{Yes}} || {{Yes}} || You must have games on a digital distribution platform (Steam, EGS, Origin, etc.) to use this service. Not all games are available. You need to sign up (free) to see the full list of games. You need to purchase a subscription, to launch the games you own on the digital distribution platform.<br />
|-<br />
| [https://www.blacknut.com/ Blacknut] || {{AUR|blacknut-appimage}} or [https://www.blacknut.com/en/download/linux Blacknut AppImage] || {{Yes}} || {{No}} || {{No}} || {{Yes}} || {{Yes}} || You need a subscription to be able to use this service. Not all games are available.<br />
|}<br />
<br />
== Improving performance ==<br />
<br />
See also the main article: [[Improving performance]]. For Wine programs, see [[Wine#Performance]]. For a good gaming experience, low latency, a consistent response time (no jitter) and enough throughput (frames per second) are needed.<br />
If you have multiple sources of small jitter, they are likely to overlap sometimes and produce noticeable stutter. Therefore it is usually preferable to sacrifice a little throughput to gain more response time consistency.<br />
<br />
=== Improve clock_gettime throughput ===<br />
<br />
User space programs, and especially games, make many calls to {{man|2|clock_gettime}} to get the current time for calculating game physics, fps and so on. The time spent in these calls can be seen by running <br />
<br />
# perf top<br />
<br />
and looking at the overhead of read_hpet (or acpi_pm_read).<br />
<br />
If you are not dependent on a very precise timer, you can switch from hpet (High Precision Event Timer) or acpi_pm (ACPI Power Management Timer) to the faster TSC (Time Stamp Counter). Add the [[kernel parameters]] <br />
<br />
tsc=reliable clocksource=tsc<br />
<br />
to make TSC available and enable it. After that reboot and confirm the clocksource by running <br />
<br />
# cat /sys/devices/system/clocksource/clocksource*/current_clocksource<br />
<br />
You can see all currently available timers by running<br />
<br />
# cat /sys/devices/system/clocksource/clocksource*/available_clocksource<br />
<br />
and change between them by echoing one into current_clocksource. On a Zen 3 system benchmarking with [https://gist.github.com/weirddan455/eb807fa48915652abeca3b6421970ab4] shows a ~50 times higher throughput of {{ic|tsc}} compared to {{ic|hpet}} or {{ic|acpi_pm}}.<br />
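<br />
For example, to switch to TSC at runtime (this does not persist across reboots):<br />
<br />
# echo tsc > /sys/devices/system/clocksource/clocksource0/current_clocksource<br />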
<br />
=== Tweaking kernel parameters for response time consistency ===<br />
<br />
You can install the [[realtime kernel]] to get very good response time consistency out of the box, at the price of some CPU throughput. Additionally, the realtime kernel is not compatible with {{Pkg|nvidia-open-dkms}} and does not change the scheduling of SCHED_NORMAL (also named SCHED_OTHER) processes, which is the default process scheduling class. The following kernel parameter changes improve response time consistency even further, for the realtime kernel as well as for other kernels such as the default {{Pkg|linux}} kernel:<br />
<br />
Disable proactive compaction because it introduces jitter according to [https://docs.kernel.org/admin-guide/sysctl/vm.html kernel documentation]:<br />
<br />
# echo 0 > /proc/sys/vm/compaction_proactiveness<br />
<br />
If you have enough free RAM, increase the minimum number of free kilobytes to avoid stalls on memory allocations [http://highscalability.com/blog/2015/4/8/the-black-magic-of-systematically-reducing-linux-os-jitter.html][https://docs.kernel.org/admin-guide/sysctl/vm.html]. Do not set this below 1024 KB or above 5% of your system's memory. Reserving 1 GB:<br />
<br />
# echo 1048576 > /proc/sys/vm/min_free_kbytes<br />
<br />
Avoid swapping (paging out memory introduces latency and uses disk I/O) unless the system has no more free memory:<br />
<br />
# echo 10 > /proc/sys/vm/swappiness<br />
<br />
Enable Multi-Gen Least Recently Used (MGLRU) but reduce the likelihood of lock contention at a minor performance cost [https://docs.kernel.org/admin-guide/mm/multigen_lru.html]:<br />
<br />
# echo 5 > /sys/kernel/mm/lru_gen/enabled<br />
<br />
Disable zone reclaim (locking and moving memory pages that introduces latency spikes):<br />
<br />
# echo 0 > /proc/sys/vm/zone_reclaim_mode<br />
<br />
Disable Transparent Hugepages (THP). Even if defragmentation is disabled, THPs might introduce latency spikes. [https://docs.kernel.org/admin-guide/mm/transhuge.html][https://alexandrnikitin.github.io/blog/transparent-hugepages-measuring-the-performance-impact/] <br />
<br />
{{bc|<br />
# echo never > /sys/kernel/mm/transparent_hugepage/enabled<br />
# echo never > /sys/kernel/mm/transparent_hugepage/shmem_enabled<br />
# echo 0 > /sys/kernel/mm/transparent_hugepage/khugepaged/defrag<br />
}}<br />
<br />
Note that if your game uses TCMalloc (e.g., Dota 2 and CS:GO) then it is not recommended to disable THP as it comes with a large performance cost [https://github.com/google/tcmalloc/blob/master/docs/tuning.md#system-level-optimizations]. Instead if you enable THP, you might also want to enable proactive compaction [https://nitingupta.dev/post/proactive-compaction/]. <br />
<br />
Reduce the maximum page lock acquisition latency while retaining adequate throughput [https://www.phoronix.com/review/linux-59-unfairness][https://openbenchmarking.org/result/2009154-FI-LINUX58CO57&sro][https://www.phoronix.com/review/linux-59-fairness]:<br />
<br />
# echo 1 > /proc/sys/vm/page_lock_unfairness<br />
<br />
Tweak the scheduler settings. The following scheduler settings conflict with {{AUR|cfs-zen-tweaks}}, so for each setting choose only one provider. By default, the Linux kernel scheduler is optimized for throughput and not latency. The following hand-tuned settings change that and have been tested with different games to give a noticeable improvement. They might not be optimal for your use case; consider modifying them as necessary [https://access.redhat.com/solutions/177953][https://doc.opensuse.org/documentation/leap/tuning/html/book-tuning/cha-tuning-taskscheduler.html]: <br />
<br />
{{bc|<br />
# echo 0 > /proc/sys/kernel/sched_child_runs_first<br />
# echo 1 > /proc/sys/kernel/sched_autogroup_enabled<br />
# echo 500 > /proc/sys/kernel/sched_cfs_bandwidth_slice_us<br />
# echo 1000000 > /sys/kernel/debug/sched/latency_ns<br />
# echo 500000 > /sys/kernel/debug/sched/migration_cost_ns<br />
# echo 500000 > /sys/kernel/debug/sched/min_granularity_ns<br />
# echo 0 > /sys/kernel/debug/sched/wakeup_granularity_ns<br />
# echo 8 > /sys/kernel/debug/sched/nr_migrate<br />
}}<br />
<br />
==== Make the changes permanent ====<br />
<br />
Usually, the advice for permanently setting [[kernel parameters]] is to create a [[sysctl]] configuration file or change your [[boot loader]] options. However, since our changes span both procfs ({{ic|/proc}}, containing sysctl) and sysfs ({{ic|/sys}}), the most convenient way is to use [[systemd-tmpfiles]]:<br />
<br />
{{hc|/etc/tmpfiles.d/consistent-response-time-for-gaming.conf|<br />
# Path Mode UID GID Age Argument<br />
w /proc/sys/vm/compaction_proactiveness - - - - 0<br />
w /proc/sys/vm/min_free_kbytes - - - - 1048576<br />
w /proc/sys/vm/swappiness - - - - 10<br />
w /sys/kernel/mm/lru_gen/enabled - - - - 5<br />
w /proc/sys/vm/zone_reclaim_mode - - - - 0<br />
w /sys/kernel/mm/transparent_hugepage/enabled - - - - never<br />
w /sys/kernel/mm/transparent_hugepage/shmem_enabled - - - - never<br />
w /sys/kernel/mm/transparent_hugepage/khugepaged/defrag - - - - 0<br />
w /proc/sys/vm/page_lock_unfairness - - - - 1<br />
w /proc/sys/kernel/sched_child_runs_first - - - - 0<br />
w /proc/sys/kernel/sched_autogroup_enabled - - - - 1<br />
w /proc/sys/kernel/sched_cfs_bandwidth_slice_us - - - - 500<br />
w /sys/kernel/debug/sched/latency_ns - - - - 1000000<br />
w /sys/kernel/debug/sched/migration_cost_ns - - - - 500000<br />
w /sys/kernel/debug/sched/min_granularity_ns - - - - 500000<br />
w /sys/kernel/debug/sched/wakeup_granularity_ns - - - - 0<br />
w /sys/kernel/debug/sched/nr_migrate - - - - 8<br />
}}<br />
<br />
After that, reboot and check that the values were applied correctly.<br />
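<br />
For example, to spot-check a couple of the sysctl-backed values:<br />
<br />
$ sysctl vm.swappiness vm.compaction_proactiveness<br />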
<br />
=== Load shared objects immediately for better first time latency ===<br />
<br />
Set the [[environment variable]]<br />
<br />
LD_BIND_NOW=1<br />
<br />
for your games to make the dynamic linker resolve all symbols at program start instead of on first use (see {{man|8|ld.so}}), avoiding a small delay the first time a function is called. Do not set this for ''startplasma-x11'' or other programs that link in libraries which no longer exist on the system and are never actually called by the program; in that case the program fails on startup trying to bind a nonexistent shared object, making this issue easily identifiable. Most games should start fine with this setting enabled.<br />
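<br />
For example, for a native game started from a shell (the binary name is just a placeholder), or as a [[Steam]] launch option in the form {{ic|1=LD_BIND_NOW=1 %command%}}:<br />
<br />
$ LD_BIND_NOW=1 ./some-native-game<br />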
<br />
=== Utilities ===<br />
<br />
==== Gamemode ====<br />
<br />
[[Gamemode]] is a daemon and library combo that allows games to request that a set of optimisations be temporarily applied to the host OS. This can improve game performance.<br />
<br />
==== Gamescope ====<br />
<br />
[[Gamescope]] is a microcompositor from Valve that is used on the Steam Deck. Its goal is to provide an isolated compositor that is tailored towards gaming and supports many gaming-centric features.<br />
<br />
=== ACO compiler ===<br />
<br />
{{Note|The method shown below '''only''' works on AMD GPUs running the '''[[AMDGPU]]''' drivers.}}<br />
See [[AMDGPU#ACO compiler]].<br />
<br />
=== fsync patch ===<br />
<br />
{{Remove|See target page.}}<br />
<br />
See [[Steam#fsync patch]].<br />
<br />
=== Reducing DRI latency ===<br />
<br />
Direct Rendering Infrastructure (DRI) configuration files apply to all DRI drivers, including Mesa and Nouveau. You can change the DRI configuration system-wide in {{ic|/etc/drirc}} or per user in {{ic|$HOME/.drirc}}. If they do not exist, you have to create them first. Both files use the same syntax; documentation for these options can be found at https://dri.freedesktop.org/wiki/ConfigurationOptions/. To reduce input latency by disabling synchronization to vblank, add the following:<br />
<br />
<driconf><br />
<device><br />
<application name="Default"><br />
<option name="vblank_mode" value="0" /><br />
</application><br />
</device><br />
</driconf><br />
<br />
=== Improving frame rates and responsiveness with scheduling policies ===<br />
<br />
Most games can benefit from the correct scheduling policies, which tell the kernel to prioritize the task. These policies should ideally be set per-thread by the application itself.<br />
<br />
For programs which do not implement scheduling policies on their own, an application known as {{Pkg|schedtool}} and its associated daemon {{AUR|schedtoold}} can handle many of these tasks automatically.<br />
<br />
To control which programs receive which policies, simply edit {{ic|/etc/schedtoold.conf}} and add the program followed by the desired ''schedtool'' arguments.<br />
<br />
==== Policies ====<br />
<br />
{{ic|SCHED_ISO}} (only implemented in the BFS/MuQSS/PDS schedulers found in -pf and -ck [[kernel]]s) – will not only allow the process to use a maximum of 80 percent of the CPU, but will attempt to reduce latency and stuttering wherever possible. Most if not all games will benefit from this:<br />
<br />
bit.trip.runner -I<br />
<br />
{{ic|SCHED_FIFO}} provides an alternative that can work even better. You should test whether your applications run more smoothly with {{ic|SCHED_FIFO}}, in which case by all means use it instead. Be warned though, as {{ic|SCHED_FIFO}} runs the risk of starving the system! Use it in cases where -I is used below:<br />
<br />
bit.trip.runner -F -p 15<br />
<br />
==== Nice levels ====<br />
<br />
Secondly, the nice level sets which tasks are processed first, in ascending order. A nice level of -4 is recommended for most multimedia tasks, including games:<br />
<br />
bit.trip.runner -n -4<br />
<br />
==== Core affinity ====<br />
<br />
There is some confusion among developers as to whether multithreading should be handled by the driver or by the program. Allowing both the driver and the program to multithread simultaneously can result in significant performance reductions, such as framerate loss and an increased risk of crashes. Examples of this include a number of modern games, and any Wine program running with [[Wikipedia:OpenGL Shading Language|GLSL]] enabled. To select a single core and allow only the driver to handle this process, simply use the {{ic|-a 0x''#''}} flag, where ''#'' is the core number, e.g.:<br />
<br />
bit.trip.runner -a 0x1<br />
<br />
which uses the first core.<br />
<br />
Some CPUs are hyperthreaded: they have only 2 or 4 physical cores but show up as 4 or 8, which is best accounted for:<br />
<br />
bit.trip.runner -a 0x5<br />
<br />
which uses virtual cores 0101 in binary, i.e. the first and third logical CPUs.<br />
<br />
==== General case ====<br />
<br />
For most games which require high framerates and low latency, usage of all of these flags seems to work best. Affinity should be checked per-program, however, as most native games can understand the correct usage.<br />
For a general case:<br />
<br />
bit.trip.runner -I -n -4<br />
Amnesia.bin64 -I -n -4<br />
hl2.exe -I -n -4 -a 0x1 #Wine with GLSL enabled<br />
<br />
etc.<br />
<br />
==== Optimus, and other helping programs ====<br />
<br />
As a general rule, any other process that the game requires to operate should be reniced to a level above that of the game itself. Strangely, Wine has a problem known as ''reverse scheduling'': it can often be beneficial to set the more important processes to a higher nice level. Wineserver also seems to benefit unconditionally from {{ic|SCHED_FIFO}}, since it rarely consumes the whole CPU and needs higher prioritization when possible.<br />
<br />
optirun -I -n -5<br />
wineserver -F -p 20 -n 19<br />
steam.exe -I -n -5<br />
<br />
== Peripherals ==<br />
<br />
=== Mouse ===<br />
<br />
You might want to set your [[mouse acceleration]] to control your mouse more accurately.<br />
<br />
If your mouse has more than 3 buttons, you might want to see [[Mouse buttons]].<br />
<br />
If you are using a gaming mouse (especially Logitech and SteelSeries), you may want to configure your mouse's [[mouse polling rate|polling rate]], DPI, LEDs etc. using {{Pkg|piper}}. See [https://github.com/libratbag/libratbag/tree/master/data/devices this page] for a full list of devices supported by piper. Alternatively, use {{Pkg|solaar}} for Logitech devices.<br />
<br />
=== LEDs ===<br />
<br />
You can change the motherboard and RAM lighting with {{AUR|openrgb}}.<br />
<br />
== See also ==<br />
<br />
* [https://www.reddit.com/r/linux_gaming/ /r/linux_gaming] - Forum on reddit.com with gaming on Linux as its topic; subpages: [https://www.reddit.com/r/linux_gaming/wiki/index Wiki], [https://www.reddit.com/r/linux_gaming/wiki/faq FAQ].<br />
* [https://github.com/AdelKS/LinuxGamingGuide Linux Gaming Guide] - A compilation of different techniques for optimizing the Linux gaming experience.</div>
[https://wiki.archlinux.org/index.php?title=QEMU/Guest_graphics_acceleration&diff=732398 QEMU/Guest graphics acceleration], 2022-06-10T19:58:13Z, FoXy: Some hints
<hr />
<div>[[Category:Virtualization]]<br />
[[Category:Emulation]]<br />
{{Related articles start}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related|QEMU}}<br />
{{Related|KVM}}<br />
{{Related articles end}}<br />
There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.<br />
<br />
== Methods for QEMU guest graphics acceleration ==<br />
<br />
=== QXL video driver and SPICE client for display ===<br />
<br />
[[QEMU#qxl|QXL/SPICE]] is a high-performance display method for general desktop use. However, it is not designed to offer near-bare metal 3D performance.<br />
<br />
=== PCI GPU passthrough ===<br />
<br />
==== PCI VGA/GPU passthrough via OVMF ====<br />
<br />
[[PCI_passthrough_via_OVMF|PCI passthrough]] currently seems to be the most popular method for optimal performance. [https://bbs.archlinux.org/viewtopic.php?id=162768&p=1 This forum thread] (now closed, and possibly outdated) may be of interest for problem solving. A KVM switch can be used to switch control between the host and guest desktops.<br />
<br />
==== Single GPU Passthrough ====<br />
<br />
Currently, [[PCI_passthrough_via_OVMF|PCI passthrough]] normally requires two graphics cards (one for the host, one for the guest). However, there is a [https://gitlab.com/risingprismtv/single-gpu-passthrough/-/wikis/home)-Preparations workaround] for passing through a single graphics card. The problem with this approach is that you have to detach the graphics card from the host and use SSH to control the host from the guest. <br />
<br />
When you start the VM, all of your GUI applications will be forcibly terminated. As a workaround, you can use [[Xpra]] to detach them to another display before starting the VM and reattach them after shutting the VM down.<br />
<br />
In case you have an [[NVIDIA]] GPU, you may need to dump your GPU's VBIOS using {{AUR|nvflash}} and patch it using the [https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher vBIOS Patcher].<br />
<br />
==== Looking Glass ====<br />
<br />
There is a fairly recent passthrough method called [https://looking-glass.hostfission.com/ Looking Glass]. See [https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387 this guide to getting started] which provides some problem solving and user support. Looking Glass uses DXGI (MS DirectX Graphics Infrastructure) to pass complete frames captured from the VM's passed-through video card via shared memory to the host system where they are read (scraped) by a display client running on the bare-metal host.<br />
<br />
=== GPU Virtualization ===<br />
<br />
==== LIBVF.IO ====<br />
<br />
[https://github.com/Arc-Compute/libvf.io LibVF.IO] is a virtualization framework (an alternative to libvirt) for simplifying GPU virtualization. It supports Intel (Intel GVT-g, SR-IOV), NVIDIA (NVIDIA vGPU, SR-IOV) and AMD (AMD SR-IOV).<br />
You have to create a YAML configuration for each virtual machine. Currently Intel and NVIDIA GPUs are tested, with limited support for AMD.<br />
You can follow this [https://arccompute.com/blog/libvfio-commodity-gpu-multiplexing/ setup guide]. You can also check their [https://openmdev.io/index.php/Articles wiki].<br />
For NVIDIA GPUs, you need to unlock vGPU, which can be done by installing {{AUR|nvidia-merged-dkms}} or [https://github.com/rupansh/vgpu_unlock_5.12#merged-driver-notes building it yourself] and putting it in LibVF.IO's optional folder.<br />
<br />
There is also LIME (''L''IME ''I''s ''M''ediated ''E''mulation) for executing Windows applications on Linux.<br />
<br />
This framework has been [https://www.youtube.com/watch?v=wqUjukaTqEg tested for gaming]. By default, LibVF.IO uses Looking Glass as the virtual display, but you can change that through the YAML configuration.<br />
<br />
{{Tip|In case you have a Ryzen CPU, you have to enable {{ic|ignore_msrs}} to avoid a Windows BSOD. Always double-check your guest driver version. For NVIDIA GPUs, make sure the nvidia-vgpud and nvidia-vgpu-mgr services are running!}}<br />
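<br />
One way to set {{ic|ignore_msrs}} persistently is through a module option for the {{ic|kvm}} module (the file name under {{ic|/etc/modprobe.d/}} is arbitrary):<br />
<br />
{{hc|/etc/modprobe.d/kvm.conf|2=<br />
options kvm ignore_msrs=1<br />
}}<br />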
<br />
==== NVIDIA vGPU ====<br />
<br />
By default, NVIDIA disables vGPU for its consumer series (if you own an enterprise card, [https://documentation.suse.com/sles/15-SP3/html/SLES-all/article-nvidia-vgpu.html go ahead]). However, you can [https://krutavshah.github.io/GPU_Virtualization-Wiki/overview.html unlock vGPU] for your consumer card.<br />
<br />
You will also need a [https://www.nvidia.com/en-us/data-center/resources/vgpu-evaluation/ vGPU license]; however, there are some [https://github.com/DualCoder/vgpu_unlock/issues/94#issuecomment-1072870857 workarounds].<br />
<br />
Follow [https://github.com/tuh8888/libvirt_win10_vm this guide] to manually setup a Windows 10 guest with NVIDIA VGPU.<br />
<br />
==== SR-IOV ====<br />
<br />
''S''ingle ''R''oot ''I/O'' ''V''irtualization is under development for Intel and for NVIDIA's newer GPU series. There are some AMD GPUs which support this technology, such as the [https://forum.level1techs.com/t/how-to-sr-iov-mod-the-w7100-gpu W7100].<br />
<br />
==== Intel-specific iGVT-g extension ====<br />
<br />
iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (Broadwell and newer). For more information, see [[Intel GVT-g]].<br />
<br />
=== Virgil3d virtio-gpu paravirtualized device driver ===<br />
<br />
[https://docs.mesa3d.org/drivers/virgl.html virtio-gpu] is a paravirtualized 3D-accelerated graphics driver, similar to the [[QEMU#Installing virtio drivers|non-graphics virtio drivers]] (see [https://www.linux-kvm.org/page/Virtio virtio driver information] and [https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio Windows guest drivers]). <br />
For Linux guests, [[QEMU#virtio|virtio-gpu]] is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See [https://www.reddit.com/r/archlinux/comments/7nmceg/kvmqemu_with_virtiogpu_virgl_support_enabled/ this Reddit Arch thread] and [https://www.kraxel.org/blog/2016/09/using-virtio-gpu-with-libvirt-and-spice/ Gerd Hoffmann's blog for using this with libvirt and spice].<br />
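<br />
A minimal invocation sketch for a Linux guest with virgl acceleration (the disk image name is a placeholder; {{ic|1=-display gtk,gl=on}} also works):<br />
<br />
$ qemu-system-x86_64 -enable-kvm -m 4G -vga virtio -display sdl,gl=on disk_image.qcow2<br />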
<br />
For Windows guests, there is very little information on [https://studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html VirtIO-gpu OpenGL drivers], but there is [https://lists.freedesktop.org/archives/virglrenderer-devel/2021-January/001897.html a report that Red Hat abandoned work on them]. Also available are [https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b a project summary], [https://gitlab.com/spice/win32/virtio-gpu-wddm-dod the DOD (Windows kernel) driver] and [https://github.com/Keenuts/virtio-gpu-win-icd the ICD (Windows userland) driver]. In addition, see [https://www.phoronix.com/scan.php?page=news_item&px=QEMU-3D-Windows-Guests this Phoronix article] and its comments.</div>
[https://wiki.archlinux.org/index.php?title=Ryzen&diff=732297 Ryzen], 2022-06-10T08:40:15Z, FoXy: additional solution to freezing.
<hr />
<div>[[Category:CPU]]<br />
[[ja:Ryzen]]<br />
[[zh-hans:Ryzen]]<br />
{{Related articles start}}<br />
{{Related|Improving performance}}<br />
{{Related|Improving performance/Boot process}}<br />
{{Related|Kernel}}<br />
{{Related|Microcode}}<br />
{{Related articles end}}<br />
<br />
== Enable microcode support ==<br />
<br />
[[Install]] the {{Pkg|amd-ucode}} package to get microcode updates, and enable loading them with the help of the [[Microcode]] page. These updates provide bug fixes that can be critical to the stability of your system. It is '''highly recommended''' to use them despite the updates being proprietary.<br />
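<br />
For example:<br />
<br />
# pacman -S amd-ucode<br />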
<br />
== Tweaking Ryzen ==<br />
<br />
=== Voltage, power and temperature monitoring ===<br />
<br />
{{Pkg|lm_sensors}} should be able to monitor temperatures out of the box. However, for more detailed information such as power consumption and voltage, {{AUR|zenpower-dkms}} is needed. For GUI-based monitoring tools, use {{AUR|zenmonitor}}, or {{AUR|zenmonitor3-git}} for Zen 3 CPUs.<br />
<br />
=== Power management, undervolting and overclocking ===<br />
<br />
* {{App|RyzenAdj|RyzenAdj is a command-line tool that can adjust power management settings for Ryzen mobile processors.|https://github.com/FlyGoat/RyzenAdj|{{AUR|ryzenadj-git}}}}<br />
* {{App|Ryzen Controller|Ryzen Controller is a GUI for RyzenAdj.|https://gitlab.com/ryzen-controller-team/ryzen-controller|{{AUR|ryzen-controller-bin}}}}<br />
* {{App|amdctl|amdctl is a command-line tool for under/over clocking/volting AMD CPUs, currently supporting AMD CPU families 10h, 11h, 12h, 15h, 16h, 17h and 19h.|https://github.com/kevinlekiller/amdctl/|{{AUR|amdctl}}}}<br />
* {{App|ZenStates-Linux|ZenStates is a command-line tool to adjust the clock speed and voltage. A detailed setup example is given in [https://forum.level1techs.com/t/overclock-your-ryzen-cpu-from-linux/126025 Level1Techs] forum.|https://github.com/r4m0n/ZenStates-Linux|{{AUR|zenstates-git}}}}<br />
<br />
== Compiling a kernel ==<br />
<br />
See [[Gentoo:Ryzen#Kernel]] on enabling Ryzen support.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Random reboots ===<br />
<br />
See [[Gentoo:Ryzen#Random_reboots_with_mce_events]] if you are experiencing random reboots.<br />
<br />
With Ryzen 5000 series CPUs, particularly the enthusiast models 5950X and 5900X, there seem to be some slight instability issues under Linux, possibly related to the 5.11+ kernel, as shown by [https://bugzilla.kernel.org/show_bug.cgi?id=212087 this kernel bug]. Investigation and reports on the Internet suggest that, out of the box, Windows runs these CPUs at higher voltage and lower peak frequencies than the stock Linux kernel does. Depending on your draw from the silicon lottery, this can cause a host of random application crashes or hardware errors that lead to reboots. You will recognise those by dmesg logs that look like:<br />
<br />
kernel: mce: [Hardware Error]: Machine check events logged<br />
kernel: mce: [Hardware Error]: CPU 22: Machine Check: 0 Bank 1: bc800800060c0859<br />
lightbringer kernel: mce: [Hardware Error]: TSC 0 ADDR 7ea8f5b00 MISC d012000000000000 IPID 100b000000000 <br />
lightbringer kernel: mce: [Hardware Error]: PROCESSOR 2:a20f10 TIME 1636645367 SOCKET 0 APIC d microcode a201016<br />
<br />
The CPU ID and the processor number may vary. To solve this problem, you need to supply higher voltage to your CPU so that it is stable when running at peak frequencies. The easiest way to achieve this is to use the AMD Curve Optimizer, which is accessible via your motherboard's BIOS. Access it and set a positive offset of 4 points, which will increase the voltage your CPU receives at higher loads. This limits overclocking potential due to higher heat dissipation requirements, but it will run stably. For more details, check [https://community.amd.com/t5/processors/ryzen-5900x-system-constantly-crashing-restarting-whea-logger-id/td-p/423321/page/84 this forum post]. After applying this on a 5950X, the processor stabilised and the frequency and voltage ranges were closer to those observed under Windows.<br />
<br />
=== Screen-tearing (APU) ===<br />
<br />
If you are using [[Xorg]] and are experiencing screen-tearing, enabling the {{ic|"TearFree"}} option will fix the problem.<br />
<br />
{{hc|/etc/X11/xorg.conf.d/20-amdgpu.conf|<br />
Section "Device"<br />
Identifier "AMD"<br />
Driver "amdgpu"<br />
Option "TearFree" "true"<br />
EndSection<br />
}}<br />
<br />
{{Note| {{ic|"TearFree"}} is '''not''' Vsync.}}<br />
<br />
=== Soft lock freezing ===<br />
<br />
This bug is well known and is being discussed on [https://bugzilla.kernel.org/show_bug.cgi?id=196683 bugzilla] and [https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1690085 launchpad]. While the solution is not the same in all cases, [https://bugs.launchpad.net/linux/+bug/1690085/comments/69 this] one helped some users. Add the output of the command {{ic|1=<nowiki/>echo rcu_nocbs=0-$(($(nproc)-1))}} as a kernel parameter, where {{ic|nproc}} simply prints the number of CPU threads. For this option to have an effect, the kernel must be compiled with the {{ic|CONFIG_RCU_NOCB_CPU}} option, which {{Pkg|linux}} is not.<br />
<br />
A different cause of the freezes is the power saving managed by C-states. The deepest power saving state, C6, can cause problems. Adding the kernel parameter {{ic|1=<nowiki/>processor.max_cstate=5}} helped in some cases, but other users reported that the option is not applied and the C6 state is still entered. For them, the package {{AUR|disable-c6-systemd}} helped. Before using it, {{ic|modprobe msr}} needs to be run in order to activate that kernel module.<br />
<br />
Some laptops with Ryzen CPUs, such as the HP Envy x360 15-bq100na, may experience CPU soft locks which result in a frozen system. These can be avoided by adding the kernel parameter {{ic|1=idle=nomwait}}.<br />
<br />
In some cases, the kernel parameter {{ic|1=pci=nomsi}} fixes the issue.<br />
<br />
=== Freeze on shutdown, reboot and suspend ===<br />
<br />
{{Note|With the latest AGESA firmware version 1.2.0.2 this problem might no longer occur.}}<br />
<br />
This seems to be related to the C6 C-state, which does not seem to be well supported (if at all) in Linux.<br />
<br />
To fix this issue, go into your BIOS settings for your motherboard and search for an option labeled something like this: "Power idle control". Change its value to "Typical current idle". Note that these names are dependent on what the motherboard manufacturer calls them, so they may be a little different in your particular case.<br />
<br />
Other less ideal solutions include disabling c-states in the BIOS or adding {{ic|1=processor.max_cstate=1}} to your kernel command line arguments.<br />
<br />
== See also ==<br />
<br />
* [[Gentoo:Ryzen]]</div>
<hr />
<div>[[Category:Virtualization]]<br />
[[Category:Emulation]]<br />
{{Related articles start}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related|QEMU}}<br />
{{Related|KVM}}<br />
{{Related articles end}}<br />
There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.<br />
<br />
== Methods for QEMU guest graphics acceleration ==<br />
<br />
=== QXL video driver and SPICE client for display ===<br />
<br />
[[QEMU#qxl|QXL/SPICE]] is a high-performance display method. However, it is not designed to offer near-bare metal performance.<br />
<br />
=== PCI GPU passthrough ===<br />
<br />
==== PCI VGA/GPU passthrough via OVMF ====<br />
<br />
[[PCI_passthrough_via_OVMF|PCI passthrough]] currently seems to be the most popular method for optimal performance. [https://bbs.archlinux.org/viewtopic.php?id=162768&p=1 This forum thread] (now closed, and may be outdated) may be of interest for problem solving. you can use kvm switch to control desktops.<br />
<br />
==== Single GPU Passthrough ====<br />
<br />
Currently [[PCI_passthrough_via_OVMF|PCI passthrough]] works for dual-graphic cards only. However there is a [https://gitlab.com/risingprismtv/single-gpu-passthrough/-/wikis/home)-Preparations workaround] for passing single graphic card. The problem with this approach is you have to deattach graphic-card from the host and use ssh to control the host from the guest. <br />
<br />
When you start the VM all your gui apps will be force terminated however as workaround you can use [[Xpra]] to deattach to another Display before starting VM and reattach the Apps to display after shutting down VM.<br />
<br />
in case you have [[NVIDIA]] GPU, you may need to dump your GPU's vbios using {{AUR|nvflash}} and patch it using [https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher vBIOS Patcher].<br />
<br />
==== Looking Glass ====<br />
<br />
There is a fairly recent passthrough method called [https://looking-glass.hostfission.com/ Looking Glass]. See [https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387 this guide to getting started] which provides some problem solving and user support. Looking Glass uses DXGI (MS DirectX Graphics Infrastructure) to pass complete frames captured from the VM's passed-through video card via shared memory to the host system where they are read (scraped) by a display client running on the bare-metal host.<br />
<br />
=== GPU Virtualization ===<br />
<br />
==== LIBVF.IO ====<br />
<br />
[https://github.com/Arc-Compute/libvf.io LibVF.IO] is a virtualization framework (an alternative to libvirt) that simplifies GPU virtualization. It supports Intel (Intel GVT-g, SR-IOV), NVIDIA (NVIDIA vGPU, SR-IOV) and AMD (AMD SR-IOV) GPUs.<br />
You have to create a YAML configuration for each virtual machine. Currently, Intel and NVIDIA GPUs are tested, with limited support for AMD.<br />
You can follow this [https://arccompute.com/blog/libvfio-commodity-gpu-multiplexing/ setup guide]; you can also check their [https://openmdev.io/index.php/Articles wiki].<br />
For NVIDIA GPUs, you need to unlock vGPU, which can be done by installing {{AUR|nvidia-merged-dkms}} or [https://github.com/rupansh/vgpu_unlock_5.12#merged-driver-notes building it yourself] and putting it in LibVF.IO's optional folder.<br />
<br />
There is also LIME (''LIME Is Mediated Emulation'') for executing Windows applications on Linux.<br />
<br />
This framework was [https://www.youtube.com/watch?v=wqUjukaTqEg tested for gaming]. By default, LibVF.IO uses Looking Glass as the virtual display, but you can change that through the YAML configuration.<br />
<br />
{{Tip|If you have a Ryzen CPU, you have to enable the {{ic|ignore_msrs}} option of the {{ic|kvm}} kernel module to avoid a Windows BSOD.}}<br />
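<br />
A minimal way to do this persistently is a modprobe configuration file (the file name below is arbitrary); this is just a sketch, the option itself being the standard {{ic|kvm.ignore_msrs}} module parameter:<br />
<br />
{{hc|/etc/modprobe.d/kvm.conf|2=<br />
options kvm ignore_msrs=1<br />
}}<br />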
<br />
==== NVIDIA vGPU ====<br />
<br />
By default, NVIDIA disables vGPU on consumer cards (if you own an enterprise card, [https://documentation.suse.com/sles/15-SP3/html/SLES-all/article-nvidia-vgpu.html go ahead]). However, you can [https://krutavshah.github.io/GPU_Virtualization-Wiki/overview.html unlock vGPU] on a consumer card.<br />
<br />
You will also need a [https://www.nvidia.com/en-us/data-center/resources/vgpu-evaluation/ vGPU license]; however, there are some [https://github.com/DualCoder/vgpu_unlock/issues/94#issuecomment-1072870857 workarounds].<br />
<br />
Follow [https://github.com/tuh8888/libvirt_win10_vm this guide] to manually set up a Windows 10 guest with NVIDIA vGPU.<br />
<br />
==== SR-IOV ====<br />
<br />
Single Root I/O Virtualization (SR-IOV) is under development for Intel and newer NVIDIA GPU series. Some AMD GPUs support this technology, such as the [https://forum.level1techs.com/t/how-to-sr-iov-mod-the-w7100-gpu W7100].<br />
<br />
==== Intel-specific iGVT-g extension ====<br />
<br />
iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (Broadwell and newer). For more information, see [[Intel GVT-g]].<br />
<br />
=== Virgil3d virtio-gpu paravirtualized device driver ===<br />
<br />
[https://docs.mesa3d.org/drivers/virgl.html Virgil3d] virtio-gpu is a paravirtualized 3D-accelerated graphics driver, similar to [[QEMU#Installing virtio drivers|non-graphics virtio drivers]] (see [https://www.linux-kvm.org/page/Virtio virtio driver information] and [https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio Windows guest drivers]).<br />
For Linux guests, [[QEMU#virtio|virtio-gpu]] is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See [https://www.reddit.com/r/archlinux/comments/7nmceg/kvmqemu_with_virtiogpu_virgl_support_enabled/ this Reddit Arch thread] and [https://www.kraxel.org/blog/2016/09/using-virtio-gpu-with-libvirt-and-spice/ Gerd Hoffmann's blog for using this with libvirt and spice].<br />
<br />
For Windows guests, there is very little information on [https://studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html VirtIO-gpu OpenGL drivers] but there is [https://lists.freedesktop.org/archives/virglrenderer-devel/2021-January/001897.html a report that Red Hat abandoned work on it]. There is also [https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b a project summary], [https://gitlab.com/spice/win32/virtio-gpu-wddm-dod the DOD (Windows kernel) driver] and [https://github.com/Keenuts/virtio-gpu-win-icd the ICD (Windows userland) driver] are available. In addition, see [https://www.phoronix.com/scan.php?page=news_item&px=QEMU-3D-Windows-Guests this Phoronix article] and its comments.</div>FoXyhttps://wiki.archlinux.org/index.php?title=QEMU/Guest_graphics_acceleration&diff=732030QEMU/Guest graphics acceleration2022-06-08T15:24:33Z<p>FoXy: Wiki for Enterprise Cards.</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[Category:Emulation]]<br />
{{Related articles start}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related|QEMU}}<br />
{{Related|KVM}}<br />
{{Related articles end}}<br />
There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.<br />
<br />
== Methods for QEMU guest graphics acceleration ==<br />
<br />
=== QXL video driver and SPICE client for display ===<br />
<br />
[[QEMU#qxl|QXL/SPICE]] is a high-performance display method. However, it is not designed to offer near-bare metal performance.<br />
<br />
=== PCI GPU passthrough ===<br />
<br />
==== PCI VGA/GPU passthrough via OVMF ====<br />
<br />
[[PCI_passthrough_via_OVMF|PCI passthrough]] currently seems to be the most popular method for optimal performance. [https://bbs.archlinux.org/viewtopic.php?id=162768&p=1 This forum thread] (now closed, and may be outdated) may be of interest for problem solving. You can use a KVM switch to control both the host and guest desktops.<br />
<br />
==== Single GPU Passthrough ====<br />
<br />
Currently, [[PCI_passthrough_via_OVMF|PCI passthrough]] works out of the box only with two graphics cards. However, there is a [https://gitlab.com/risingprismtv/single-gpu-passthrough/-/wikis/home)-Preparations workaround] for passing through a single graphics card. The drawback of this approach is that the graphics card has to be detached from the host, so you have to use SSH to control the host from the guest.<br />
<br />
When you start the VM, all your GUI applications will be forcibly terminated. As a workaround, you can use [[Xpra]] to detach them to another display before starting the VM and reattach them to your display after shutting the VM down.<br />
<br />
If you have an [[NVIDIA]] GPU, you may need to dump your GPU's vBIOS using {{AUR|nvflash}} and patch it using the [https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher vBIOS Patcher].<br />
<br />
==== Looking Glass ====<br />
<br />
There is a fairly recent passthrough method called [https://looking-glass.hostfission.com/ Looking Glass]. See [https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387 this guide to getting started] which provides some problem solving and user support. Looking Glass uses DXGI (MS DirectX Graphics Infrastructure) to pass complete frames captured from the VM's passed-through video card via shared memory to the host system where they are read (scraped) by a display client running on the bare-metal host.<br />
<br />
=== GPU Virtualization ===<br />
<br />
==== LIBVF.IO ====<br />
[https://github.com/Arc-Compute/libvf.io LibVF.IO] is a virtualization framework (an alternative to libvirt) that simplifies GPU virtualization. It supports Intel (Intel GVT-g, SR-IOV), NVIDIA (NVIDIA vGPU, SR-IOV) and AMD (AMD SR-IOV) GPUs.<br />
You have to create a YAML configuration for each virtual machine. Currently, Intel and NVIDIA GPUs are tested, with limited support for AMD.<br />
You can follow this [https://arccompute.com/blog/libvfio-commodity-gpu-multiplexing/ setup guide]; you can also check their [https://openmdev.io/index.php/Articles wiki].<br />
For NVIDIA GPUs, you need to unlock vGPU, which can be done by installing {{AUR|nvidia-merged-dkms}} or [https://github.com/rupansh/vgpu_unlock_5.12#merged-driver-notes building it yourself] and putting it in LibVF.IO's optional folder.<br />
<br />
There is also LIME (''LIME Is Mediated Emulation'') for executing Windows applications on Linux.<br />
<br />
This framework was [https://www.youtube.com/watch?v=wqUjukaTqEg tested for gaming]. By default, LibVF.IO uses Looking Glass as the virtual display, but you can change that through the YAML configuration.<br />
<br />
==== NVIDIA vGPU ====<br />
By default, NVIDIA disables vGPU on consumer cards (if you own an enterprise card, [https://documentation.suse.com/sles/15-SP3/html/SLES-all/article-nvidia-vgpu.html go ahead]). However, you can [https://krutavshah.github.io/GPU_Virtualization-Wiki/overview.html unlock vGPU] on a consumer card.<br />
<br />
You will also need a [https://www.nvidia.com/en-us/data-center/resources/vgpu-evaluation/ vGPU license]; however, there are some [https://github.com/DualCoder/vgpu_unlock/issues/94#issuecomment-1072870857 workarounds].<br />
<br />
Follow [https://github.com/tuh8888/libvirt_win10_vm this guide] to manually set up a Windows 10 guest with NVIDIA vGPU.<br />
<br />
==== SR-IOV ====<br />
Single Root I/O Virtualization (SR-IOV) is under development for Intel and newer NVIDIA GPU series. Some AMD GPUs support this technology, such as the [https://forum.level1techs.com/t/how-to-sr-iov-mod-the-w7100-gpu W7100].<br />
<br />
==== Intel-specific iGVT-g extension ====<br />
iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (starting with 5th generation Intel Core(TM) processors). For more information, see [[Intel GVT-g]].<br />
<br />
=== Virgil3d virtio-gpu paravirtualized device driver ===<br />
<br />
[https://docs.mesa3d.org/drivers/virgl.html] virtio-gpu is a paravirtualized 3d accelerated graphics driver, similar to [[QEMU#Installing virtio drivers|non-graphics virtio drivers]] (see [https://www.linux-kvm.org/page/Virtio virtio driver information] and [https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio Windows guest drivers]). <br />
For Linux guests, [[QEMU#virtio|virtio-gpu]] is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See [https://www.reddit.com/r/archlinux/comments/7nmceg/kvmqemu_with_virtiogpu_virgl_support_enabled/ this Reddit Arch thread] and [https://www.kraxel.org/blog/2016/09/using-virtio-gpu-with-libvirt-and-spice/ Gerd Hoffmann's blog for using this with libvirt and spice].<br />
<br />
For Windows guests, there is very little information on [https://studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html VirtIO-gpu OpenGL drivers] but there is [https://lists.freedesktop.org/archives/virglrenderer-devel/2021-January/001897.html a report that Red Hat abandoned work on it]. There is also [https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b a project summary], [https://gitlab.com/spice/win32/virtio-gpu-wddm-dod the DOD (Windows kernel) driver] and [https://github.com/Keenuts/virtio-gpu-win-icd the ICD (Windows userland) driver] are available. In addition, see [https://www.phoronix.com/scan.php?page=news_item&px=QEMU-3D-Windows-Guests this Phoronix article] and its comments.</div>FoXyhttps://wiki.archlinux.org/index.php?title=QEMU/Guest_graphics_acceleration&diff=732029QEMU/Guest graphics acceleration2022-06-08T14:18:33Z<p>FoXy: Broken link fix.</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[Category:Emulation]]<br />
{{Related articles start}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related|QEMU}}<br />
{{Related|KVM}}<br />
{{Related articles end}}<br />
There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.<br />
<br />
== Methods for QEMU guest graphics acceleration ==<br />
<br />
=== QXL video driver and SPICE client for display ===<br />
<br />
[[QEMU#qxl|QXL/SPICE]] is a high-performance display method. However, it is not designed to offer near-bare metal performance.<br />
<br />
=== PCI GPU passthrough ===<br />
<br />
==== PCI VGA/GPU passthrough via OVMF ====<br />
<br />
[[PCI_passthrough_via_OVMF|PCI passthrough]] currently seems to be the most popular method for optimal performance. [https://bbs.archlinux.org/viewtopic.php?id=162768&p=1 This forum thread] (now closed, and may be outdated) may be of interest for problem solving. You can use a KVM switch to control both the host and guest desktops.<br />
<br />
==== Single GPU Passthrough ====<br />
<br />
Currently, [[PCI_passthrough_via_OVMF|PCI passthrough]] works out of the box only with two graphics cards. However, there is a [https://gitlab.com/risingprismtv/single-gpu-passthrough/-/wikis/home)-Preparations workaround] for passing through a single graphics card. The drawback of this approach is that the graphics card has to be detached from the host, so you have to use SSH to control the host from the guest.<br />
<br />
When you start the VM, all your GUI applications will be forcibly terminated. As a workaround, you can use [[Xpra]] to detach them to another display before starting the VM and reattach them to your display after shutting the VM down.<br />
<br />
If you have an [[NVIDIA]] GPU, you may need to dump your GPU's vBIOS using {{AUR|nvflash}} and patch it using the [https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher vBIOS Patcher].<br />
<br />
==== Looking Glass ====<br />
<br />
There is a fairly recent passthrough method called [https://looking-glass.hostfission.com/ Looking Glass]. See [https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387 this guide to getting started] which provides some problem solving and user support. Looking Glass uses DXGI (MS DirectX Graphics Infrastructure) to pass complete frames captured from the VM's passed-through video card via shared memory to the host system where they are read (scraped) by a display client running on the bare-metal host.<br />
<br />
=== GPU Virtualization ===<br />
<br />
==== LIBVF.IO ====<br />
[https://github.com/Arc-Compute/libvf.io LibVF.IO] is a virtualization framework (an alternative to libvirt) that simplifies GPU virtualization. It supports Intel (Intel GVT-g, SR-IOV), NVIDIA (NVIDIA vGPU, SR-IOV) and AMD (AMD SR-IOV) GPUs.<br />
You have to create a YAML configuration for each virtual machine. Currently, Intel and NVIDIA GPUs are tested, with limited support for AMD.<br />
You can follow this [https://arccompute.com/blog/libvfio-commodity-gpu-multiplexing/ setup guide]; you can also check their [https://openmdev.io/index.php/Articles wiki].<br />
For NVIDIA GPUs, you need to unlock vGPU, which can be done by installing {{AUR|nvidia-merged-dkms}} or [https://github.com/rupansh/vgpu_unlock_5.12#merged-driver-notes building it yourself] and putting it in LibVF.IO's optional folder.<br />
<br />
There is also LIME (''LIME Is Mediated Emulation'') for executing Windows applications on Linux.<br />
<br />
This framework was [https://www.youtube.com/watch?v=wqUjukaTqEg tested for gaming]. By default, LibVF.IO uses Looking Glass as the virtual display, but you can change that through the YAML configuration.<br />
<br />
==== NVIDIA vGPU ====<br />
By default, NVIDIA disables vGPU on consumer cards; however, you can manually [https://github.com/DualCoder/vgpu_unlock unlock vGPU]. See their [https://krutavshah.github.io/GPU_Virtualization-Wiki/overview.html wiki].<br />
<br />
You will also need a [https://www.nvidia.com/en-us/data-center/resources/vgpu-evaluation/ vGPU license]; however, there are some [https://github.com/DualCoder/vgpu_unlock/issues/94#issuecomment-1072870857 workarounds].<br />
<br />
Follow [https://github.com/tuh8888/libvirt_win10_vm this guide] to manually set up a Windows 10 guest with NVIDIA vGPU.<br />
<br />
==== SR-IOV ====<br />
Single Root I/O Virtualization (SR-IOV) is under development for Intel and newer NVIDIA GPU series. Some AMD GPUs support this technology, such as the [https://forum.level1techs.com/t/how-to-sr-iov-mod-the-w7100-gpu W7100].<br />
<br />
==== Intel-specific iGVT-g extension ====<br />
iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (starting with 5th generation Intel Core(TM) processors). For more information, see [[Intel GVT-g]].<br />
<br />
=== Virgil3d virtio-gpu paravirtualized device driver ===<br />
<br />
[https://docs.mesa3d.org/drivers/virgl.html] virtio-gpu is a paravirtualized 3d accelerated graphics driver, similar to [[QEMU#Installing virtio drivers|non-graphics virtio drivers]] (see [https://www.linux-kvm.org/page/Virtio virtio driver information] and [https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio Windows guest drivers]). <br />
For Linux guests, [[QEMU#virtio|virtio-gpu]] is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See [https://www.reddit.com/r/archlinux/comments/7nmceg/kvmqemu_with_virtiogpu_virgl_support_enabled/ this Reddit Arch thread] and [https://www.kraxel.org/blog/2016/09/using-virtio-gpu-with-libvirt-and-spice/ Gerd Hoffmann's blog for using this with libvirt and spice].<br />
<br />
For Windows guests, there is very little information on [https://studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html VirtIO-gpu OpenGL drivers] but there is [https://lists.freedesktop.org/archives/virglrenderer-devel/2021-January/001897.html a report that Red Hat abandoned work on it]. There is also [https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b a project summary], [https://gitlab.com/spice/win32/virtio-gpu-wddm-dod the DOD (Windows kernel) driver] and [https://github.com/Keenuts/virtio-gpu-win-icd the ICD (Windows userland) driver] are available. In addition, see [https://www.phoronix.com/scan.php?page=news_item&px=QEMU-3D-Windows-Guests this Phoronix article] and its comments.</div>FoXyhttps://wiki.archlinux.org/index.php?title=QEMU/Guest_graphics_acceleration&diff=732028QEMU/Guest graphics acceleration2022-06-08T14:16:55Z<p>FoXy: Add VGPU Wiki Link.</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[Category:Emulation]]<br />
{{Related articles start}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related|QEMU}}<br />
{{Related|KVM}}<br />
{{Related articles end}}<br />
There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.<br />
<br />
== Methods for QEMU guest graphics acceleration ==<br />
<br />
=== QXL video driver and SPICE client for display ===<br />
<br />
[[QEMU#qxl|QXL/SPICE]] is a high-performance display method. However, it is not designed to offer near-bare metal performance.<br />
<br />
=== PCI GPU passthrough ===<br />
<br />
==== PCI VGA/GPU passthrough via OVMF ====<br />
<br />
[[PCI_passthrough_via_OVMF|PCI passthrough]] currently seems to be the most popular method for optimal performance. [https://bbs.archlinux.org/viewtopic.php?id=162768&p=1 This forum thread] (now closed, and may be outdated) may be of interest for problem solving. You can use a KVM switch to control both the host and guest desktops.<br />
<br />
==== Single GPU Passthrough ====<br />
<br />
Currently, [[PCI_passthrough_via_OVMF|PCI passthrough]] works out of the box only with two graphics cards. However, there is a [https://gitlab.com/risingprismtv/single-gpu-passthrough/-/wikis/home)-Preparations workaround] for passing through a single graphics card. The drawback of this approach is that the graphics card has to be detached from the host, so you have to use SSH to control the host from the guest.<br />
<br />
When you start the VM, all your GUI applications will be forcibly terminated. As a workaround, you can use [[Xpra]] to detach them to another display before starting the VM and reattach them to your display after shutting the VM down.<br />
<br />
If you have an [[NVIDIA]] GPU, you may need to dump your GPU's vBIOS using {{AUR|nvflash}} and patch it using the [https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher vBIOS Patcher].<br />
<br />
==== Looking Glass ====<br />
<br />
There is a fairly recent passthrough method called [https://looking-glass.hostfission.com/ Looking Glass]. See [https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387 this guide to getting started] which provides some problem solving and user support. Looking Glass uses DXGI (MS DirectX Graphics Infrastructure) to pass complete frames captured from the VM's passed-through video card via shared memory to the host system where they are read (scraped) by a display client running on the bare-metal host.<br />
<br />
=== GPU Virtualization ===<br />
<br />
==== LIBVF.IO ====<br />
[https://github.com/Arc-Compute/libvf.io LibVF.IO] is a virtualization framework (an alternative to libvirt) that simplifies GPU virtualization. It supports Intel (Intel GVT-g, SR-IOV), NVIDIA (NVIDIA vGPU, SR-IOV) and AMD (AMD SR-IOV) GPUs.<br />
You have to create a YAML configuration for each virtual machine. Currently, Intel and NVIDIA GPUs are tested, with limited support for AMD.<br />
You can follow this [https://arccompute.com/blog/libvfio-commodity-gpu-multiplexing/ setup guide]; you can also check their [https://openmdev.io/index.php/Articles wiki].<br />
For NVIDIA GPUs, you need to unlock vGPU, which can be done by installing {{AUR|nvidia-merged-dkms}} or [https://github.com/rupansh/vgpu_unlock_5.12#merged-driver-notes building it yourself] and putting it in LibVF.IO's optional folder.<br />
<br />
There is also LIME (''LIME Is Mediated Emulation'') for executing Windows applications on Linux.<br />
<br />
This framework was [https://www.youtube.com/watch?v=wqUjukaTqEg tested for gaming]. By default, LibVF.IO uses Looking Glass as the virtual display, but you can change that through the YAML configuration.<br />
<br />
==== NVIDIA vGPU ====<br />
By default, NVIDIA disables vGPU on consumer cards; however, you can manually [https://github.com/DualCoder/vgpu_unlock unlock vGPU]. See their [https://krutavshah.github.io/GPU_VirtualizationWiki/overview.html#system-requirements wiki].<br />
<br />
You will also need a [https://www.nvidia.com/en-us/data-center/resources/vgpu-evaluation/ vGPU license]; however, there are some [https://github.com/DualCoder/vgpu_unlock/issues/94#issuecomment-1072870857 workarounds].<br />
<br />
Follow [https://github.com/tuh8888/libvirt_win10_vm this guide] to manually set up a Windows 10 guest with NVIDIA vGPU.<br />
<br />
==== SR-IOV ====<br />
Single Root I/O Virtualization (SR-IOV) is under development for Intel and newer NVIDIA GPU series. Some AMD GPUs support this technology, such as the [https://forum.level1techs.com/t/how-to-sr-iov-mod-the-w7100-gpu W7100].<br />
<br />
==== Intel-specific iGVT-g extension ====<br />
iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (starting with 5th generation Intel Core(TM) processors). For more information, see [[Intel GVT-g]].<br />
<br />
=== Virgil3d virtio-gpu paravirtualized device driver ===<br />
<br />
[https://docs.mesa3d.org/drivers/virgl.html] virtio-gpu is a paravirtualized 3d accelerated graphics driver, similar to [[QEMU#Installing virtio drivers|non-graphics virtio drivers]] (see [https://www.linux-kvm.org/page/Virtio virtio driver information] and [https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio Windows guest drivers]). <br />
For Linux guests, [[QEMU#virtio|virtio-gpu]] is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See [https://www.reddit.com/r/archlinux/comments/7nmceg/kvmqemu_with_virtiogpu_virgl_support_enabled/ this Reddit Arch thread] and [https://www.kraxel.org/blog/2016/09/using-virtio-gpu-with-libvirt-and-spice/ Gerd Hoffmann's blog for using this with libvirt and spice].<br />
<br />
For Windows guests, there is very little information on [https://studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html VirtIO-gpu OpenGL drivers] but there is [https://lists.freedesktop.org/archives/virglrenderer-devel/2021-January/001897.html a report that Red Hat abandoned work on it]. There is also [https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b a project summary], [https://gitlab.com/spice/win32/virtio-gpu-wddm-dod the DOD (Windows kernel) driver] and [https://github.com/Keenuts/virtio-gpu-win-icd the ICD (Windows userland) driver] are available. In addition, see [https://www.phoronix.com/scan.php?page=news_item&px=QEMU-3D-Windows-Guests this Phoronix article] and its comments.</div>FoXyhttps://wiki.archlinux.org/index.php?title=QEMU/Guest_graphics_acceleration&diff=732024QEMU/Guest graphics acceleration2022-06-08T13:32:41Z<p>FoXy: Fix Sectioning</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[Category:Emulation]]<br />
{{Related articles start}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related|QEMU}}<br />
{{Related|KVM}}<br />
{{Related articles end}}<br />
There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.<br />
<br />
== Methods for QEMU guest graphics acceleration ==<br />
<br />
=== QXL video driver and SPICE client for display ===<br />
<br />
[[QEMU#qxl|QXL/SPICE]] is a high-performance display method. However, it is not designed to offer near-bare metal performance.<br />
<br />
=== PCI GPU passthrough ===<br />
<br />
==== PCI VGA/GPU passthrough via OVMF ====<br />
<br />
[[PCI_passthrough_via_OVMF|PCI passthrough]] currently seems to be the most popular method for optimal performance. [https://bbs.archlinux.org/viewtopic.php?id=162768&p=1 This forum thread] (now closed, and may be outdated) may be of interest for problem solving. You can use a KVM switch to control both the host and guest desktops.<br />
<br />
==== Single GPU Passthrough ====<br />
<br />
Currently, [[PCI_passthrough_via_OVMF|PCI passthrough]] works out of the box only with two graphics cards. However, there is a [https://gitlab.com/risingprismtv/single-gpu-passthrough/-/wikis/home)-Preparations workaround] for passing through a single graphics card. The drawback of this approach is that the graphics card has to be detached from the host, so you have to use SSH to control the host from the guest.<br />
<br />
When you start the VM, all your GUI applications will be forcibly terminated. As a workaround, you can use [[Xpra]] to detach them to another display before starting the VM and reattach them to your display after shutting the VM down.<br />
<br />
If you have an [[NVIDIA]] GPU, you may need to dump your GPU's vBIOS using {{AUR|nvflash}} and patch it using the [https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher vBIOS Patcher].<br />
<br />
==== Looking Glass ====<br />
<br />
There is a fairly recent passthrough method called [https://looking-glass.hostfission.com/ Looking Glass]. See [https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387 this guide to getting started] which provides some problem solving and user support. Looking Glass uses DXGI (MS DirectX Graphics Infrastructure) to pass complete frames captured from the VM's passed-through video card via shared memory to the host system where they are read (scraped) by a display client running on the bare-metal host.<br />
<br />
=== GPU Virtualization ===<br />
<br />
==== LIBVF.IO ====<br />
[https://github.com/Arc-Compute/libvf.io LibVF.IO] is a virtualization framework (an alternative to libvirt) that simplifies GPU virtualization. It supports Intel (Intel GVT-g, SR-IOV), NVIDIA (NVIDIA vGPU, SR-IOV) and AMD (AMD SR-IOV) GPUs.<br />
You have to create a YAML configuration for each virtual machine. Currently, Intel and NVIDIA GPUs are tested, with limited support for AMD.<br />
You can follow this [https://arccompute.com/blog/libvfio-commodity-gpu-multiplexing/ setup guide]; you can also check their [https://openmdev.io/index.php/Articles wiki].<br />
<br />
For NVIDIA GPUs, you need to unlock vGPU, which can be done by installing {{AUR|nvidia-merged-dkms}} or [https://github.com/rupansh/vgpu_unlock_5.12#merged-driver-notes building it yourself] and putting it in LibVF.IO's optional folder.<br />
<br />
There is also LIME (''LIME Is Mediated Emulation'') for executing Windows applications on Linux.<br />
<br />
This framework was [https://www.youtube.com/watch?v=wqUjukaTqEg tested for gaming]. By default, LibVF.IO uses Looking Glass as the virtual display, but you can change that through the YAML configuration.<br />
<br />
==== NVIDIA vGPU ====<br />
By default, NVIDIA disables vGPU on consumer cards; however, you can manually [https://github.com/DualCoder/vgpu_unlock unlock vGPU].<br />
You will also need a [https://www.nvidia.com/en-us/data-center/resources/vgpu-evaluation/ vGPU license]; however, there are some [https://github.com/DualCoder/vgpu_unlock/issues/94#issuecomment-1072870857 workarounds].<br />
<br />
Follow [https://github.com/tuh8888/libvirt_win10_vm this guide] to manually set up a Windows 10 guest with NVIDIA vGPU.<br />
<br />
==== SR-IOV ====<br />
Single Root I/O Virtualization (SR-IOV) is under development for Intel and newer NVIDIA GPU series. Some AMD GPUs support this technology, such as the [https://forum.level1techs.com/t/how-to-sr-iov-mod-the-w7100-gpu W7100].<br />
<br />
==== Intel-specific iGVT-g extension ====<br />
iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (starting with 5th generation Intel Core(TM) processors). For more information, see [[Intel GVT-g]].<br />
<br />
=== Virgil3d virtio-gpu paravirtualized device driver ===<br />
<br />
[https://docs.mesa3d.org/drivers/virgl.html] virtio-gpu is a paravirtualized 3d accelerated graphics driver, similar to [[QEMU#Installing virtio drivers|non-graphics virtio drivers]] (see [https://www.linux-kvm.org/page/Virtio virtio driver information] and [https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio Windows guest drivers]). <br />
For Linux guests, [[QEMU#virtio|virtio-gpu]] is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See [https://www.reddit.com/r/archlinux/comments/7nmceg/kvmqemu_with_virtiogpu_virgl_support_enabled/ this Reddit Arch thread] and [https://www.kraxel.org/blog/2016/09/using-virtio-gpu-with-libvirt-and-spice/ Gerd Hoffmann's blog for using this with libvirt and spice].<br />
<br />
For Windows guests, there is very little information on [https://studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html VirtIO-gpu OpenGL drivers] but there is [https://lists.freedesktop.org/archives/virglrenderer-devel/2021-January/001897.html a report that Red Hat abandoned work on it]. There is also [https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b a project summary], [https://gitlab.com/spice/win32/virtio-gpu-wddm-dod the DOD (Windows kernel) driver] and [https://github.com/Keenuts/virtio-gpu-win-icd the ICD (Windows userland) driver] are available. In addition, see [https://www.phoronix.com/scan.php?page=news_item&px=QEMU-3D-Windows-Guests this Phoronix article] and its comments.</div>FoXyhttps://wiki.archlinux.org/index.php?title=PCI_passthrough_via_OVMF&diff=732015PCI passthrough via OVMF2022-06-08T11:23:00Z<p>FoXy: add related article</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[ja:OVMF による PCI パススルー]]<br />
[[zh-hans:PCI passthrough via OVMF]]<br />
{{Related articles start}}<br />
{{Related|Intel GVT-g}}<br />
{{Related|PCI passthrough via OVMF/Examples}}<br />
{{Related|QEMU/Guest graphics acceleration}}<br />
{{Related articles end}}<br />
<br />
The Open Virtual Machine Firmware ([https://github.com/tianocore/tianocore.github.io/wiki/OVMF OVMF]) is a project to enable UEFI support for virtual machines. Starting with Linux 3.9 and recent versions of [[QEMU]], it is now possible to pass through a graphics card, offering the virtual machine native graphics performance, which is useful for graphics-intensive tasks.<br />
<br />
Provided you have a desktop computer with a spare GPU you can dedicate to the host (be it an integrated GPU or an old OEM card, the brands do not even need to match) and that your hardware supports it (see [[#Prerequisites]]), it is possible to have a virtual machine of any OS with its own dedicated GPU and near-native performance. For more information on techniques see the background [https://www.linux-kvm.org/images/b/b3/01x09b-VFIOandYou-small.pdf presentation (pdf)].<br />
<br />
== Prerequisites ==<br />
<br />
A VGA passthrough relies on a number of technologies that are not ubiquitous as of today and might not be available on your hardware. You will not be able to do this on your machine unless the following requirements are met:<br />
<br />
* Your CPU must support hardware virtualization (for kvm) and IOMMU (for the passthrough itself)<br />
** [https://ark.intel.com/Search/FeatureFilter?productType=873&0_VTD=True List of compatible Intel CPUs (Intel VT-x and Intel VT-d)]<br />
** All AMD CPUs from the Bulldozer generation and up (including Zen) should be compatible.<br />
*** CPUs from the K10 generation (2007) do not have an IOMMU, so you '''need''' to have a motherboard with a [https://support.amd.com/TechDocs/43403.pdf#page=18 890FX] or [https://support.amd.com/TechDocs/48691.pdf#page=21 990FX] chipset to make it work, as those have their own IOMMU.<br />
* Your motherboard must also support IOMMU<br />
** Both the chipset and the BIOS must support it. It is not always easy to tell at a glance whether or not this is the case, but there is a fairly comprehensive list on the matter on the [https://wiki.xen.org/wiki/VTd_HowTo Xen wiki] as well as [[Wikipedia:List of IOMMU-supporting hardware]].<br />
* Your guest GPU ROM must support UEFI.<br />
** If you can find [https://www.techpowerup.com/vgabios/ any ROM in this list] that applies to your specific GPU and is said to support UEFI, you are generally in the clear. All GPUs from 2012 and later should support this, as Microsoft made UEFI a requirement for devices to be marketed as compatible with Windows 8.<br />
<br />
You will probably want to have a spare monitor or one with multiple input ports connected to different GPUs (the passthrough GPU will not display anything if there is no screen plugged in and using a VNC or Spice connection will not help your performance), as well as a mouse and a keyboard you can pass to your virtual machine. If anything goes wrong, you will at least have a way to control your host machine this way.<br />
<br />
== Setting up IOMMU ==<br />
<br />
{{Note|<br />
* IOMMU is a generic name for Intel VT-d and AMD-Vi.<br />
* VT-d stands for ''Intel Virtualization Technology for Directed I/O'' and should not be confused with VT-x ''Intel Virtualization Technology''. VT-x allows one hardware platform to function as multiple “virtual” platforms while VT-d improves security and reliability of the systems and also improves performance of I/O devices in virtualized environments.<br />
}}<br />
<br />
Using IOMMU opens to features like PCI passthrough and memory protection from faulty or malicious devices, see [[Wikipedia:Input-output memory management unit#Advantages]] and [https://www.quora.com/Memory-Management-computer-programming/Could-you-explain-IOMMU-in-plain-English Memory Management (computer programming): Could you explain IOMMU in plain English?].<br />
<br />
=== Enabling IOMMU ===<br />
<br />
Ensure that AMD-Vi/Intel VT-d is supported by the CPU and enabled in the BIOS settings. Both normally show up alongside other CPU features (meaning they could be in an overclocking-related menu) either with their actual names ("VT-d" or "AMD-Vi") or in more ambiguous terms such as "Virtualization technology", which may or may not be explained in the manual.<br />
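<br />
As a quick host-side sanity check, you can verify that the CPU at least advertises hardware virtualization (VT-x/AMD-V) by counting the relevant CPU flags; a result greater than zero means the feature is present. Note that this does not check VT-d/AMD-Vi itself, which is verified via ''dmesg'' further below:<br />
<br />
 $ grep -Ec '(vmx|svm)' /proc/cpuinfo<br />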
<br />
Manually enable IOMMU support by setting the correct [[kernel parameter]] depending on the type of CPU in use:<br />
<br />
* For Intel CPUs (VT-d), set {{ic|1=intel_iommu=on}}, since the kernel config option {{ic|CONFIG_INTEL_IOMMU_DEFAULT_ON}} is not set in {{Pkg|linux}}.<br />
* For AMD CPUs (AMD-Vi), IOMMU support is enabled automatically if the kernel detects IOMMU hardware support from the BIOS.<br />
<br />
You should also append the {{ic|1=iommu=pt}} parameter. This will prevent Linux from touching devices which cannot be passed through.<br />
<br />
After rebooting, check [[dmesg]] to confirm that IOMMU has been correctly enabled:<br />
<br />
{{hc|# dmesg {{!}} grep -i -e DMAR -e IOMMU|<br />
[ 0.000000] ACPI: DMAR 0x00000000BDCB1CB0 0000B8 (v01 INTEL BDW 00000001 INTL 00000001)<br />
[ 0.000000] Intel-IOMMU: enabled<br />
[ 0.028879] dmar: IOMMU 0: reg_base_addr fed90000 ver 1:0 cap c0000020660462 ecap f0101a<br />
[ 0.028883] dmar: IOMMU 1: reg_base_addr fed91000 ver 1:0 cap d2008c20660462 ecap f010da<br />
[ 0.028950] IOAPIC id 8 under DRHD base 0xfed91000 IOMMU 1<br />
[ 0.536212] DMAR: No ATSR found<br />
[ 0.536229] IOMMU 0 0xfed90000: using Queued invalidation<br />
[ 0.536230] IOMMU 1 0xfed91000: using Queued invalidation<br />
[ 0.536231] IOMMU: Setting RMRR:<br />
[ 0.536241] IOMMU: Setting identity map for device 0000:00:02.0 [0xbf000000 - 0xcf1fffff]<br />
[ 0.537490] IOMMU: Setting identity map for device 0000:00:14.0 [0xbdea8000 - 0xbdeb6fff]<br />
[ 0.537512] IOMMU: Setting identity map for device 0000:00:1a.0 [0xbdea8000 - 0xbdeb6fff]<br />
[ 0.537530] IOMMU: Setting identity map for device 0000:00:1d.0 [0xbdea8000 - 0xbdeb6fff]<br />
[ 0.537543] IOMMU: Prepare 0-16MiB unity mapping for LPC<br />
[ 0.537549] IOMMU: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]<br />
[ 2.182790] [drm] DMAR active, disabling use of stolen memory<br />
}}<br />
<br />
=== Ensuring that the groups are valid ===<br />
<br />
The following script should allow you to see how your various PCI devices are mapped to IOMMU groups. If it does not return anything, you either have not enabled IOMMU support properly or your hardware does not support it.<br />
<br />
#!/bin/bash<br />
shopt -s nullglob<br />
for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do<br />
echo "IOMMU Group ${g##*/}:"<br />
for d in $g/devices/*; do<br />
echo -e "\t$(lspci -nns ${d##*/})"<br />
done;<br />
done;<br />
<br />
Example output:<br />
<br />
IOMMU Group 1:<br />
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port [8086:0151] (rev 09)<br />
IOMMU Group 2:<br />
00:14.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller [8086:0e31] (rev 04)<br />
IOMMU Group 4:<br />
00:1a.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 [8086:0e2d] (rev 04)<br />
IOMMU Group 10:<br />
00:1d.0 USB controller: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 [8086:0e26] (rev 04)<br />
IOMMU Group 13:<br />
06:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)<br />
06:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)<br />
<br />
An IOMMU group is the smallest set of physical devices that can be passed to a virtual machine. For instance, in the example above, both the GPU in 06:00.0 and its audio controller in 6:00.1 belong to IOMMU group 13 and can only be passed together. The frontal USB controller, however, has its own group (group 2) which is separate from both the USB expansion controller (group 10) and the rear USB controller (group 4), meaning that [[#USB controller|any of them could be passed to a virtual machine without affecting the others]].<br />
<br />
=== Gotchas ===<br />
<br />
==== Plugging your guest GPU in an unisolated CPU-based PCIe slot ====<br />
<br />
Not all PCI-E slots are the same. Most motherboards have PCIe slots provided by both the CPU and the PCH. Depending on your CPU, it is possible that your processor-based PCIe slot does not support isolation properly, in which case the PCI slot itself will appear to be grouped with the device that is connected to it.<br />
<br />
IOMMU Group 1:<br />
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port (rev 09)<br />
01:00.0 VGA compatible controller: NVIDIA Corporation GM107 [GeForce GTX 750] (rev a2)<br />
01:00.1 Audio device: NVIDIA Corporation Device 0fbc (rev a1)<br />
<br />
This is fine so long as only your guest GPU is included in here, such as above. Depending on what is plugged in to your other PCIe slots and whether they are allocated to your CPU or your PCH, you may find yourself with additional devices within the same group, which would force you to pass those as well. If you are OK with passing everything that is in there to your virtual machine, you are free to continue. Otherwise, you will either need to try and plug your GPU into your other PCIe slots (if you have any) and see if those provide isolation from the rest, or install the ACS override patch, which comes with its own drawbacks. See [[#Bypassing the IOMMU groups (ACS override patch)]] for more information.<br />
<br />
{{Note|If they are grouped with other devices in this manner, PCI root ports and bridges should neither be bound to vfio at boot, nor be added to the virtual machine.}}<br />
<br />
== Isolating the GPU ==<br />
<br />
In order to assign a device to a virtual machine, this device and all those sharing the same IOMMU group must have their driver replaced by a stub driver or a VFIO driver in order to prevent the host machine from interacting with them. In the case of most devices, this can be done on the fly right before the virtual machine starts.<br />
<br />
However, due to their size and complexity, GPU drivers do not tend to support dynamic rebinding very well, so you cannot just have some GPU you use on the host be transparently passed to a virtual machine without having both drivers conflict with each other. Because of this, it is generally advised to bind those placeholder drivers manually before starting the virtual machine, in order to stop other drivers from attempting to claim it.<br />
<br />
The following section details how to configure a GPU so those placeholder drivers are bound early during the boot process, which makes said device inactive until a virtual machine claims it or the driver is switched back. This is the preferred method, considering it has less caveats than switching drivers once the system is fully online.<br />
<br />
{{Warning|Once you reboot after this procedure, whatever GPU you have configured will no longer be usable on the host until you reverse the manipulation. Make sure the GPU you intend to use on the host is properly configured before doing this - your motherboard should be set to display using the host GPU.}}<br />
<br />
Starting with Linux 4.1, the kernel includes vfio-pci. This is a VFIO driver, meaning it fulfills the same role as pci-stub did, but it can also control devices to an extent, such as by switching them into their D3 state when they are not in use.<br />
<br />
=== Binding vfio-pci via device ID ===<br />
<br />
Vfio-pci normally targets PCI devices by ID, meaning you only need to specify the IDs of the devices you intend to passthrough. For the following IOMMU group, you would want to bind vfio-pci with {{ic|10de:13c2}} and {{ic|10de:0fbb}}, which will be used as example values for the rest of this section.<br />
<br />
IOMMU Group 13:<br />
06:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)<br />
 06:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)<br />
<br />
{{Note|<br />
* You cannot specify which device to isolate using vendor-device ID pairs if the host GPU and the guest GPU share the same pair (i.e : if both are the same model). If this is your case, read [[#Using identical guest and host GPUs]] instead.<br />
* If, as noted in [[#Plugging your guest GPU in an unisolated CPU-based PCIe slot]], your pci root port is part of your IOMMU group, you '''must not''' pass its ID to {{ic|vfio-pci}}, as it needs to remain attached to the host to function properly. Any other device within that group, however, should be left for {{ic|vfio-pci}} to bind with.<br />
* Binding the audio device ({{ic|10de:0fbb}} in above's example) is optional. Libvirt is able to unbind it from the {{ic|snd_hda_intel}} driver on its own.<br />
}}<br />
<br />
Two methods exist for providing the device IDs. Specifying them via [[kernel parameters]] has the advantage of being able to easily edit, remove, or undo any breaking changes via your boot loader:<br />
<br />
vfio-pci.ids=10de:13c2,10de:0fbb<br />
<br />
Alternatively, the IDs may be added to a modprobe conf file. Since these conf files are embedded in the initramfs image, any changes require regenerating a new image each time:<br />
<br />
{{hc|/etc/modprobe.d/vfio.conf|2=<br />
options vfio-pci ids=10de:13c2,10de:0fbb<br />
}}<br />
<br />
=== Loading vfio-pci early ===<br />
<br />
==== mkinitcpio ====<br />
<br />
Since Arch's {{Pkg|linux}} has vfio-pci built as a module, we need to force it to load early before the graphics drivers have a chance to bind to the card. To ensure that, add {{ic|vfio_pci}}, {{ic|vfio}}, {{ic|vfio_iommu_type1}}, and {{ic|vfio_virqfd}} to [[mkinitcpio]]:<br />
<br />
{{hc|/etc/mkinitcpio.conf|2=<br />
MODULES=(... vfio_pci vfio vfio_iommu_type1 vfio_virqfd ...)<br />
}}<br />
<br />
{{Note|<nowiki/><br />
* If you also have another driver loaded this way for [[Kernel mode setting#Early KMS start|early modesetting]] (such as {{ic|nouveau}}, {{ic|radeon}}, {{ic|amdgpu}}, {{ic|i915}}, etc.), all of the aforementioned VFIO modules must precede it.<br />
* If you are modesetting the {{ic|nvidia}} driver, the {{ic|vfio-pci.ids}} must be embedded in the initramfs image. If given via kernel arguments, they will be read too late to take effect. Follow the instructions in [[#Binding vfio-pci via device ID]] for adding the ids to a modprobe conf file.<br />
}}<br />
<br />
Also, ensure that the modconf hook is included in the HOOKS list of {{ic|mkinitcpio.conf}}:<br />
<br />
{{hc|/etc/mkinitcpio.conf|2=<br />
HOOKS=(... modconf ...)<br />
}}<br />
<br />
Since new modules have been added to the initramfs configuration, you must [[regenerate the initramfs]].<br />
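<br />
For example, on a standard installation all presets can be rebuilt with the following command (see [[mkinitcpio]] for details on your setup):<br />
<br />
 # mkinitcpio -P<br />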
<br />
==== booster ====<br />
<br />
Similar to mkinitcpio you need to specify modules to load early:<br />
{{hc|/etc/booster.yaml|2=<br />
modules_force_load: vfio_pci,vfio,vfio_iommu_type1,vfio_virqfd<br />
}}<br />
<br />
and then [[Booster#Regenerate_booster_images|regenerate the initramfs]].<br />
<br />
==== dracut ====<br />
<br />
dracut's early loading mechanism is configured via kernel parameters. To load vfio-pci early, add both the [[#Binding vfio-pci via device ID|device ids]] and the following line to your [[kernel parameters]]:<br />
<br />
rd.driver.pre=vfio_pci<br />
<br />
We also need to add all the vfio drivers to the initramfs. Add the following file to {{ic|/etc/dracut.conf.d}}:<br />
<br />
{{hc|10-vfio.conf|2=<br />
add_drivers+=" vfio_pci vfio vfio_iommu_type1 vfio_virqfd "<br />
}}<br />
<br />
As with mkinitcpio, you must regenerate the initramfs. See [[dracut]] for more details.<br />
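<br />
For example, all existing images can typically be rebuilt with:<br />
<br />
 # dracut --regenerate-all --force<br />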
<br />
=== Verifying that the configuration worked ===<br />
<br />
Reboot and verify that vfio-pci has loaded properly and that it is now bound to the right devices.<br />
<br />
{{hc|# dmesg {{!}} grep -i vfio|<br />
[ 0.329224] VFIO - User Level meta-driver version: 0.3<br />
[ 0.341372] vfio_pci: add [10de:13c2[ffff:ffff]] class 0x000000/00000000<br />
[ 0.354704] vfio_pci: add [10de:0fbb[ffff:ffff]] class 0x000000/00000000<br />
[ 2.061326] vfio-pci 0000:06:00.0: enabling device (0100 -> 0103)<br />
}}<br />
<br />
It is not necessary for all devices (or even the expected devices) from {{ic|vfio.conf}} to appear in the ''dmesg'' output.<br />
Even if a device does not appear, it might still be visible and usable in the guest virtual machine.<br />
<br />
{{hc|$ lspci -nnk -d 10de:13c2|<br />
06:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)<br />
Kernel driver in use: vfio-pci<br />
Kernel modules: nouveau nvidia<br />
}}<br />
<br />
{{hc|$ lspci -nnk -d 10de:0fbb|<br />
06:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)<br />
Kernel driver in use: vfio-pci<br />
Kernel modules: snd_hda_intel<br />
}}<br />
<br />
== Setting up an OVMF-based guest virtual machine ==<br />
<br />
OVMF is an open-source UEFI firmware for QEMU virtual machines. While it is possible to use SeaBIOS to get similar results to an actual PCI passthrough, the setup process is different and it is generally preferable to use the EFI method if your hardware supports it.<br />
<br />
=== Configuring libvirt ===<br />
<br />
[[Libvirt]] is a wrapper for a number of virtualization utilities that greatly simplifies the configuration and deployment process of virtual machines. In the case of KVM and QEMU, the frontend it provides allows us to avoid dealing with the permissions for QEMU and make it easier to add and remove various devices on a live virtual machine. Its status as a wrapper, however, means that it might not always support all of the latest qemu features, which could end up requiring the use of a wrapper script to provide some extra arguments to QEMU.<br />
<br />
{{Accuracy|{{Pkg|libvirt}} depends on ''ebtables'', not {{pkg|iptables-nft}} which is just a translation layer.}}<br />
<br />
Install {{Pkg|qemu-desktop}}, {{Pkg|libvirt}}, {{Pkg|edk2-ovmf}}, and {{Pkg|virt-manager}}. For the default network connection, {{pkg|iptables-nft}} and {{pkg|dnsmasq}} are required.<br />
<br />
You can now [[enable]] and [[start]] {{ic|libvirtd.service}} and its logging component {{ic|virtlogd.socket}}.<br />
<br />
You may also need to [https://wiki.libvirt.org/page/Networking#NAT_forwarding_.28aka_.22virtual_networks.22.29 activate the default libvirt network]:<br />
# virsh net-autostart default<br />
# virsh net-start default<br />
<br />
{{Note|The default libvirt network will only be listed if the virsh command is run as root.}}<br />
<br />
=== Setting up the guest OS ===<br />
<br />
The process of setting up a virtual machine using {{ic|virt-manager}} is mostly self-explanatory, as most of the process comes with fairly comprehensive on-screen instructions.<br />
<br />
However, you should pay special attention to the following steps:<br />
<br />
* When the virtual machine creation wizard asks you to name your virtual machine (final step before clicking "Finish"), check the "Customize before install" checkbox.<br />
* In the "Overview" section, [https://i.imgur.com/73r2ctM.png set your firmware to "UEFI"]. If the option is grayed out, make sure that:<br />
** Your hypervisor is running as a system session and not a user session. This can be verified [https://i.ibb.co/N1XZCdp/Deepin-Screenshot-select-area-20190125113216.png by clicking, then hovering] over the session in virt-manager. If you are accidentally running it as a user session, you must open a new connection by clicking "File" > "Add Connection..", then selecting the "QEMU/KVM" option from the drop-down menu, and not "QEMU/KVM user session".<br />
* In the "CPUs" section, change your CPU model to "host-passthrough". If it is not in the list, you will have to either type it by hand or set it using {{ic|virt-xml ''vmname'' --edit --cpu host-passthrough}}. This will ensure that your CPU is detected properly, since it causes libvirt to expose your CPU capabilities exactly as they are instead of only those it recognizes (which is the preferred default behavior to make CPU behavior easier to reproduce). Without it, some applications may complain about your CPU being of an unknown model.<br />
* If you want to minimize I/O overhead, it is easier to set up [[#Virtio disk]] before installing.<br />
<br />
The rest of the installation process will take place as normal using a standard QXL video adapter running in a window. At this point, there is no need to install additional drivers for the rest of the virtual devices, since most of them will be removed later on. Once the guest OS is done installing, simply turn off the virtual machine. It is possible you will be dropped into the UEFI menu instead of starting the installation upon powering your virtual machine for the first time. Sometimes the correct ISO file was not automatically detected and you will need to manually specify the drive to boot. By typing exit and navigating to "boot manager" you will enter a menu that allows you to choose between devices.<br />
<br />
=== Attaching the PCI devices ===<br />
<br />
With the installation done, it is now possible to edit the hardware details in libvirt and remove virtual integration devices, such as the spice channel and virtual display, the QXL video adapter, the emulated mouse and keyboard and the USB tablet device. For example, remove the following sections from your XML file:<br />
<br />
{{bc|1=<nowiki/><br />
<channel type="spicevmc"><br />
...<br />
</channel><br />
<input type="tablet" bus="usb"><br />
...<br />
</input><br />
<input type="mouse" bus="ps2"/><br />
<input type="keyboard" bus="ps2"/><br />
<graphics type="spice" autoport="yes"><br />
...<br />
</graphics><br />
<video><br />
<model type="qxl" .../><br />
...<br />
</video><br />
}}<br />
<br />
Since that leaves you with no input devices, you may want to bind a few USB host devices to your virtual machine as well, but remember to '''leave at least one mouse and/or keyboard assigned to your host''' in case something goes wrong with the guest. This may be done by using {{ic|Add Hardware > USB Host Device}}.<br />
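<br />
If you prefer editing the XML directly instead, a USB host device entry has the following general shape; the vendor and product IDs below are placeholders for illustration, substitute the values reported by ''lsusb'' for your own devices:<br />
<br />
{{bc|1=<nowiki/><br />
<hostdev mode='subsystem' type='usb'><br />
<source><br />
<vendor id='0x046d'/><br />
<product id='0xc52b'/><br />
</source><br />
</hostdev><br />
}}<br />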
<br />
At this point, it also becomes possible to attach the PCI device that was isolated earlier; simply click on "Add Hardware" and select the PCI Host Devices you want to passthrough. If everything went well, the screen plugged into your GPU should show the OVMF splash screen and your virtual machine should start up normally. From there, you can setup the drivers for the rest of your virtual machine.<br />
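<br />
For reference, the XML generated for such a passed-through PCI function looks roughly like the following; the address matches the 06:00.0 example GPU used earlier and must be adapted to your own topology:<br />
<br />
{{bc|1=<nowiki/><br />
<hostdev mode='subsystem' type='pci' managed='yes'><br />
<source><br />
<address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/><br />
</source><br />
</hostdev><br />
}}<br />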
<br />
=== Video card driver virtualisation detection ===<br />
<br />
Video card drivers by AMD incorporate very basic virtual machine detection targeting Hyper-V extensions. Should this detection mechanism trigger, the drivers will refuse to run, resulting in a black screen.<br />
<br />
If this is the case, it is required to modify the reported Hyper-V vendor ID:<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<br />
...<br />
<features><br />
...<br />
<hyperv><br />
...<br />
<vendor_id state='on' value='randomid'/><br />
...<br />
</hyperv><br />
...<br />
</features><br />
...<br />
}}<br />
<br />
Nvidia guest drivers prior to version 465 exhibited a similar behaviour which resulted in a generic error 43 in the card's device manager status. Systems using these older drivers therefore also need the above modification. In addition, they also require hiding the KVM CPU leaf:<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<br />
...<br />
<features><br />
...<br />
<kvm><br />
<hidden state='on'/><br />
</kvm><br />
...<br />
</features><br />
...<br />
}}<br />
<br />
Note that the above steps do not equate 'hiding' the virtual machine from Windows or any drivers/programs running in the virtual machine. Also, various other issues not related to any detection mechanism referred to here can also trigger error 43.<br />
<br />
=== Passing keyboard/mouse via Evdev ===<br />
<br />
If you do not have a spare mouse or keyboard to dedicate to your guest, and you do not want to suffer from the video overhead of Spice, you can setup evdev to share them between your Linux host and your virtual machine.<br />
<br />
{{Note|Press both left and right '''Ctrl''' keys at the same time to swap control between the host and the guest.}}<br />
<br />
First, find your keyboard and mouse devices in {{ic|/dev/input/by-id/}}. Only devices with {{ic|event}} in their name are valid. You may find multiple devices associated to your mouse or keyboard, so try {{ic|cat /dev/input/by-id/''device_id''}} and either hit some keys on the keyboard or wiggle your mouse to see if input comes through, if so you have got the right device. Now add those devices to your configuration:<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<br />
...<br />
<devices><br />
...<br />
<input type='evdev'><br />
<source dev='/dev/input/by-id/MOUSE_NAME'/><br />
</input><br />
<input type='evdev'><br />
<source dev='/dev/input/by-id/KEYBOARD_NAME' grab='all' repeat='on'/><br />
</input><br />
...<br />
</devices><br />
}}<br />
<br />
Replace {{ic|MOUSE_NAME}} and {{ic|KEYBOARD_NAME}} with your device path. Now you can startup the guest OS and test swapping control of your mouse and keyboard between the host and guest by pressing both the left and right control keys at the same time.<br />
<br />
You may also consider switching from PS/2 to Virtio inputs in your configurations. Add these two devices:<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<br />
...<br />
<input type='mouse' bus='virtio'/><br />
<input type='keyboard' bus='virtio'/><br />
...<br />
}}<br />
<br />
The virtio input devices will not actually be used until the guest drivers are installed. QEMU will continue to send key events to the PS2 devices until it detects the virtio input driver initialization. Note that the PS2 devices cannot be removed as they are an internal function of the emulated Q35/440FX chipsets.<br />
<br />
=== Gotchas ===<br />
<br />
==== Using a non-EFI image on an OVMF-based virtual machine ====<br />
<br />
The OVMF firmware does not support booting off non-EFI mediums. If the installation process drops you in a UEFI shell right after booting, you may have an invalid EFI boot media. Try using an alternate Linux/Windows image to determine if you have an invalid media.<br />
<br />
== Performance tuning ==<br />
<br />
Most use cases for PCI passthroughs relate to performance-intensive domains such as video games and GPU-accelerated tasks. While a PCI passthrough on its own is a step towards reaching native performance, there are still a few adjustments to make on the host and guest to get the most out of your virtual machine.<br />
<br />
=== CPU pinning ===<br />
<br />
The default behavior for KVM guests is to run operations coming from the guest as a number of threads representing virtual processors. Those threads are managed by the Linux scheduler like any other thread and are dispatched to any available CPU cores based on niceness and priority queues. As such, the local CPU cache benefits (L1/L2/L3) are lost each time the host scheduler reschedules the virtual CPU thread on a different physical CPU. This can noticeably harm performance on the guest. CPU pinning aims to resolve this by limiting which physical CPUs the virtual CPUs are allowed to run on. The ideal setup is a one to one mapping such that the virtual CPU cores match physical CPU cores while taking hyperthreading/SMT into account.<br />
<br />
In addition, in some modern CPUs, groups of cores often share a common L3 cache. In such cases, care should be taken to pin exactly those physical cores that share a particular L3. Failing to do so might lead to cache evictions which could result in microstutters.<br />
<br />
{{Note|For certain users enabling CPU pinning may introduce stuttering and short hangs, especially with the MuQSS scheduler (present in linux-ck and linux-zen kernels). You might want to try disabling pinning first if you experience similar issues, which effectively trades maximum performance for responsiveness at all times.}}<br />
<br />
==== CPU topology ====<br />
<br />
Most modern CPUs support hardware multitasking, also known as hyper-threading on Intel CPUs or SMT on AMD CPUs. Hyper-threading/SMT is simply a very efficient way of running two threads on one CPU core at any given time. Keep in mind that the CPU pinning you choose will greatly depend on what you do with your host while your virtual machine is running.<br />
<br />
To find the topology for your CPU run {{ic|1=lscpu -e}}:<br />
<br />
{{Note|Pay special attention to the 4th column '''"CORE"''' as this shows the association of the Physical/Logical CPU cores as well as the 8th column '''"L3"''' which shows which cores are connected to which L3 cache.}}<br />
<br />
{{ic|lscpu -e}} on a 6c/12t Ryzen 5 1600:<br />
<br />
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ<br />
0 0 0 0 0:0:0:0 yes 3800.0000 1550.0000<br />
1 0 0 0 0:0:0:0 yes 3800.0000 1550.0000<br />
2 0 0 1 1:1:1:0 yes 3800.0000 1550.0000<br />
3 0 0 1 1:1:1:0 yes 3800.0000 1550.0000<br />
4 0 0 2 2:2:2:0 yes 3800.0000 1550.0000<br />
5 0 0 2 2:2:2:0 yes 3800.0000 1550.0000<br />
6 0 0 3 3:3:3:1 yes 3800.0000 1550.0000<br />
7 0 0 3 3:3:3:1 yes 3800.0000 1550.0000<br />
8 0 0 4 4:4:4:1 yes 3800.0000 1550.0000<br />
9 0 0 4 4:4:4:1 yes 3800.0000 1550.0000<br />
10 0 0 5 5:5:5:1 yes 3800.0000 1550.0000<br />
11 0 0 5 5:5:5:1 yes 3800.0000 1550.0000<br />
<br />
Considering the L3 mapping, it is recommended to pin and isolate CPUs 6–11. Pinning and isolating fewer than these (e.g. 8–11) would mean the host system still uses CPUs 6 and 7, which share the same L3 cache as the isolated CPUs, eventually leading to cache evictions and therefore bad performance.<br />
<br />
{{Note|The Ryzen 3000 ComboPi AGESA changes the topology to match the Intel example below, even on prior-generation CPUs. The layout above is only valid on older AGESA versions.}}<br />
<br />
{{ic|lscpu -e}} on a 6c/12t Intel 8700k:<br />
<br />
CPU NODE SOCKET CORE L1d:L1i:L2:L3 ONLINE MAXMHZ MINMHZ<br />
0 0 0 0 0:0:0:0 yes 4600.0000 800.0000<br />
1 0 0 1 1:1:1:0 yes 4600.0000 800.0000<br />
2 0 0 2 2:2:2:0 yes 4600.0000 800.0000<br />
3 0 0 3 3:3:3:0 yes 4600.0000 800.0000<br />
4 0 0 4 4:4:4:0 yes 4600.0000 800.0000<br />
5 0 0 5 5:5:5:0 yes 4600.0000 800.0000<br />
6 0 0 0 0:0:0:0 yes 4600.0000 800.0000<br />
7 0 0 1 1:1:1:0 yes 4600.0000 800.0000<br />
8 0 0 2 2:2:2:0 yes 4600.0000 800.0000<br />
9 0 0 3 3:3:3:0 yes 4600.0000 800.0000<br />
10 0 0 4 4:4:4:0 yes 4600.0000 800.0000<br />
11 0 0 5 5:5:5:0 yes 4600.0000 800.0000<br />
<br />
Since all cores are connected to the same L3 in this example, it does not matter much how many CPUs you pin and isolate as long as you do it in the proper thread pairs. For instance, (0, 6), (1, 7), etc.<br />
<br />
As seen above, with AMD, '''Core 0''' maps to the sequential '''CPUs 0 and 1''', whereas Intel places '''Core 0''' on '''CPUs 0 and 6'''.<br />
<br />
{{Tip|You can view your system's topology in diagram form, which may help some users. If you have {{Pkg|hwloc}} installed, run {{ic|lstopo}} to generate a helpful image of your CPU/thread groupings.}}<br />
<br />
If you do not need all cores for the guest, it is preferable to leave at the very least one core for the host. Choosing which cores to use for the host or guest should be based on the specific hardware characteristics of your CPU; however, '''Core 0''' is a good choice for the host in most cases. If any cores are reserved for the host, it is recommended to pin the emulator and iothreads, if used, to the host cores rather than the VCPUs. This may improve performance and reduce latency for the guest since those threads will not pollute the cache or contend for scheduling with the guest VCPU threads. If all cores are passed to the guest, there is no need or benefit to pinning the emulator or iothreads.<br />
<br />
==== XML examples ====<br />
<br />
{{Note|Do not use the '''iothread''' lines from the XML examples shown below if you have not added an '''iothread''' to your disk controller. '''iothread'''s only work on '''virtio-scsi''' or '''virtio-blk''' devices.}}<br />
<br />
===== 4c/1t CPU w/o Hyperthreading Example =====<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<br />
...<br />
<vcpu placement='static'>4</vcpu><br />
<cputune><br />
<vcpupin vcpu='0' cpuset='0'/><br />
<vcpupin vcpu='1' cpuset='1'/><br />
<vcpupin vcpu='2' cpuset='2'/><br />
<vcpupin vcpu='3' cpuset='3'/><br />
</cputune><br />
...<br />
}}<br />
<br />
===== 4c/2t Intel/AMD CPU example (after ComboPI AGESA bios update) =====<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<br />
...<br />
<vcpu placement='static'>8</vcpu><br />
<iothreads>1</iothreads><br />
<cputune><br />
<vcpupin vcpu='0' cpuset='2'/><br />
<vcpupin vcpu='1' cpuset='8'/><br />
<vcpupin vcpu='2' cpuset='3'/><br />
<vcpupin vcpu='3' cpuset='9'/><br />
<vcpupin vcpu='4' cpuset='4'/><br />
<vcpupin vcpu='5' cpuset='10'/><br />
<vcpupin vcpu='6' cpuset='5'/><br />
<vcpupin vcpu='7' cpuset='11'/><br />
<emulatorpin cpuset='0,6'/><br />
<iothreadpin iothread='1' cpuset='0,6'/><br />
</cputune><br />
...<br />
<cpu mode='host-passthrough'><br />
<topology sockets='1' cores='4' threads='2'/><br />
</cpu><br />
...<br />
}}<br />
<br />
===== 4c/2t AMD CPU example (Before ComboPi AGESA bios update) =====<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<br />
...<br />
<vcpu placement='static'>8</vcpu><br />
<iothreads>1</iothreads><br />
<cputune><br />
<vcpupin vcpu='0' cpuset='2'/><br />
<vcpupin vcpu='1' cpuset='3'/><br />
<vcpupin vcpu='2' cpuset='4'/><br />
<vcpupin vcpu='3' cpuset='5'/><br />
<vcpupin vcpu='4' cpuset='6'/><br />
<vcpupin vcpu='5' cpuset='7'/><br />
<vcpupin vcpu='6' cpuset='8'/><br />
<vcpupin vcpu='7' cpuset='9'/><br />
<emulatorpin cpuset='0-1'/><br />
<iothreadpin iothread='1' cpuset='0-1'/><br />
</cputune><br />
...<br />
<cpu mode='host-passthrough'><br />
<topology sockets='1' cores='4' threads='2'/><br />
</cpu><br />
...<br />
}}<br />
<br />
{{Note|If further CPU isolation is needed, consider using the '''isolcpus''' kernel command-line parameter on the unused physical/logical cores.}}<br />
<br />
If you do not intend to be doing any computation-heavy work on the host (or even anything at all) at the same time as you would on the virtual machine, you may want to pin your virtual machine threads across all of your cores, so that the virtual machine can fully take advantage of the spare CPU time the host has available. Be aware that pinning all physical and logical cores of your CPU could induce latency in the guest virtual machine.<br />
<br />
=== Huge memory pages ===<br />
<br />
When dealing with applications that require large amounts of memory, memory latency can become a problem since the more memory pages are being used, the more likely it is that the application will attempt to access information across multiple memory "pages", which are the base unit for memory allocation. Resolving the actual address of a memory page takes multiple steps, so CPUs normally cache information on recently used memory pages to make subsequent uses of the same pages faster. Applications using large amounts of memory run into a problem where, for instance, a virtual machine using 4 GiB of memory divided into 4 KiB pages (the default size for normal pages) spans roughly 1.05 million pages, meaning that such cache misses can become extremely frequent and greatly increase memory latency. Huge pages exist to mitigate this issue by giving larger individual pages to those applications, increasing the odds that multiple operations will target the same page in succession.<br />
<br />
==== Transparent huge pages ====<br />
<br />
QEMU will use 2 MiB sized transparent huge pages automatically without any explicit configuration in QEMU or libvirt, subject to some important caveats. When using VFIO, the pages are locked in and transparent huge pages are allocated up front when the virtual machine first boots. If the kernel memory is highly fragmented, or the virtual machine is using a majority of the remaining free memory, it is likely that the kernel will not have enough 2 MiB pages to fully satisfy the allocation. In such a case, it silently fails by using a mix of 2 MiB and 4 KiB pages. Since the pages are locked in VFIO mode, the kernel will not be able to convert those 4 KiB pages to huge pages after the virtual machine starts either. The number of 2 MiB huge pages available to THP is the same as via the [[#Dynamic huge pages]] mechanism described below.<br />
<br />
To check how much memory THP is using globally:<br />
<br />
{{hc|$ grep AnonHugePages /proc/meminfo|<br />
AnonHugePages: 8091648 kB<br />
}}<br />
<br />
To check a specific QEMU instance, substitute QEMU's PID in the grep command:<br />
<br />
{{hc|$ grep -P 'AnonHugePages:\s+(?!0)\d+' /proc/[PID]/smaps|<br />
AnonHugePages: 8087552 kB<br />
}}<br />
<br />
In this example, the virtual machine was allocated 8388608 KiB of memory, but only 8087552 KiB was available via THP. The remaining 301056 KiB are allocated as 4 KiB pages. Aside from manually checking, there is no indication when partial allocations occur. As such, THP's effectiveness is very much dependent on the host system's memory fragmentation at the time of virtual machine startup. If this trade-off is unacceptable or strict guarantees are required, [[#Static huge pages]] are recommended.<br />
<br />
Arch kernels have THP compiled in and enabled by default with {{ic|1=/sys/kernel/mm/transparent_hugepage/enabled}} set to {{ic|1=madvise}} mode.<br />
<br />
==== Static huge pages ====<br />
<br />
While transparent huge pages should work in the vast majority of cases, they can also be allocated statically during boot. This should only be needed to make use of 1 GiB huge pages on machines that support them, since transparent huge pages normally only go up to 2 MiB.<br />
<br />
{{Warning|Static huge pages lock down the allocated amount of memory, making it unavailable for applications that are not configured to use them. Allocating 4 GiB worth of huge pages on a machine with 8 GiB of memory will only leave you with 4 GiB of available memory on the host '''even when the virtual machine is not running'''.}}<br />
<br />
{{Note|The described procedures have some drawbacks but will not necessarily net you a great performance advantage. According to [https://developers.redhat.com/blog/2021/04/27/benchmarking-transparent-versus-1gib-static-huge-page-performance-in-linux-virtual-machines#benchmarks Red Hat benchmarks], you should not expect more than a 2% performance gain from this over [[#Transparent huge pages]].}}<br />
<br />
To allocate huge pages at boot, one must simply specify the desired amount on their kernel command line with {{ic|1=hugepages=''x''}}. For instance, reserving 1024 pages with {{ic|1=hugepages=1024}} and the default size of 2048 KiB per huge page creates 2 GiB worth of memory for the virtual machine to use.<br />
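<br />
You can verify that the reservation succeeded after booting; with the {{ic|1=hugepages=1024}} example above, the output should look like this:<br />
<br />
{{hc|$ grep HugePages_Total /proc/meminfo|2=<br />
HugePages_Total:    1024<br />
}}<br />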
<br />
If supported by the CPU, the page size can be set manually. 1 GiB huge page support can be verified with {{ic|grep pdpe1gb /proc/cpuinfo}}. The 1 GiB huge page size is set via kernel parameters: {{ic|1=default_hugepagesz=1G hugepagesz=1G hugepages=X}}.<br />
<br />
Also, since static huge pages can only be used by applications that specifically request them, you must add this section to your libvirt domain configuration to allow KVM to benefit from them:<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<br />
...<br />
<memoryBacking><br />
<hugepages/><br />
</memoryBacking><br />
...<br />
}}<br />
<br />
==== Dynamic huge pages ====<br />
<br />
{{Accuracy|Needs further testing to determine whether this variant is as effective as the static one.}}<br />
<br />
Huge pages can be allocated dynamically via the {{ic|vm.nr_overcommit_hugepages}} [[sysctl]] parameter.<br />
<br />
{{hc|/etc/sysctl.d/10-kvm.conf|2=<br />
vm.nr_hugepages = 0<br />
vm.nr_overcommit_hugepages = ''num''<br />
}}<br />
<br />
Where {{ic|''num''}} is the number of huge pages, whose default size is 2 MiB.<br />
Pages will be allocated automatically, and freed after the virtual machine stops.<br />
<br />
A more manual way:<br />
<br />
# echo ''num'' > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages<br />
# echo ''num'' > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages<br />
<br />
For 2 MiB and 1 GiB page sizes, respectively.<br />
The pages must also be freed manually in the same way.<br />
<br />
It is highly recommended to drop caches, compact memory and wait a couple of seconds before starting the virtual machine, as otherwise there may not be enough free contiguous memory for the required huge page blocks, especially after the host system has been running for a while.<br />
<br />
# echo 3 > /proc/sys/vm/drop_caches<br />
# echo 1 > /proc/sys/vm/compact_memory<br />
<br />
In theory, 1 GiB pages work the same way as 2 MiB pages. In practice, however, no reliable way was found to obtain contiguous 1 GiB memory blocks; each subsequent request for 1 GiB blocks yielded a smaller and smaller dynamically allocated count.<br />
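<br />
As a minimal sketch, the whole dynamic allocation sequence can be combined into a small script run as root before starting the virtual machine (the page count of 2048, i.e. 4 GiB worth of 2 MiB pages, is only an example):<br />
<br />
{{bc|1=<br />
#!/bin/sh<br />
# Dynamically reserve 2 MiB huge pages for a guest (run as root).<br />
sync<br />
echo 3 > /proc/sys/vm/drop_caches    # drop caches to reduce fragmentation<br />
echo 1 > /proc/sys/vm/compact_memory # compact memory into contiguous blocks<br />
sleep 2<br />
echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages<br />
<br />
# start the virtual machine here, e.g.: virsh start vmname<br />
<br />
# after the guest has shut down, release the pages again:<br />
# echo 0 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages<br />
}}<br />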
<br />
=== CPU frequency governor ===<br />
<br />
Depending on the way your [[CPU frequency scaling|CPU governor]] is configured, the virtual machine threads may not hit the CPU load thresholds for the frequency to ramp up. Indeed, KVM cannot actually change the CPU frequency on its own, which can be a problem if it does not scale up with vCPU usage, as that results in underwhelming performance. An easy way to see if it behaves correctly is to check whether the frequency reported by {{ic|watch lscpu}} goes up when running a CPU-intensive task on the guest. If you are indeed experiencing stutter and the frequency does not go up to reach its reported maximum, it may be due to [https://lime-technology.com/forum/index.php?topic=46664.msg447678#msg447678 CPU scaling being controlled by the host OS]. In this case, try setting all cores to maximum frequency to see if this improves performance. Note that if you are using a modern Intel chip with the default pstate driver, cpupower commands will be [[CPU frequency scaling#CPU frequency driver|ineffective]], so monitor {{ic|/proc/cpuinfo}} to make sure your CPU is actually at its maximum frequency.<br />
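<br />
If you decide to test this, one option (where your frequency scaling driver supports it, see [[CPU frequency scaling]]) is to temporarily switch the governor to {{ic|performance}} using {{Pkg|cpupower}}:<br />
<br />
# cpupower frequency-set -g performance<br />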
<br />
{{Warning|Depending on your processor, it might actually be detrimental to performance to force the whole CPU to run at full frequency at all times. For instance, modern AMD processors (Zen 2 and Zen 3) depend heavily on being able to scale individual cores in separate core complexes for optimal thermal distribution. If the whole CPU is running at full frequency at all times, there is less headroom for individual cores to clock high, resulting in worse performance for processes that are not heavily multithreaded, such as games. This should be benchmarked in your virtual machine.}}<br />
<br />
=== Isolating pinned CPUs ===<br />
<br />
CPU pinning by itself will not prevent other host processes from running on the pinned CPUs. Properly isolating the pinned CPUs can reduce latency in the guest virtual machine.<br />
<br />
==== With isolcpus kernel parameter ====<br />
<br />
In this example, let us assume you are using CPUs 4-7.<br />
Use the [[kernel parameters]] {{ic|isolcpus}} and {{ic|nohz_full}} to completely isolate the CPUs from the kernel. For example:<br />
<br />
isolcpus=4-7 nohz_full=4-7<br />
<br />
Then, run {{ic|qemu-system-x86_64}} with taskset and chrt:<br />
<br />
# chrt -r 1 taskset -c 4-7 qemu-system-x86_64 ...<br />
<br />
The {{ic|chrt}} command will ensure that the task scheduler will round-robin distribute work (otherwise it will all stay on the first cpu). For {{ic|taskset}}, the CPU numbers can be comma- and/or dash-separated, like "0,1,2,3" or "0-4" or "1,7-8,10" etc.<br />
<br />
See [https://web.archive.org/web/20210520061110/https://www.removeddit.com/r/VFIO/comments/6vgtpx/high_dpc_latency_and_audio_stuttering_on_windows/dm0sfto/ this Internet Archive copy of a Removeddit mirror of a Reddit thread] for more info. ([https://www.reddit.com/r/VFIO/comments/6vgtpx/high_dpc_latency_and_audio_stuttering_on_windows/dm0sfto/ The original thread] is worthless because of deleted comments, and Removeddit no longer works.)<br />
<br />
==== Dynamically isolating CPUs ====<br />
<br />
The isolcpus kernel parameter will permanently reserve CPU cores, even when the guest is not running. A more flexible alternative is to dynamically isolate CPUs when starting the guest. This can be achieved with the following alternatives:<br />
<br />
* {{AUR|cpuset-git}} ([https://www.redhat.com/archives/vfio-users/2016-September/msg00072.html vfio-users post], [https://rokups.github.io/#!pages/gaming-vm-performance.md blog post], [https://github.com/PassthroughPOST/VFIO-Tools/blob/master/libvirt_hooks/hooks/cset.sh example script])<br />
* {{AUR|vfio-isolate}}<br />
* systemd<br />
<br />
===== Example with systemd =====<br />
<br />
In this example, we assume a host with 12 CPUs, where CPUs 2-5 and 8-11 are [[#CPU pinning|pinned]] to the guest. Then run the following to isolate the host to CPUs 0, 1, 6, and 7:<br />
<br />
# systemctl set-property --runtime -- user.slice AllowedCPUs=0,1,6,7<br />
# systemctl set-property --runtime -- system.slice AllowedCPUs=0,1,6,7<br />
# systemctl set-property --runtime -- init.scope AllowedCPUs=0,1,6,7<br />
<br />
After shutting down the guest, run the following to reallocate all 12 CPUs back to the host:<br />
<br />
# systemctl set-property --runtime -- user.slice AllowedCPUs=0-11<br />
# systemctl set-property --runtime -- system.slice AllowedCPUs=0-11<br />
# systemctl set-property --runtime -- init.scope AllowedCPUs=0-11<br />
<br />
You can use a [https://libvirt.org/hooks.html libvirt hook] to automatically run the above at startup/shutdown of the guest like so:<br />
<br />
Create or edit {{ic|/etc/libvirt/hooks/qemu}} with the following content.<br />
<br />
{{hc|/etc/libvirt/hooks/qemu|2=<br />
#!/bin/sh<br />
<br />
command=$2<br />
<br />
if [ "$command" = "started" ]; then<br />
systemctl set-property --runtime -- system.slice AllowedCPUs=0,1,6,7<br />
systemctl set-property --runtime -- user.slice AllowedCPUs=0,1,6,7<br />
systemctl set-property --runtime -- init.scope AllowedCPUs=0,1,6,7<br />
elif [ "$command" = "release" ]; then<br />
systemctl set-property --runtime -- system.slice AllowedCPUs=0-11<br />
systemctl set-property --runtime -- user.slice AllowedCPUs=0-11<br />
systemctl set-property --runtime -- init.scope AllowedCPUs=0-11<br />
fi<br />
}}<br />
<br />
Afterwards make it [[executable]].<br />
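<br />
For example:<br />
<br />
# chmod +x /etc/libvirt/hooks/qemu<br />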
<br />
[[Restart]] {{ic|libvirtd.service}} and then start your virtual machine. If you now create some heavily multithreaded load on your host, you should see that it keeps your chosen CPUs free from load while the virtual machine can still make use of them. You should also see those CPUs automatically getting fully used by your host once you terminate the virtual machine.<br />
<br />
More examples are contained in the following reddit threads: [https://www.reddit.com/r/VFIO/comments/ebe3l5/deprecated_isolcpus_workaround/fem8jgk] [https://www.reddit.com/r/VFIO/comments/gyem88/noob_question_how_to_isolate_cpu_cores/ftaqno1/] [https://www.reddit.com/r/VFIO/comments/ij25rg/splitting_ht_cores_between_host_and_vm/]<br />
<br />
Note that this requires systemd 244 or higher, and [[cgroups|cgroups v2]], which is now enabled by default.<br />
<br />
=== Improving performance on AMD CPUs ===<br />
<br />
Starting with QEMU 3.1 the TOPOEXT cpuid flag is disabled by default. In order to use hyperthreading (SMT) on AMD CPUs you need to manually enable it:<br />
<br />
<cpu mode='host-passthrough' check='none'><br />
<topology sockets='1' cores='4' threads='2'/><br />
<feature policy='require' name='topoext'/><br />
</cpu><br />
<br />
See the relevant [https://git.qemu.org/?p=qemu.git;a=commit;h=7210a02c58572b2686a3a8d610c6628f87864aed QEMU commit] for details.<br />
<br />
=== Virtio disk ===<br />
<br />
{{Merge|QEMU#Installing virtio drivers|Off-topic.|section=Moving virtio disk section to QEMU}}<br />
<br />
The default disk types are SATA or IDE emulation out of the box. These controllers offer maximum compatibility but are not suited for efficient virtualization. Two accelerated models exist: {{ic|1=virtio-scsi}} for SCSI emulation and passthrough, or {{ic|1=virtio-blk}} for a more basic block device emulation.<br />
<br />
==== Drivers ====<br />
<br />
* Linux guests should support these out of the box on any modern kernel<br />
* macOS has {{ic|1=virtio-blk}} support starting in Mojave via {{ic|1=AppleVirtIO.kext}}<br />
* Windows needs the [https://docs.fedoraproject.org/en-US/quick-docs/creating-windows-virtual-machines-using-virtio-drivers/ Windows virtio drivers]. {{ic|1=virtio-scsi}} uses the {{ic|1=vioscsi}} driver. {{ic|1=virtio-blk}} uses the {{ic|1=viostor}} driver<br />
* Windows can be installed directly onto these disks by selecting 'load driver' on the installer disk selection menu. The Windows ISO and virtio driver ISO should both be attached as regular SATA/IDE CD-ROMs during the installation process<br />
* To switch boot disks to virtio on an existing Windows installation:<br />
** {{ic|1=virtio-blk}}: Add a temporary disk with bus {{ic|1=virtio}}, boot Windows & load the driver for the disk, then shut down and switch the boot disk bus to {{ic|1=virtio}}<br />
** {{ic|1=virtio-scsi}}: Add a scsi controller with model {{ic|1=virtio}}, boot Windows & load the driver for the controller, then shut down and switch the boot disk bus to {{ic|1=scsi}} (not virtio)<br />
<br />
==== Considerations ====<br />
<br />
* {{ic|1=virtio-scsi}} TRIM support is mature, all versions should support it. Traditionally, {{ic|1=virtio-scsi}} has been the preferred approach for this reason<br />
* {{ic|1=virtio-blk}} TRIM support is new; it requires QEMU 4.0+ and guest Linux kernel 5.0+ or guest Windows drivers 0.1.173+<br />
* Thin provisioning works by enabling TRIM on a sparse image file: {{ic|1=discard='unmap'}}. Unused blocks will be freed and the disk usage will drop (works on both raw and qcow2). Actual on-disk size of a sparse image file may be checked with {{ic|1=du /path/to/disk.img}}<br />
* Thin provisioning can also work with block storage such as zfs zvols or thin lvm<br />
* Virt queue count will influence the number of threads inside the guest kernel used for IO processing, suggest using {{ic|1=queues='4'}} or more<br />
* Native mode ({{ic|1=io='native'}}) uses a single threaded model based on linux AIO, is a bit more CPU efficient but may have lower peak performance and does not allow host side caching to be used<br />
* Threaded mode ({{ic|1=io='threads'}}) will spawn dozens of threads on demand as the disk is used. This is less efficient but may perform better if there are enough host cores available to run them, and allows for host side caching to be used<br />
* Modern versions of libvirt will group the dynamic worker threads created when using threaded mode in with the iothread=1 cgroup for pinning purposes. Very old versions of libvirt left these in the emulator cgroup<br />
<br />
==== IO threads ====<br />
<br />
An IO thread is a dedicated thread for processing disk events, rather than using the main qemu emulator loop. This should not be confused with the worker threads spawned on demand with {{ic|1=io='threads'}}.<br />
<br />
* You can only use one iothread per disk controller. The thread must be assigned to a specific controller with {{ic|1=iothread='X'}} in the {{ic|1=<driver>}} tag. Furthermore, extra & unassigned iothreads will not be used and do nothing<br />
* In the case of {{ic|1=virtio-scsi}}, there is one controller for multiple scsi disks. The iothread is assigned on the controller: {{ic|1=<controller><driver iothread='X'>}}<br />
* In the case of {{ic|1=virtio-blk}}, each disk has its own controller. The iothread is assigned in the driver tag under the disk itself: {{ic|1=<disk><driver iothread='X'>}}<br />
* Emulated disks incur a significant amount of CPU overhead, which can lead to vcpu stuttering under high disk load (especially high random IOPS). In this case it helps to pin the IO to different core(s) than your vcpus with {{ic|1=<iothreadpin>}}<br />
<br />
==== Examples with libvirt ====<br />
<br />
virtio-scsi + iothread + worker threads + host side writeback caching + full disk block device backend:<br />
<domain><br />
<devices><br />
<disk type='block' device='disk'><br />
<driver name='qemu' type='raw' cache='writeback' io='threads' discard='unmap'/><br />
<source dev='/dev/disk/by-id/ata-Samsung_SSD_840_EVO_1TB_S1D9NSAF206396F'/><br />
<target dev='sda' bus='scsi'/><br />
</disk><br />
<controller type='scsi' index='0' model='virtio-scsi'><br />
<driver iothread='1' queues='8'/><br />
</controller><br />
<br />
virtio-blk + iothread + native aio + no host caching + raw sparse image backend:<br />
<domain><br />
<devices><br />
<disk type='file' device='disk'><br />
<driver name='qemu' type='raw' cache='none' io='native' discard='unmap' iothread='1' queues='8'/><br />
<source file='/var/lib/libvirt/images/pool/win10.img'/><br />
<target dev='vda' bus='virtio'/><br />
</disk><br />
<br />
Creating the iothreads:<br />
<domain><br />
<iothreads>1</iothreads><br />
<br />
Pinning iothreads:<br />
<domain><br />
<cputune><br />
<iothreadpin iothread='1' cpuset='0-1,6-7'/><br />
<br />
==== Example with virt-manager ====<br />
<br />
This will create a {{ic|1=virtio-blk}} device:<br />
# Open the virtual machine preferences<br />
# Go to {{ic|Add Hardware > Storage}}<br />
# Create or choose a storage file<br />
# Select {{ic|Device Type: Disk device}} and {{ic|Bus type: VirtIO}}<br />
# Click Finish<br />
<br />
=== Virtio network ===<br />
<br />
The default NIC models rtl8139 or e1000 can be a bottleneck for gigabit+ speeds and have a significant amount of CPU overhead compared to {{ic|1=virtio-net}}.<br />
<br />
* Select {{ic|1=virtio}} as the model for the NIC with libvirt or use the {{ic|1=virtio-net-pci}} device in bare qemu<br />
* Windows needs the {{ic|1=NetKVM}} driver from [https://docs.fedoraproject.org/en-US/quick-docs/creating-windows-virtual-machines-using-virtio-drivers/ Windows virtio drivers]<br />
* Virtio uses vhost-net by default for in-kernel packet processing without exiting to userspace<br />
* Multiqueue can be enabled for a further speedup with multiple connections, but typically will not boost single-stream speeds. For libvirt, add {{ic|1=<driver queues='8'/>}} under the interface tag<br />
* Zero copy transmit may also be enabled on macvtap by setting the module parameter {{ic|1=vhost_net.experimental_zcopytx=1}} but this may actually have worse performance, see [https://github.com/torvalds/linux/commit/098eadce3c622c07b328d0a43dda379b38cf7c5e commit]<br />
<br />
Libvirt example with a bridge:<br />
<br />
<interface type='bridge'><br />
<mac address="52:54:00:6d:6e:2e"/><br />
<source bridge='br0'/><br />
<model type='virtio'/><br />
<driver queues='8'/><br />
</interface><br />
<br />
MACVTAP example with a bridge:<br />
<br />
<interface type="direct"><br />
<source dev="'''eno1'''" mode="vepa"/><br />
<target dev="macvtap0"/><br />
<model type="virtio"/><br />
<alias name="net0"/><br />
</interface><br />
Possible options for mode are 'vepa', 'bridge', 'private', and 'passthrough'. A guide with descriptions of the differences is available from Red Hat[https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-virtual_networking-directly_attaching_to_physical_interface].<br />
<br />
Replace the source {{ic|/dev}} device with your own device address. You can get your local address with the following command:<br />
<br />
{{hc|$ ip link|<br />
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000<br />
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00<br />
2: '''eno1''': <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000<br />
link/ether 30:9c:23:ac:51:d0 brd ff:ff:ff:ff:ff:ff<br />
altname enp0s31f6<br />
}}<br />
<br />
=== Further tuning ===<br />
<br />
More specialized virtual machine tuning tips are available at [https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/virtualization_tuning_and_optimization_guide/index Red Hat's Virtualization Tuning and Optimization Guide].<br />
<br />
== Special procedures ==<br />
<br />
Certain setups require specific configuration tweaks in order to work properly. If you are having problems getting your host or your virtual machine to work properly, see if your system matches one of the cases below and try adjusting your configuration accordingly.<br />
<br />
=== Using identical guest and host GPUs ===<br />
<br />
{{Expansion|A number of users have been having issues with this, it should probably be addressed by the article.|Talk:PCI passthrough via OVMF#Additionnal sections}}<br />
<br />
Since vfio-pci uses your vendor and device ID pair to identify which device it needs to bind to at boot, if you have two GPUs sharing such an ID pair you will not be able to get your passthrough driver to bind to just one of them. This sort of setup makes it necessary to use a script, so that whichever driver you are using is instead assigned by PCI bus address using the {{ic|driver_override}} mechanism.<br />
<br />
==== Script variants ====<br />
<br />
===== Passthrough all GPUs but the boot GPU =====<br />
<br />
Here, we will make a script to bind vfio-pci to all GPUs but the boot GPU. Create the script {{ic|/usr/local/bin/vfio-pci-override.sh}}:<br />
<br />
{{bc|1=<br />
#!/bin/sh<br />
<br />
for i in /sys/bus/pci/devices/*/boot_vga; do<br />
if [ $(cat "$i") -eq 0 ]; then<br />
GPU="${i%/boot_vga}"<br />
AUDIO="$(echo "$GPU" {{!}} sed -e "s/0$/1/")"<br />
USB="$(echo "$GPU" {{!}} sed -e "s/0$/2/")"<br />
echo "vfio-pci" > "$GPU/driver_override"<br />
if [ -d "$AUDIO" ]; then<br />
echo "vfio-pci" > "$AUDIO/driver_override"<br />
fi<br />
if [ -d "$USB" ]; then<br />
echo "vfio-pci" > "$USB/driver_override"<br />
fi<br />
fi<br />
done<br />
<br />
modprobe -i vfio-pci<br />
}}<br />
<br />
===== Passthrough selected GPU =====<br />
<br />
In this case we manually specify the GPU to bind.<br />
<br />
{{bc|1=<br />
#!/bin/sh<br />
<br />
DEVS="0000:03:00.0 0000:03:00.1"<br />
<br />
if [ ! -z "$(ls -A /sys/class/iommu)" ]; then<br />
for DEV in $DEVS; do<br />
echo "vfio-pci" > /sys/bus/pci/devices/$DEV/driver_override<br />
done<br />
fi<br />
<br />
modprobe -i vfio-pci<br />
}}<br />
<br />
===== Passthrough IOMMU group based on GPU =====<br />
<br />
This simplifies passing through the other necessary devices in the selected GPU's IOMMU group, such as the graphics card's onboard audio, USB and RGB controllers.<br />
<br />
{{bc|1=<br />
#!/bin/sh<br />
<br />
DEVS="0000:03:00.0"<br />
<br />
if [ ! -z "$(ls -A /sys/class/iommu)" ]; then<br />
for DEV in $DEVS; do<br />
for IOMMUDEV in $(ls /sys/bus/pci/devices/$DEV/iommu_group/devices) ; do<br />
echo "vfio-pci" > /sys/bus/pci/devices/$IOMMUDEV/driver_override<br />
done<br />
done<br />
fi<br />
<br />
modprobe -i vfio-pci<br />
}}<br />
<br />
==== Script installation ====<br />
<br />
Edit {{ic|/etc/mkinitcpio.conf}}:<br />
<br />
# Add {{ic|modconf}} to the [[mkinitcpio#HOOKS|HOOKS]] array and {{ic|/usr/local/bin/vfio-pci-override.sh}} to the [[mkinitcpio#BINARIES and FILES|FILES]] array (see the sketch below).<br />
<br />
Edit {{ic|/etc/modprobe.d/vfio.conf}}:<br />
<br />
# Add the following line: {{ic|install vfio-pci /usr/local/bin/vfio-pci-override.sh}}<br />
# [[Regenerate the initramfs]] and reboot.<br />
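<br />
As a sketch, the relevant lines of the edited files might look like the following (keep your existing entries; the ellipses stand for whatever is already there):<br />
<br />
{{hc|/etc/mkinitcpio.conf|2=<br />
HOOKS=(... modconf ...)<br />
FILES=(/usr/local/bin/vfio-pci-override.sh)<br />
}}<br />
<br />
{{hc|/etc/modprobe.d/vfio.conf|2=<br />
install vfio-pci /usr/local/bin/vfio-pci-override.sh<br />
}}<br />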
<br />
=== Passing the boot GPU to the guest ===<br />
<br />
{{Expansion|This is related to VBIOS issues and should be moved into a separate section regarding VBIOS compatibility.|section=UEFI (OVMF) Compatibility in VBIOS}}<br />
<br />
The GPU marked as {{ic|boot_vga}} is a special case when it comes to doing PCI passthroughs, since the BIOS needs to use it in order to display things like boot messages or the BIOS configuration menu. To do that, it makes [https://www.redhat.com/archives/vfio-users/2016-May/msg00224.html a copy of the VGA boot ROM which can then be freely modified]. This modified copy is the version the system gets to see, which the passthrough driver may reject as invalid. As such, it is generally recommended to change the boot GPU in the BIOS configuration so the host GPU is used instead or, if that is not possible, to swap the host and guest cards in the machine itself.<br />
<br />
=== Using Looking Glass to stream guest screen to the host ===<br />
<br />
It is possible to make a virtual machine share the monitor, and optionally a keyboard and a mouse, with the help of [https://looking-glass.io/ Looking Glass].<br />
<br />
==== Adding IVSHMEM Device to virtual machines ====<br />
<br />
Looking Glass works by creating a shared memory buffer between the host and the guest. This is a lot faster than streaming frames via localhost, but requires additional setup.<br />
<br />
With your virtual machine turned off, open the machine configuration:<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<br />
...<br />
<devices><br />
...<br />
<shmem name='looking-glass'><br />
<model type='ivshmem-plain'/><br />
<size unit='M'>32</size><br />
</shmem><br />
</devices><br />
...<br />
}}<br />
<br />
You should replace 32 with your own calculated value based on what resolution you are going to pass through. It can be calculated like this:<br />
<br />
width x height x 4 x 2 = total bytes<br />
(total bytes / 1024 / 1024) + 10 = total mebibytes<br />
<br />
For example, in case of 1920x1080<br />
<br />
1920 x 1080 x 4 x 2 = 16,588,800 bytes<br />
16,588,800 / 1024 / 1024 = 15.82 MiB; 15.82 + 10 = 25.82 MiB<br />
<br />
The result must be '''rounded up''' to the nearest power of two, and since 25.82 is bigger than 16 we should choose 32.<br />
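<br />
If you prefer to let the shell do the arithmetic, here is a small sketch (the resolution values are just the example above; the MiB value is rounded up to stay on the safe side):<br />
<br />
{{bc|1=<br />
#!/bin/sh<br />
# Compute the IVSHMEM size in MiB for a given guest resolution.<br />
width=1920<br />
height=1080<br />
bytes=$((width * height * 4 * 2))<br />
mib=$(( (bytes + 1048575) / 1048576 + 10 ))  # ceiling division, plus 10 MiB<br />
size=1<br />
while [ "$size" -lt "$mib" ]; do size=$((size * 2)); done<br />
echo "${size} MiB"  # prints "32 MiB" for 1920x1080<br />
}}<br />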
<br />
Next, create a configuration file to create the shared memory file on boot:<br />
<br />
{{hc|/etc/tmpfiles.d/10-looking-glass.conf|2=<br />
f /dev/shm/looking-glass 0660 '''user''' kvm -<br />
}}<br />
<br />
Replace user with your username.<br />
<br />
Ask systemd-tmpfiles to create the shared memory file now, without waiting for the next boot:<br />
<br />
# systemd-tmpfiles --create /etc/tmpfiles.d/10-looking-glass.conf<br />
<br />
==== Installing the IVSHMEM Host to Windows guest ====<br />
<br />
Currently, Windows will not notify users about a new IVSHMEM device; it silently installs a dummy driver instead. To actually enable the device, you have to go into the device manager and update the driver for the device under the "System Devices" node for '''"PCI standard RAM Controller"'''. Download the signed driver [https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/upstream-virtio/ from Red Hat].<br />
<br />
Once the driver is installed, you must download a [https://looking-glass.io/downloads looking-glass-host] package matching the client version you will install from the AUR, and install it on your guest. In order to run it, you will also need to install the Microsoft Visual C++ Redistributable from [https://www.visualstudio.com/downloads/ Microsoft]. Recent versions automatically install a service that starts the daemon on boot. The logs of the host daemon are located at {{ic|%ProgramData%\Looking Glass (host)\looking-glass-host.txt}} on the guest system.<br />
<br />
==== Setting up the null video device ====<br />
<br />
(Retrieved from: https://looking-glass.io/docs/stable/install/#spice-server)<br />
<br />
If you would like to use Spice to give you keyboard and mouse input along with clipboard sync support, make sure you have a {{ic|1=<graphics type='spice'>}} device, then:<br />
<br />
* Find your {{ic|<video>}} device, and set {{ic|1=<model type='none'/>}}<br />
* If you cannot find it, make sure you have a {{ic|<graphics>}} device, save and edit again<br />
<br />
==== Getting a client ====<br />
<br />
Looking glass client can be installed from AUR using {{AUR|looking-glass}} or {{AUR|looking-glass-git}} packages.<br />
<br />
You can start it once the virtual machine is set up and running:<br />
<br />
$ looking-glass-client<br />
<br />
If you do not want to use Spice to control the guest mouse and keyboard you can disable the Spice server.<br />
<br />
$ looking-glass-client -s<br />
<br />
Additionally, you may want to start the Looking Glass client in full screen; otherwise, the image may be scaled down, resulting in poor image fidelity.<br />
<br />
$ looking-glass-client -F<br />
<br />
Launch with the {{ic|--help}} option for further information.<br />
<br />
==== Additional information ====<br />
<br />
Refer to the [https://looking-glass.io/docs/ upstream documentation] for further details.<br />
<br />
=== Swap peripherals to and from the Host ===<br />
<br />
Looking Glass includes a Spice client in order to control mouse movement on the Windows guest. However, this may have too much latency for certain applications, such as gaming. An alternative method is passing through specific USB devices for minimal latency. This allows switching the devices between host and guest.<br />
<br />
First, create an .xml file for each device you wish to pass through, which libvirt will use to identify the device.<br />
<br />
{{hc|~/.VFIOinput/input_1.xml|2=<br />
<hostdev mode='subsystem' type='usb' managed='no'><br />
<source><br />
<vendor id='0x[Before Colon]'/><br />
<product id='0x[After Colon]'/><br />
</source><br />
</hostdev><br />
}}<br />
<br />
Replace [Before Colon] and [After Colon] with the respective parts of the ID shown by {{ic|lsusb}} for the device you want to pass through.<br />
<br />
For instance, my mouse is {{ic|Bus 005 Device 002: ID 1532:0037 Razer USA, Ltd}}, so I would replace the {{ic|vendor id}} with 1532 and the {{ic|product id}} with 0037.<br />
<br />
Repeat this process for any additional USB devices you want to pass-through. If your mouse / keyboard has multiple entries in {{ic|lsusb}}, perhaps if it is wireless, then create additional xml files for each.<br />
<br />
{{Note|Do not forget to change the path & name of the script(s) above and below to match your user and specific system.}}<br />
<br />
Next, a shell script is needed to tell libvirt to attach or detach the USB devices to and from the guest.<br />
<br />
{{hc|~/.VFIOinput/input_attach.sh|2=<br />
#!/bin/sh<br />
<br />
virsh attach-device [VirtualMachine-Name] [USBdevice]<br />
}}<br />
<br />
Replace [VirtualMachine-Name] with the name of your virtual machine, which can be seen under virt-manager. Additionally, replace [USBdevice] with the '''full''' path to the .xml file for the device you wish to pass through. Add additional lines for each extra device. For example, here is my script:<br />
<br />
{{hc|~/.VFIOinput/input_attach.sh|2=<br />
#!/bin/sh<br />
<br />
virsh attach-device win10 /home/$USER/.VFIOinput/input_mouse.xml<br />
virsh attach-device win10 /home/$USER/.VFIOinput/input_keyboard.xml<br />
}}<br />
<br />
Next, duplicate the script file and replace {{ic|attach-device}} with {{ic|detach-device}}. Ensure both scripts are [[executable]].<br />
<br />
These two script files can now be executed to attach your USB devices to the guest virtual machine or detach them back to the host. It is important to note that they may need to be executed as root. To run the scripts from the Windows virtual machine, one possibility is using [[PuTTY]] to [[SSH]] into the host and execute them. On Windows, PuTTY comes with plink.exe, which can execute single commands over SSH in the background and then log out, instead of opening an SSH terminal.<br />
<br />
{{hc|detach_devices.bat|2=<br />
"C:\Program Files\PuTTY\plink.exe" root@$HOST_IP -pw $ROOTPASSWORD /home/$USER/.VFIOinput/input_detach.sh<br />
}}<br />
<br />
Replace {{ic|$HOST_IP}} with the host [[Network configuration#IP addresses|IP address]] and {{ic|$ROOTPASSWORD}} with the root password.<br />
<br />
{{warning|This method is insecure if somebody has access to your virtual machine, since they could open the file and read your password. It is advisable to use [[SSH keys]] instead!}}<br />
<br />
You may also want to execute the script files using key bindings. On Windows one option is [https://autohotkey.com/ AutoHotkey], and on the host [[Xbindkeys]]. Because the scripts need to run as root, you may also need to use [[Polkit]] or [[Sudo]], both of which can authorize specific executables to run as root without a password, as sketched below.<br />
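<br />
For instance, with sudo, a drop-in like the following (a sketch only; substitute your actual username and the script paths from above, and create it with {{ic|visudo -f /etc/sudoers.d/vfioinput}}) allows the two scripts to run as root without a password:<br />
<br />
{{hc|/etc/sudoers.d/vfioinput|2=<br />
''user'' ALL=(root) NOPASSWD: /home/''user''/.VFIOinput/input_attach.sh, /home/''user''/.VFIOinput/input_detach.sh<br />
}}<br />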
<br />
=== Bypassing the IOMMU groups (ACS override patch) ===<br />
<br />
If you find your PCI devices grouped among others that you do not wish to pass through, you may be able to separate them using Alex Williamson's ACS override patch. Make sure you understand [https://vfio.blogspot.com/2014/08/iommu-groups-inside-and-out.html the potential risk] of doing so.<br />
<br />
You will need a kernel with the patch applied. The easiest way to acquire one is through the {{pkg|linux-zen}} or {{AUR|linux-vfio}} package.<br />
<br />
In addition, the ACS override patch needs to be enabled with kernel command line options. The patch file adds the following documentation:<br />
<br />
pcie_acs_override =<br />
[PCIE] Override missing PCIe ACS support for:<br />
downstream<br />
All downstream ports - full ACS capabilties<br />
multifunction<br />
All multifunction devices - multifunction ACS subset<br />
id:nnnn:nnnn<br />
Specfic device - full ACS capabilities<br />
Specified as vid:did (vendor/device ID) in hex<br />
<br />
The option {{ic|1=pcie_acs_override=downstream,multifunction}} should break up as many devices as possible.<br />
<br />
After installation and configuration, reconfigure your [[Kernel parameters|bootloader kernel parameters]] to load the new kernel with the {{ic|1=pcie_acs_override=}} option enabled.<br />
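<br />
For example, if you use [[GRUB]], the option can be appended to the existing kernel command line (a sketch; keep your other parameters) and the GRUB configuration regenerated afterwards:<br />
<br />
{{hc|/etc/default/grub|2=<br />
GRUB_CMDLINE_LINUX_DEFAULT="... pcie_acs_override=downstream,multifunction"<br />
}}<br />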
<br />
== Plain QEMU without libvirt ==<br />
<br />
Instead of setting up a virtual machine with the help of libvirt, plain QEMU commands with custom parameters can be used for running the virtual machine intended to be used with PCI passthrough. This is desirable for some use cases like scripted setups, where the flexibility for usage with other scripts is needed.<br />
<br />
To achieve this after [[#Setting up IOMMU]] and [[#Isolating the GPU]], follow the [[QEMU]] article to setup the virtualized environment, [[QEMU#Enabling KVM|enable KVM]] on it and use the flag {{ic|1=-device vfio-pci,host=07:00.0}} replacing the identifier (07:00.0) with your actual device's ID that you used for the GPU isolation earlier.<br />
<br />
For utilizing the OVMF firmware, make sure the {{Pkg|edk2-ovmf}} package is installed, copy the UEFI variables from {{ic|/usr/share/edk2-ovmf/x64/OVMF_VARS.fd}} to a temporary location like {{ic|/tmp/MY_VARS.fd}} and finally specify the OVMF paths by appending the following parameters to the QEMU command (order matters):<br />
<br />
* {{ic|1=-drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2-ovmf/x64/OVMF_CODE.fd}} for the actual OVMF firmware binary, note the readonly option<br />
* {{ic|1=-drive if=pflash,format=raw,file=/tmp/MY_VARS.fd}} for the variables<br />
<br />
{{Note|<br />
* Make sure that {{ic|1=OVMF_CODE.fd}} is given as a command line parameter before {{ic|1=MY_VARS.fd}}. The boot sequence will fail otherwise.<br />
* QEMU's default SeaBIOS can be used instead of OVMF, but it is not recommended as it can cause issues with passthrough setups.<br />
}}<br />
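<br />
Putting the pieces together, a minimal sketch of such an invocation might look like the following (the machine type, memory size and device address are placeholders to be replaced with your own values):<br />
<br />
$ qemu-system-x86_64 \<br />
  -enable-kvm \<br />
  -machine q35 \<br />
  -m 8G \<br />
  -drive if=pflash,format=raw,readonly=on,file=/usr/share/edk2-ovmf/x64/OVMF_CODE.fd \<br />
  -drive if=pflash,format=raw,file=/tmp/MY_VARS.fd \<br />
  -device vfio-pci,host=07:00.0<br />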
<br />
It is recommended to study the QEMU article for ways to enhance the performance by using the [[QEMU#Installing virtio drivers|virtio drivers]] and other further configurations for the setup.<br />
<br />
You might also have to use the {{ic|1=-cpu host,kvm=off}} parameter to forward the host's CPU model info to the virtual machine and fool the virtualization detection used by Nvidia's and possibly other manufacturers' device drivers that try to block full hardware usage inside a virtualized system.<br />
<br />
== Passing through other devices ==<br />
<br />
=== USB controller ===<br />
<br />
If your motherboard has multiple USB controllers mapped to multiple groups, it is possible to pass those instead of USB devices. Passing an entire controller instead of an individual USB device provides the following advantages:<br />
<br />
* If a device disconnects or changes ID over the course of a given operation (such as a phone undergoing an update), the virtual machine will not suddenly stop seeing it.<br />
* Any USB port managed by this controller is directly handled by the virtual machine and can have its devices unplugged, replugged and changed without having to notify the hypervisor.<br />
* Libvirt will not complain if one of the USB devices you usually pass to the guest is missing when starting the virtual machine.<br />
<br />
Unlike with GPUs, drivers for most USB controllers do not require any specific configuration to work on a virtual machine and control can normally be passed back and forth between the host and guest systems with no side effects.<br />
<br />
{{Warning|Make sure your USB controller supports resetting: [[#Passing through a device that does not support resetting]]}}<br />
<br />
You can find out which USB devices correspond to which controller and how various ports and devices are assigned to each one of them using this command:<br />
<br />
{{hc|1=$ for usb_ctrl in /sys/bus/pci/devices/*/usb*; do pci_path=${usb_ctrl%/*}; iommu_group=$(readlink $pci_path/iommu_group); echo "Bus $(cat $usb_ctrl/busnum) --> ${pci_path##*/} (IOMMU group ${iommu_group##*/})"; lsusb -s ${usb_ctrl#*/usb}:; echo; done|2=<br />
Bus 1 --> 0000:00:1a.0 (IOMMU group 4)<br />
Bus 001 Device 004: ID 04f2:b217 Chicony Electronics Co., Ltd Lenovo Integrated Camera (0.3MP)<br />
Bus 001 Device 007: ID 0a5c:21e6 Broadcom Corp. BCM20702 Bluetooth 4.0 [ThinkPad]<br />
Bus 001 Device 008: ID 0781:5530 SanDisk Corp. Cruzer<br />
Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub<br />
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub<br />
<br />
Bus 2 --> 0000:00:1d.0 (IOMMU group 9)<br />
Bus 002 Device 006: ID 0451:e012 Texas Instruments, Inc. TI-Nspire Calculator<br />
Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub<br />
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub<br />
}}<br />
<br />
This laptop has 3 USB ports managed by 2 USB controllers, each with their own IOMMU group. In this example, Bus 001 manages a single USB port (with a SanDisk USB pendrive plugged into it so it appears on the list), but also a number of internal devices, such as the internal webcam and the bluetooth card. Bus 002, on the other hand, does not appear to manage anything except for the calculator that is plugged into it. The third port is empty, which is why it does not show up on the list, but is actually managed by Bus 002.<br />
<br />
Once you have identified which controller manages which ports by plugging various devices into them and decided which one you want to passthrough, simply add it to the list of PCI host devices controlled by the virtual machine in your guest configuration. No other configuration should be needed.<br />
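<br />
For example, to pass the second controller from the output above ({{ic|0000:00:1d.0}}), the corresponding libvirt host device entry could look like this (a sketch; adjust the address to match your own controller):<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<br />
...<br />
<devices><br />
...<br />
<hostdev mode='subsystem' type='pci' managed='yes'><br />
<source><br />
<address domain='0x0000' bus='0x00' slot='0x1d' function='0x0'/><br />
</source><br />
</hostdev><br />
...<br />
</devices><br />
...<br />
}}<br />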
<br />
{{Note|If your USB controller does not support resetting, is not in an isolated group, or is otherwise unable to be passed through then it may still be possible to accomplish similar results through [[udev]] rules. See [https://github.com/olavmrk/usb-libvirt-hotplug] which allows any device connected to specified USB ports to be automatically attached to a virtual machine.}}<br />
<br />
=== Passing audio from virtual machine to host via PulseAudio ===<br />
<br />
It is possible to route the virtual machine's audio to the host as an application using libvirt. This has the advantage of multiple audio streams being routable to one host output, and working with audio output devices that do not support passthrough. [[PulseAudio]] is required for this to work.<br />
<br />
First, remove the comment from the {{ic|1=#user = ""}} line. Then add your username in the quotations. This tells QEMU which user's PulseAudio instance to route the audio through.<br />
<br />
{{hc|/etc/libvirt/qemu.conf|2=<br />
user = "example"<br />
}}<br />
<br />
An emulated audio setup consists of two components: An emulated sound device exposed to the guest and an audio backend connecting the sound device to the host's PulseAudio.<br />
<br />
Of the emulated sound devices available, two are of main interest: ICH9 and usb-audio. ICH9 features both output and input but is limited to stereo. usb-audio only features audio output but supports up to 6 channels in 5.1 configuration. For ICH9 remove any pre-existing audio backend in the {{ic|<devices>}} section and add:<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<nowiki/><br />
<sound model='ich9'><br />
<codec type='micro'/><br />
<audio id='1'/><br />
</sound><br />
<audio id='1' type='pulseaudio' serverName='/run/user/1000/pulse/native'/><br />
}}<br />
<br />
Note the matching {{ic|id}} elements. The example above assumes a single-user system with user ID 1000. Use the {{ic|id}} command to find the correct ID. You can also use the {{ic|/tmp}} directory if you have multiple users accessing PulseAudio:<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<nowiki/><br />
<audio id='1' type='pulseaudio' serverName='unix:/tmp/pulse-socket'/><br />
}}<br />
<br />
If you get crackling or distorted sound, try experimenting with some latency settings. The following example uses 20000 microseconds:<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<nowiki/><br />
<audio id="1" type="pulseaudio" serverName="/run/user/1000/pulse/native"><br />
<input latency="20000"/><br />
<output latency="20000"/><br />
</audio><br />
}}<br />
<br />
You can also try disabling the software mixer included in QEMU. This should, in theory, be more efficient and allow for lower latencies since mixing will then take place on your host only:<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<nowiki/><br />
<audio id="1" type="pulseaudio" serverName="/run/user/1000/pulse/native"><br />
<input mixingEngine="no"/><br />
<output mixingEngine="no"/><br />
</audio><br />
}}<br />
<br />
For usb-audio, the corresponding elements read<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<nowiki/><br />
<sound model='usb'><br />
<audio id='1'/><br />
</sound><br />
<audio id='1' type='pulseaudio' serverName='/run/user/1000/pulse/native'/><br />
}}<br />
<br />
However, if a 5.1 configuration is required the sound device needs to be configured via QEMU command line arguments:<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<nowiki/><br />
</devices><br />
<qemu:commandline><br />
<qemu:arg value='-device'/><br />
<qemu:arg value='usb-audio,id=sound0,audiodev=audio1,multi=on'/><br />
</qemu:commandline><br />
</domain><br />
}}<br />
<br />
The {{ic|audiodev}} tag has to be set to match the audio backend's {{ic|id}} element. {{ic|1=id='1'|2==}} corresponds to {{ic|audio1}} and so on. <br />
<br />
{{Note|1=<nowiki/><br />
* You can have multiple audio backends, by simply specifying {{ic|<audio>}}/{{ic|-audiodev}} multiple times in your XML and by assigning them different ids. This can be useful for a use case of having two identical backends. With PulseAudio each backend is a separate stream and can be routed to different output devices on the host (using a pulse mixer like {{Pkg|pavucontrol}} or {{Pkg|pulsemixer}}).<br />
* USB 3 emulation is needed in Libvirt/QEMU to enable the usb-audio.<br />
* It is recommended to enable MSI interrupts with a tool such as [https://forums.guru3d.com/threads/windows-line-based-vs-message-signaled-based-interrupts-msi-tool.378044/] on the ICH9 audio device to mitigate any crackling, stuttering, speedup, or no audio at all after virtual machine restart.<br />
* If audio crackling/stuttering/speedup etc. is still present, you may want to adjust parameters such as {{ic|buffer-length}} and {{ic|timer-period}}; more information on these parameters and more can be found in the {{man|1|qemu}} manual.<br />
* Some audio chipsets such as [https://bugzilla.kernel.org/show_bug.cgi?id=195303 Realtek alc1220] may also have issues out of the box so do consider this when using any audio emulation with QEMU.<br />
* Improper pinning or heavy host usage without using [[#Isolating pinned CPUs|isolcpus]] can also influence sound bugs, especially while gaming in a virtual machine.<br />
}}<br />
<br />
=== Passing audio from virtual machine to host via JACK and PipeWire ===<br />
<br />
It is also possible to pass the virtual machine's audio to the host via JACK and PipeWire.<br />
<br />
First, make sure you have a working [[PipeWire]] setup with [[PipeWire#JACK_clients|JACK support]].<br />
<br />
Next, you will need to tell libvirt to run QEMU as your user:<br />
<br />
{{hc|/etc/libvirt/qemu.conf|2=<br />
user = "example"<br />
}}<br />
<br />
As a final preparation, the XML scheme has to be extended to allow passing of environment variables. For this, modify the virtual machine domain configuration<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<br />
<domain type='kvm'><br />
}}<br />
<br />
to<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<br />
<domain type='kvm' xmlns:qemu='<nowiki>http://libvirt.org/schemas/domain/qemu/1.0</nowiki>'><br />
}}<br />
<br />
Then, you can add the actual audio config to your virtual machine:<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<nowiki/><br />
<devices><br />
...<br />
<audio id="1" type="jack"><br />
<input clientName="vm-win10" connectPorts="your-input"/><br />
<output clientName="vm-win10" connectPorts="your-output"/><br />
</audio><br />
</devices><br />
<qemu:commandline><br />
<qemu:env name="PIPEWIRE_RUNTIME_DIR" value="/run/user/1000"/><br />
<qemu:env name="PIPEWIRE_LATENCY" value="512/48000"/><br />
</qemu:commandline><br />
</domain><br />
}}<br />
{{Note|Use a tool like {{Pkg|carla}} to figure out which input and outputs you want.}}<br />
<br />
Note the matching {{ic|id}} elements. The example above assumes a single-user system with user ID 1000. Use the {{ic|id}} command to find the correct ID.<br />
<br />
You might have to play with the {{ic|PIPEWIRE_LATENCY}} values to get to the desired latency without crackling.<br />
<br />
=== Passing audio from virtual machine to host via Scream ===<br />
<br />
It is possible to pass the virtual machine's audio through a bridged network, such as the one provided by libvirt, or by adding an IVSHMEM device to the host, using an application called [https://github.com/duncanthrax/scream Scream]. This section only covers using PulseAudio as a receiver on the host.<br />
See the project page for more details and instructions on other methods.<br />
<br />
==== Using Scream with a bridged network ====<br />
<br />
{{Note|<br />
* This is the ''preferred'' way to use this, although results may vary per user<br />
* It is recommended to use the [[#Virtio network]] adapter while using Scream; other virtual adapters provided by QEMU such as '''e1000e''' may lead to poor performance<br />
}}<br />
<br />
To use Scream via your network, find your bridge name via {{ic|1=ip a}}; in most cases it will be called '''br0''' or '''virbr0'''. Below is an example of the command needed to start the Scream receiver:<br />
<br />
$ scream -o pulse -i virbr0 &<br />
<br />
{{Warning| This will not work with a '''macvtap bridge''', as that does not allow host-to-guest communication. Also make sure you have the proper firewall ports open so that the host can communicate with the virtual machine.}}<br />
<br />
==== Adding the IVSHMEM device to use Scream with IVSHMEM ====<br />
<br />
With the virtual machine turned off, edit the machine configuration<br />
<br />
{{hc|$ virsh edit ''vmname''|2=<br />
...<br />
<devices><br />
...<br />
<shmem name='scream-ivshmem'><br />
<model type='ivshmem-plain'/><br />
<size unit='M'>2</size><br />
</shmem><br />
</devices><br />
...<br />
}}<br />
<br />
In the above configuration, the size of the IVSHMEM device is 2MB (the recommended amount). Change this as needed.<br />
<br />
Now refer to [[#Adding IVSHMEM Device to virtual machines]] to configure the host to create the shared memory file on boot, replacing {{ic|looking-glass}} with {{ic|scream-ivshmem}}.<br />
<br />
===== Configuring the Windows guest for IVSHMEM =====<br />
<br />
The correct driver must be installed for the IVSHMEM device on the guest. <br />
See [[#Installing the IVSHMEM Host to Windows guest]]. Ignore the part about {{ic|looking-glass-host}}.<br />
<br />
Install the [https://github.com/duncanthrax/scream/releases Scream] virtual audio driver on the guest. <br />
If you have secure boot enabled for your virtual machine, you may need to disable it. <br />
<br />
Using the registry editor, set the DWORD {{ic|HKLM\SYSTEM\CurrentControlSet\Services\Scream\Options\UseIVSHMEM}} to the size of the IVSHMEM device in MB.<br />
Note that scream identifies its IVSHMEM device using its size, so make sure there is only one device of that size (the suggested default is {{ic|2}} for 2MB).<br />
<br />
Use the following command in an admin CMD shell to create both key and DWORD: {{ic|REG ADD HKLM\SYSTEM\CurrentControlSet\Services\Scream\Options /v UseIVSHMEM /t REG_DWORD /d 2}} ([https://github.com/duncanthrax/scream sourced from scream on Github])<br />
<br />
====== Configuring the host ======<br />
<br />
Install {{AUR|scream}}.<br />
<br />
Create a [[systemd/User|systemd user service]] to control the receiver:<br />
<br />
{{hc|~/.config/systemd/user/scream-ivshmem-pulse.service|2=<br />
[Unit]<br />
Description=Scream IVSHMEM pulse receiver<br />
After=pulseaudio.service<br />
Wants=pulseaudio.service<br />
<br />
[Service]<br />
Type=simple<br />
ExecStartPre=/usr/bin/truncate -s 0 /dev/shm/scream-ivshmem<br />
ExecStartPre=/usr/bin/dd if=/dev/zero of=/dev/shm/scream-ivshmem bs=1M count=2<br />
ExecStart=/usr/bin/scream -m /dev/shm/scream-ivshmem<br />
<br />
[Install]<br />
WantedBy=default.target<br />
}}<br />
<br />
Edit {{ic|1=count=2}} with the size of the IVSHMEM device in MiB.<br />
<br />
{{Tip|If you are using [[PipeWire]], replace {{ic|pulseaudio.service}} with {{ic|pipewire-pulse.service}}.}}<br />
<br />
Now [[start]] the {{ic|scream-ivshmem-pulse.service}} [[user unit]].<br />
<br />
To have it automatically start on next login, [[enable]] the [[user unit]].<br />
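<br />
For example, assuming the unit file shown above, both steps can be combined into a single standard ''systemctl'' invocation:<br />
<br />
 $ systemctl --user enable --now scream-ivshmem-pulse.service<br />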
<br />
=== Physical disk/partition ===<br />
<br />
Disk images, raw and qcow2 especially, can have noticeable overhead for heavy I/O. A whole disk or a partition may be used directly to bypass the filesystem and improve I/O performance. If you wish to dual boot the guest OS natively, you need to pass the entire disk without any partitioning. It is suggested to use {{ic|/dev/disk/by-id/}} paths to refer to the disk, since {{ic|/dev/sdX}} entries can change between boots. To find out which disk/partition is associated with the one you would like to pass:<br />
<br />
{{hc|$ ls -l /dev/disk/by-id/*|<br />
/dev/disk/by-id/ata-ST1000LM002-9VQ14L_Z0501SZ9 -> ../../sdd<br />
}}<br />
<br />
See [[#Virtio disk]] for how to add these with libvirt XML. You can also add the disk with Virt-Manager's '''Add Hardware''' menu, then type the disk you want in the '''Select or create custom storage''' box, e.g. '''/dev/disk/by-id/ata-ST1000LM002-9VQ14L_Z0501SZ9'''.<br />
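<br />
For reference, below is a minimal sketch of what the resulting libvirt disk definition might look like; the device path, target name and driver options are examples only and should be adjusted to your setup:<br />
<br />
{{bc|1=<br />
<disk type='block' device='disk'><br />
  <driver name='qemu' type='raw' cache='none' io='native'/><br />
  <source dev='/dev/disk/by-id/ata-ST1000LM002-9VQ14L_Z0501SZ9'/><br />
  <target dev='vda' bus='virtio'/><br />
</disk><br />
}}<br />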
<br />
=== Gotchas ===<br />
<br />
==== Passing through a device that does not support resetting ====<br />
<br />
When the virtual machine shuts down, all devices used by the guest are deinitialized by its OS in preparation for shutdown. In this state, those devices are no longer functional and must then be power-cycled before they can resume normal operation. Linux can handle this power-cycling on its own, but when a device has no known reset methods, it remains in this disabled state and becomes unavailable. Since Libvirt and Qemu both expect all host PCI devices to be ready to reattach to the host before completely stopping the virtual machine, when encountering a device that will not reset, they will hang in a "Shutting down" state where they will not be able to be restarted until the host system has been rebooted. It is therefore recommended to only pass through PCI devices which the kernel is able to reset, as evidenced by the presence of a {{ic|reset}} file in the PCI device sysfs node, such as {{ic|/sys/bus/pci/devices/0000:00:1a.0/reset}}.<br />
<br />
The following bash command shows which devices can and cannot be reset.<br />
<br />
{{hc|<nowiki>for iommu_group in $(find /sys/kernel/iommu_groups/ -maxdepth 1 -mindepth 1 -type d);do echo "IOMMU group $(basename "$iommu_group")"; for device in $(\ls -1 "$iommu_group"/devices/); do if [[ -e "$iommu_group"/devices/"$device"/reset ]]; then echo -n "[RESET]"; fi; echo -n $'\t';lspci -nns "$device"; done; done</nowiki>|<br />
IOMMU group 0<br />
00:00.0 Host bridge [0600]: Intel Corporation Xeon E3-1200 v2/Ivy Bridge DRAM Controller [8086:0158] (rev 09)<br />
IOMMU group 1<br />
00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v2/3rd Gen Core processor PCI Express Root Port [8086:0151] (rev 09)<br />
01:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK208 [GeForce GT 720] [10de:1288] (rev a1)<br />
01:00.1 Audio device [0403]: NVIDIA Corporation GK208 HDMI/DP Audio Controller [10de:0e0f] (rev a1)<br />
IOMMU group 2<br />
00:14.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB xHCI Host Controller [8086:1e31] (rev 04)<br />
IOMMU group 4<br />
[RESET] 00:1a.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #2 [8086:1e2d] (rev 04)<br />
IOMMU group 5<br />
[RESET] 00:1b.0 Audio device [0403]: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller [8086:1e20] (rev 04)<br />
IOMMU group 10<br />
[RESET] 00:1d.0 USB controller [0c03]: Intel Corporation 7 Series/C210 Series Chipset Family USB Enhanced Host Controller #1 [8086:1e26] (rev 04)<br />
IOMMU group 13<br />
06:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)<br />
06:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)<br />
}}<br />
<br />
This signals that the xHCI USB controller in 00:14.0 cannot be reset and will therefore stop the virtual machine from shutting down properly, while the integrated sound card in 00:1b.0 and the other two controllers in 00:1a.0 and 00:1d.0 do not share this problem and can be passed without issue.<br />
<br />
== Complete setups and examples ==<br />
<br />
For many reasons users may seek to see [[PCI_passthrough_via_OVMF/Examples|complete passthrough setup examples]].<br />
<br />
These examples offer a supplement to existing hardware compatibility lists. Additionally, if you have trouble configuring a certain mechanism in your setup, you might find these examples very valuable. Users there have described their setups in detail, and some have provided examples of their configuration files as well. <br />
<br />
We encourage those who successfully build their system from this resource to help improve it by contributing their builds. Due to the many different hardware manufacturers involved, the lack of sufficient documentation, and other issues inherent to this process, community contributions are necessary.<br />
<br />
== Troubleshooting ==<br />
<br />
If your issue is not mentioned below, you may want to browse [[QEMU#Troubleshooting]].<br />
<br />
=== QEMU 4.0: Unable to load graphics drivers/BSOD/Graphics stutter after driver install using Q35 ===<br />
<br />
Starting with QEMU 4.0, the Q35 machine type changes the default {{ic|kernel_irqchip}} from {{ic|off}} to {{ic|split}}, which breaks some guest devices, such as NVIDIA graphics (the driver fails to load, black screen, code 43, or graphics stutter, usually when moving the mouse). Switch to full KVM mode instead by adding {{ic|1=<ioapic driver='kvm'/>}} under libvirt's {{ic|<features>}} tag in your virtual machine configuration, or by adding {{ic|1=kernel_irqchip=on}} to the {{ic|-machine}} QEMU argument.<br />
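<br />
For reference, a minimal sketch of the libvirt variant; only the {{ic|<ioapic>}} line is added, the rest of your existing {{ic|<features>}} block stays unchanged:<br />
<br />
{{bc|1=<br />
<features><br />
  ...<br />
  <ioapic driver='kvm'/><br />
</features><br />
}}<br />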
<br />
=== QEMU 5.0: host-passthrough with kernel version 5.5 to 5.8.1 when using Zen 2 processors: Windows 10 BSOD loop 'KERNEL SECURITY CHECK FAILURE' ===<br />
<br />
{{Note|As of kernel version 5.8.2, disabling STIBP is not required anymore.}}<br />
<br />
Starting with QEMU 5.0, host-passthrough virtual machines running on Zen 2 with host kernels between 5.5 and 5.8.1 will hit a BSOD loop of 'KERNEL SECURITY CHECK FAILURE'. This can be fixed by either updating to kernel version 5.8.2 or higher, or by disabling STIBP:<br />
<cpu mode='host-passthrough' ...><br />
...<br />
<feature policy='disable' name='amd-stibp'/><br />
...<br />
</cpu><br />
This requires libvirt 6.5 or higher. On older versions, several workarounds exist:<br />
* Switch CPU mode from {{ic|host-passthrough}} to {{ic|host-model}}. This only works on libvirt 6.4 or lower.<br />
* Manually patch {{Pkg|qemu-desktop}} in order to revert [https://github.com/qemu/qemu/commit/143c30d4d346831a09e59e9af45afdca0331e819 this] commit.<br />
* On the QEMU command line, add {{ic|1=amd-stibp=off}} to the cpu flags string, as shown in the sketch below. This can also be invoked through libvirt via a {{ic|<qemu:commandline>}} entry.<br />
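<br />
A minimal sketch of the plain QEMU command-line variant; the {{ic|host}} CPU model is only an example base to which the flag is appended:<br />
<br />
 -cpu host,amd-stibp=off<br />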
<br />
=== "Error 43: Driver failed to load" with mobile (Optimus/max-q) nvidia GPUs ===<br />
<br />
This error occurs because the NVIDIA driver checks the status of the power supply; if no battery is present, the driver does not work. Neither libvirt nor QEMU provides a way to simulate a battery by default. This might also result in a reduced screen resolution and the NVIDIA Desktop Manager refusing to load when right-clicking the desktop, saying it requires Windows 10, a compatible GPU and the NVIDIA graphics driver.<br />
<br />
You can, however, create and add a custom ACPI table file to the virtual machine which does the job.<br />
<br />
First, create the custom ACPI table file by decoding the following base64 string [https://base64.guru/converter/decode/file here] and saving the resulting file as {{ic|SSDT1.dat}}:<br />
{{bc|1=<br />
U1NEVKEAAAAB9EJPQ0hTAEJYUENTU0RUAQAAAElOVEwYEBkgoA8AFVwuX1NCX1BDSTAGABBMBi5f<br />
U0JfUENJMFuCTwVCQVQwCF9ISUQMQdAMCghfVUlEABQJX1NUQQCkCh8UK19CSUYApBIjDQELcBcL<br />
cBcBC9A5C1gCCywBCjwKPA0ADQANTElPTgANABQSX0JTVACkEgoEAAALcBcL0Dk=<br />
}}<br />
<br />
Next, add the resulting file to the main domain of the virtual machine:<br />
{{bc|<nowiki><br />
<domain xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0" type="kvm"><br />
...<br />
<qemu:commandline><br />
<qemu:arg value="-acpitable"/><br />
<qemu:arg value="file=/path/to/your/SSDT1.dat"/><br />
</qemu:commandline><br />
</domain><br />
</nowiki>}}<br />
<br />
Make sure your XML file has the correct namespace in the {{ic|<domain>}} tag as visible above, otherwise the XML verification will fail.<br />
<br />
[https://www.reddit.com/r/VFIO/comments/ebo2uk/nvidia_geforce_rtx_2060_mobile_success_qemu_ovmf/ Source]<br />
<br />
=== "BAR 3: cannot reserve [mem]" error in dmesg after starting virtual machine ===<br />
<br />
{{Expansion|This error is actually related to the boot_vgs issue and should be merged together with everything else concerning GPU ROMs.|section=UEFI (OVMF) Compatibility in VBIOS}}<br />
<br />
With respect to [https://www.linuxquestions.org/questions/linux-kernel-70/kernel-fails-to-assign-memory-to-pcie-device-4175487043/ this article]:<br />
<br />
If you still have code 43, check ''dmesg'' for memory reservation errors after starting up your virtual machine. If you see something similar to the following, this could be the cause:<br />
<br />
vfio-pci 0000:09:00.0: BAR 3: cannot reserve [mem 0xf0000000-0xf1ffffff 64bit pref]<br />
<br />
Find out which PCI bridge your graphics card is connected to. The following command will show the actual hierarchy of devices:<br />
<br />
$ lspci -t<br />
<br />
Before starting the virtual machine, run the following commands, replacing the IDs with the actual values from the previous output:<br />
<br />
# echo 1 > /sys/bus/pci/devices/0000\:00\:03.1/remove<br />
# echo 1 > /sys/bus/pci/rescan<br />
<br />
{{Note|Probably setting [[kernel parameter]] {{ic|1=video=efifb:off}} is required as well. [https://pve.proxmox.com/wiki/Pci_passthrough#BAR_3:_can.27t_reserve_.5Bmem.5D_error Source]}}<br />
<br />
In addition try adding kernel parameter {{ic|1=pci=realloc}} which also [https://github.com/Dunedan/mbp-2016-linux/issues/60#issuecomment-396311301 helps with hotplugging issues].<br />
<br />
=== UEFI (OVMF) compatibility in VBIOS ===<br />
<br />
{{Remove|Flashing you guest GPU for the purpose of a GPU passthrough is '''never''' good advice. A full section should be dedicated to VBIOS compatibility.|section= UEFI (OVMF) Compatibility in VBIOS}}<br />
<br />
With respect to [https://pve.proxmox.com/wiki/Pci_passthrough#How_to_known_if_card_is_UEFI_.28ovmf.29_compatible this article]:<br />
<br />
Error 43 can be caused by a GPU VBIOS without UEFI support. To check whether your VBIOS supports it, you will have to use {{ic|rom-parser}}:<br />
<br />
$ git clone https://github.com/awilliam/rom-parser<br />
$ cd rom-parser && make<br />
<br />
Dump the GPU VBIOS:<br />
<br />
# echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom<br />
# cat /sys/bus/pci/devices/0000:01:00.0/rom > /tmp/image.rom<br />
# echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom<br />
<br />
And test it for compatibility:<br />
<br />
{{hc|$ ./rom-parser /tmp/image.rom|<br />
Valid ROM signature found @600h, PCIR offset 190h<br />
PCIR: type 0 (x86 PC-AT), vendor: 10de, device: 1184, class: 030000<br />
PCIR: revision 0, vendor revision: 1<br />
Valid ROM signature found @fa00h, PCIR offset 1ch<br />
PCIR: type 3 (EFI), vendor: 10de, device: 1184, class: 030000<br />
PCIR: revision 3, vendor revision: 0<br />
EFI: Signature Valid, Subsystem: Boot, Machine: X64<br />
Last image<br />
}}<br />
<br />
To be UEFI compatible, you need a "type 3 (EFI)" in the result. If it is not there, try updating your GPU VBIOS. GPU manufacturers often share VBIOS upgrades on their support pages. A large database of known compatible and working VBIOSes (along with their UEFI compatibility status!) is available on [https://www.techpowerup.com/vgabios/ TechPowerUp].<br />
<br />
Updated VBIOS can be used in the virtual machine without flashing. To load it in QEMU:<br />
<br />
-device vfio-pci,host=07:00.0,......,romfile=/path/to/your/gpu/bios.bin \<br />
<br />
And in libvirt:<br />
<br />
{{bc|1=<br />
<hostdev><br />
...<br />
<rom file='/path/to/your/gpu/bios.bin'/><br />
...<br />
</hostdev><br />
}}<br />
<br />
One should compare VBIOS versions between host and guest systems using [https://www.techpowerup.com/download/nvidia-nvflash/ nvflash] (Linux versions under ''Show more versions'') or <br />
[https://www.techpowerup.com/download/techpowerup-gpu-z/ GPU-Z] (in Windows guest). To check the currently loaded VBIOS:<br />
<br />
{{hc|$ ./nvflash --version|<br />
...<br />
Version : 80.04.XX.00.97<br />
...<br />
UEFI Support : No<br />
UEFI Version : N/A<br />
UEFI Variant Id : N/A ( Unknown )<br />
UEFI Signer(s) : Unsigned<br />
...<br />
}}<br />
<br />
And to check a given VBIOS file:<br />
<br />
{{hc|$ ./nvflash --version NV299MH.rom|<br />
...<br />
Version : 80.04.XX.00.95<br />
...<br />
UEFI Support : Yes<br />
UEFI Version : 0x10022 (Jul 2 2013 @ 16377903 )<br />
UEFI Variant Id : 0x0000000000000004 ( GK1xx )<br />
UEFI Signer(s) : Microsoft Corporation UEFI CA 2011<br />
...<br />
}}<br />
<br />
If the external ROM did not work as it should in the guest, you will have to flash the newer VBIOS image to the GPU. In some cases it is possible to create your own VBIOS image with UEFI support using the [https://www.win-raid.com/t892f16-AMD-and-Nvidia-GOP-update-No-requests-DIY.html GOPUpd] tool, however this is risky and may result in a bricked GPU.<br />
<br />
{{Warning|Failure during flashing may "brick" your GPU - recovery may be possible, but rarely easy and often requires additional hardware. '''DO NOT''' flash VBIOS images for other GPU models (different boards may use different VBIOSes, clocks, fan configuration). If it breaks, you get to keep all the pieces.}}<br />
<br />
In order to avoid irreparable damage to your graphics adapter, it is necessary to unload the NVIDIA kernel driver first:<br />
<br />
# modprobe -r nvidia_modeset nvidia <br />
<br />
Flashing the VBIOS can be done with:<br />
<br />
# ./nvflash romfile.bin<br />
<br />
{{Warning|'''DO NOT''' interrupt the flashing process, even if it looks like it is stuck. Flashing should take about a minute on most GPUs, but may take longer.}}<br />
<br />
=== Slowed down audio pumped through HDMI on the video card ===<br />
<br />
For some users, the virtual machine's audio slows down/starts stuttering/becomes demonic after a while when it is pumped through HDMI on the video card. This usually also slows down graphics.<br />
A possible solution consists of enabling MSI (Message Signaled Interrupts) instead of the default (Line-Based Interrupts).<br />
<br />
In order to check whether MSI is supported or enabled, run the following command as root:<br />
<br />
# lspci -vs $device | grep 'MSI:'<br />
<br />
where {{ic|$device}} is the card's address (e.g. {{ic|01:00.0}}).<br />
<br />
The output should be similar to:<br />
<br />
Capabilities: [60] MSI: Enable'''-''' Count=1/1 Maskable- 64bit+<br />
<br />
A {{ic|-}} after {{ic|Enable}} means MSI is supported, but not used by the virtual machine, while a {{ic|+}} says that the virtual machine is using it.<br />
<br />
The procedure to enable it is quite complex; instructions and an overview of the setting can be found [https://forums.guru3d.com/showthread.php?t=378044 here].<br />
<br />
On a Linux guest, you can use ''modinfo'' to check whether there is an option to enable MSI (for example, run {{ic|modinfo snd_hda_intel}} and look for an ''msi'' parameter). If there is, enable it by adding the relevant option to a custom modprobe file, e.g. by inserting {{ic|1=options snd-hda-intel enable_msi=1}} into {{ic|/etc/modprobe.d/snd-hda-intel.conf}}.<br />
<br />
Other hints can be found on the [https://lime-technology.com/wiki/index.php/UnRAID_6/VM_Guest_Support#Enable_MSI_for_Interrupts_to_Fix_HDMI_Audio_Support lime-technology's wiki], or on this article on [https://vfio.blogspot.it/2014/09/vfio-interrupts-and-how-to-coax-windows.html VFIO tips and tricks].<br />
<br />
A UI tool called [https://forums.guru3d.com/threads/windows-line-based-vs-message-signaled-based-interrupts-msi-tool.378044/ MSI Utility (FOSS Version 2)] works with Windows 10 64-bit and simplifies the process.<br />
<br />
In some cases, enabling MSI on function 0 of an NVIDIA card ({{ic|01:00.0 VGA compatible controller: NVIDIA Corporation GM206 [GeForce GTX 960] (rev a1) (prog-if 00 [VGA controller])}}) is not enough; it also has to be enabled on the audio function ({{ic|01:00.1 Audio device: NVIDIA Corporation Device 0fba (rev a1)}}) to fix the issue.<br />
<br />
=== No HDMI audio output on host when intel_iommu is enabled ===<br />
<br />
If after enabling {{ic|intel_iommu}} the HDMI output device of Intel GPU becomes unusable on the host then setting the option {{ic|igfx_off}} (i.e. {{ic|1=intel_iommu=on,igfx_off}}) might bring the audio back, please read [https://www.kernel.org/doc/html/latest/x86/intel-iommu.html#graphics-problems intel-iommu.html] for details about setting {{ic|igfx_off}}.<br />
<br />
=== X does not start after enabling vfio_pci ===<br />
<br />
This is related to the host GPU being detected as a secondary GPU, which causes X to fail/crash when it tries to load a driver for the guest GPU. To circumvent this, a Xorg configuration file specifying the BusID for the host GPU is required. The correct BusID can be acquired from {{ic|lspci -n}} or the Xorg log [https://www.redhat.com/archives/vfio-users/2016-August/msg00025.html]. Note that the value from the ''lspci'' output is hexadecimal and should be converted to decimal in the ''.conf'' file.<br />
<br />
{{hc|/etc/X11/xorg.conf.d/10-intel.conf|<br />
Section "Device"<br />
Identifier "Intel GPU"<br />
Driver "modesetting"<br />
BusID "PCI:0:2:0"<br />
EndSection<br />
}}<br />
<br />
=== Chromium ignores integrated graphics for acceleration ===<br />
<br />
Chromium and friends will try to detect as many GPUs as they can in the system and pick which one is preferred (usually discrete NVIDIA/AMD graphics). It tries to pick a GPU by looking at PCI devices, not OpenGL renderers available in the system - the result is that Chromium may ignore the integrated GPU available for rendering and try to use the dedicated GPU bound to the {{ic|vfio-pci}} driver, which is unusable on the host system regardless of whether a guest virtual machine is running or not. This results in software rendering being used (leading to higher CPU load, which may also result in choppy video playback, scrolling and general lack of smoothness).<br />
<br />
This can be fixed by [[Chromium/Tips and tricks#Forcing specific GPU|explicitly telling Chromium which GPU you want to use]].<br />
<br />
=== Virtual machine only uses one core ===<br />
<br />
For some users, even if IOMMU is enabled and the core count is set to more than 1, the virtual machine still only uses one CPU core and thread. To solve this, enable "Manually set CPU topology" in {{ic|virt-manager}} and set it to the desired number of CPU sockets, cores and threads. Keep in mind that "Threads" refers to the thread count per CPU, not the total count.<br />
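<br />
If you manage the configuration directly with libvirt XML, the equivalent is a {{ic|<topology>}} element inside the {{ic|<cpu>}} block; the following is a minimal sketch with example counts only:<br />
<br />
{{bc|1=<br />
<cpu mode='host-passthrough'><br />
  <topology sockets='1' cores='4' threads='2'/><br />
</cpu><br />
}}<br />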
<br />
=== Passthrough seems to work but no output is displayed ===<br />
<br />
Make sure if you are using virt-manager that UEFI firmware is selected for your virtual machine. Also, make sure you have passed the correct device to the virtual machine.<br />
<br />
=== Host lockup after virtual machine shutdown ===<br />
<br />
This issue seems to primarily affect users running a Windows 10 guest, usually after the virtual machine has been running for a prolonged period of time: the host will experience multiple CPU core lockups (see [https://bbs.archlinux.org/viewtopic.php?id=206050&p=2]). To fix this, try enabling Message Signaled Interrupts on the GPU passed through to the guest. A good guide for how to do this can be found in [https://forums.guru3d.com/threads/windows-line-based-vs-message-signaled-based-interrupts.378044/]. You can also download a Windows application here [https://github.com/TechtonicSoftware/MSIInturruptEnabler] that should make the process easier.<br />
<br />
=== Host lockup if guest is left running during sleep ===<br />
<br />
VFIO-enabled virtual machines tend to become unstable if left running through a sleep/wakeup cycle and have been known to cause the host machine to lockup when an attempt is then made to shut them down. In order to avoid this, one can simply prevent the host from going into sleep while the guest is running using the following libvirt hook script and systemd unit. The hook file needs executable permissions to work.<br />
<br />
{{hc|/etc/libvirt/hooks/qemu|2=<br />
#!/bin/sh<br />
<br />
OBJECT="$1"<br />
OPERATION="$2"<br />
SUBOPERATION="$3"<br />
EXTRA_ARG="$4"<br />
<br />
case "$OPERATION" in<br />
"prepare")<br />
systemctl start libvirt-nosleep@"$OBJECT"<br />
;;<br />
"release")<br />
systemctl stop libvirt-nosleep@"$OBJECT"<br />
;;<br />
esac<br />
}}<br />
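<br />
For example, to give the hook file executable permissions (assuming the path above):<br />
<br />
 # chmod +x /etc/libvirt/hooks/qemu<br />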
<br />
{{hc|/etc/systemd/system/libvirt-nosleep@.service|2=<br />
[Unit]<br />
Description=Preventing sleep while libvirt domain "%i" is running<br />
<br />
[Service]<br />
Type=simple<br />
ExecStart=/usr/bin/systemd-inhibit --what=sleep --why="Libvirt domain \"%i\" is running" --who=%U --mode=block sleep infinity<br />
}}<br />
<br />
=== Cannot boot after upgrading ovmf ===<br />
<br />
If you cannot boot after upgrading from {{Pkg|edk2-ovmf}} version 1:r23112.018432f0ce-1 then you need to remove the old {{ic|*VARS.fd}} file in {{ic|/var/lib/libvirt/qemu/nvram/}}:<br />
<br />
# mv /var/lib/libvirt/qemu/nvram/vmname_VARS.fd /var/lib/libvirt/qemu/nvram/vmname_VARS.fd.old<br />
<br />
See {{Bug|57825}} for further details.<br />
<br />
=== Bluescreen at boot since Windows 10 1803 ===<br />
<br />
Since Windows 10 1803 there is a problem when using {{ic|host-passthrough}} as the CPU model: the machine cannot boot and either boot loops or shows a bluescreen.<br />
You can work around this by running:<br />
<br />
# echo 1 > /sys/module/kvm/parameters/ignore_msrs<br />
<br />
To make it permanent, you can create a modprobe file such as {{ic|/etc/modprobe.d/kvm.conf}} containing:<br />
<br />
options kvm ignore_msrs=1<br />
<br />
To prevent clogging up ''dmesg'' with "ignored rdmsr" messages you can additionally add:<br />
<br />
options kvm report_ignored_msrs=0<br />
<br />
=== AMD Ryzen / BIOS updates (AGESA) yields "Error: internal error: Unknown PCI header type ‘127’" ===<br />
<br />
AMD users have been experiencing breakage of their KVM setups after updating the BIOS on their motherboard. There is a kernel [https://clbin.com/VCiYJ patch] (see [[Kernel/Arch Build System]] for instructions on compiling kernels with custom patches) that can resolve the issue as of now (2019-07-28), but this is not the first time AMD has made an error of this nature, so take this into account if you are considering updating your BIOS in the future as a VFIO user.<br />
<br />
=== AMD GPU not resetting properly yielding "Error: internal error: Unknown PCI header type ‘127’" (Separate issue from the one above) ===<br />
<br />
Passing through an AMD GPU may result in a problem known as the "AMD reset bug". Upon power cycling the guest, the GPU does not properly reset its state, which causes the device to malfunction until the host is also rebooted. This is usually paired with a "code 43" driver error in a Windows guest, and the message "Error: internal error: Unknown PCI header type '127'" in the libvirt log on the host.<br />
<br />
In the past, this meant having to use workarounds to manually reset the GPU, or resorting to kernel patches that were unlikely to land upstream. Currently, the recommended solution that does not require patching the kernel is to install {{AUR|vendor-reset-git}} or {{AUR|vendor-reset-dkms-git}} and to make sure the {{ic|vendor-reset}} kernel module is loaded before booting the guest. For convenience, you can [[Kernel module#Automatic module loading with systemd|load the module automatically]], as shown in the sketch below.<br />
<br />
{{Note| Make sure you do not have any of the AMD reset bug kernel patches installed if you are using {{AUR|vendor-reset-git}} or {{AUR|vendor-reset-dkms-git}}.}}<br />
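<br />
For example, a minimal sketch of loading the module automatically at boot via a modules-load.d drop-in (this assumes the module is named {{ic|vendor-reset}}):<br />
<br />
{{hc|/etc/modules-load.d/vendor-reset.conf|<br />
vendor-reset<br />
}}<br />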
<br />
=== Host crashes when hotplugging Nvidia card with USB ===<br />
<br />
If attempting to hotplug an Nvidia card with a USB port, you may have to blacklist the {{ic|i2c_nvidia_gpu}} driver. Do this by adding the line {{ic|blacklist i2c_nvidia_gpu}} to {{ic|/etc/modprobe.d/blacklist.conf}}.<br />
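<br />
For example, the resulting file would contain:<br />
<br />
{{hc|/etc/modprobe.d/blacklist.conf|<br />
blacklist i2c_nvidia_gpu<br />
}}<br />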
<br />
=== Host unable to boot and stuck in black screen after enabling vfio ===<br />
<br />
If debug kernel messages during boot are enabled with the {{ic|debug ignore_loglevel}} [[kernel parameters]], you may see the boot hang with the last message similar to:<br />
<br />
vfio-pci 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none<br />
<br />
This can be mitigated by disconnecting the passed-through GPU from your monitor. You may reconnect the passed-through GPU to a monitor after the host has booted.<br />
<br />
If you do not want to plug the cable in each time you boot the host, you can disable the framebuffer in your boot loader to bypass this message. For [[UEFI]] systems you can add {{ic|1=video=efifb:off}} as a kernel parameter. For legacy support, use {{ic|1=video=vesafb:off}} instead or in conjunction. Note that doing this may cause issues with [[Xorg]].<br />
<br />
If you encounter problems with [[Xorg]], the following solution may help (remember to substitute with your own values if needed).<br />
<br />
{{hc|/etc/X11/xorg.conf.d/10-amd.conf|<br />
Section "Device"<br />
Identifier "AMD GPU"<br />
Driver "amdgpu"<br />
BusID "PCI:0:2:0"<br />
EndSection}}<br />
<br />
=== AER errors when passing through PCIe USB hub ===<br />
<br />
In some cases passing through a PCIe USB hub, such as one connected to the guest GPU, might fail with AER errors similar to the following:<br />
<br />
kernel: pcieport 0000:00:01.1: AER: Uncorrected (Non-Fatal) error received: 0000:00:01.1<br />
kernel: pcieport 0000:00:01.1: AER: PCIe Bus Error: severity=Uncorrected (Non-Fatal), type=Transaction Layer, (Requester ID)<br />
kernel: pcieport 0000:00:01.1: AER: device [8086:1905] error status/mask=00100000/00000000<br />
kernel: pcieport 0000:00:01.1: AER: [20] UnsupReq (First)<br />
kernel: pcieport 0000:00:01.1: AER: TLP Header: 00000000 00000000 00000000 00000000<br />
kernel: pcieport 0000:00:01.1: AER: device recovery successful<br />
<br />
=== Reserved Memory Region Reporting (RMRR) Conflict ===<br />
<br />
If you run into an issue passing through a device because of the BIOS's usage of RMRR, you may see an error like the one below:<br />
<br />
vfio-pci 0000:01:00.1: Device is ineligible for IOMMU domain attach due to platform RMRR requirement. Contact your platform vendor.<br />
<br />
You can try the patches here: https://github.com/kiler129/relax-intel-rmrr<br />
<br />
=== Too-low frequency limit for AMD GPU passed-through to virtual machine ===<br />
<br />
On some machines with AMD GPUs, binding the devices to vfio-pci may be insufficient to prevent interference from the host, since the amdgpu driver on the host may query global ATIF methods which can alter the behavior of the GPU. For example, a user with a Dell Precision 7540 laptop containing a Radeon Pro WX 3200 AMD GPU reported that, with the AMD GPU bound to vfio-pci, the passed-through AMD GPU was limited to 501 MHz instead of the correct 1295 MHz limit. [[Kernel_module#Blacklisting|Blacklisting]] the amdgpu kernel module using the kernel command line was a workaround.<br />
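<br />
A minimal sketch of that workaround as a [[kernel parameter]] (this assumes you want to prevent the host's {{ic|amdgpu}} module from loading at all):<br />
<br />
 module_blacklist=amdgpu<br />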
<br />
See [https://lore.kernel.org/regressions/092b825a-10ff-e197-18a1-d3e3a097b0e3@leemhuis.info/T/ this kernel mailing list discussion] for further details.<br />
<br />
== See also ==<br />
<br />
* [https://www.redhat.com/archives/vfio-users/ VFIO users mailing list]<br />
* [https://www.reddit.com/r/VFIO /r/VFIO: A subreddit focused on vfio]<br />
* [https://github.com/intel/gvt-linux/wiki/GVTd_Setup_Guide GVT-d: passthrough of an entire integrated GPU]</div>FoXyhttps://wiki.archlinux.org/index.php?title=QEMU/Guest_graphics_acceleration&diff=731996QEMU/Guest graphics acceleration2022-06-08T10:08:15Z<p>FoXy: SR-IOV</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[Category:Emulation]]<br />
{{Related articles start}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related|QEMU}}<br />
{{Related|KVM}}<br />
{{Related articles end}}<br />
There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.<br />
<br />
== Methods for QEMU guest graphics acceleration ==<br />
<br />
=== QXL video driver and SPICE client for display ===<br />
<br />
[[QEMU#qxl|QXL/SPICE]] is a high-performance display method. However, it is not designed to offer near-bare metal performance.<br />
<br />
=== PCI GPU passthrough ===<br />
<br />
==== PCI VGA/GPU passthrough via OVMF ====<br />
<br />
[[PCI_passthrough_via_OVMF|PCI passthrough]] currently seems to be the most popular method for optimal performance. [https://bbs.archlinux.org/viewtopic.php?id=162768&p=1 This forum thread] (now closed, and may be outdated) may be of interest for problem solving. You can use a KVM switch to control both desktops.<br />
<br />
==== Looking Glass ====<br />
<br />
There is a fairly recent passthrough method called [https://looking-glass.hostfission.com/ Looking Glass]. See [https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387 this guide to getting started] which provides some problem solving and user support. Looking Glass uses DXGI (MS DirectX Graphics Infrastructure) to pass complete frames captured from the VM's passed-through video card via shared memory to the host system where they are read (scraped) by a display client running on the bare-metal host.<br />
<br />
=== Fully virtualized GPU support via Intel-specific iGVT-g extension ===<br />
<br />
iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (starting with 5th generation Intel Core(TM) processors). For more information, see [[Intel GVT-g]].<br />
<br />
=== Virgil3d virtio-gpu paravirtualized device driver ===<br />
<br />
[https://docs.mesa3d.org/drivers/virgl.html Virgil3d] virtio-gpu is a paravirtualized 3D accelerated graphics driver, similar to [[QEMU#Installing virtio drivers|non-graphics virtio drivers]] (see [https://www.linux-kvm.org/page/Virtio virtio driver information] and [https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio Windows guest drivers]). <br />
For Linux guests, [[QEMU#virtio|virtio-gpu]] is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See [https://www.reddit.com/r/archlinux/comments/7nmceg/kvmqemu_with_virtiogpu_virgl_support_enabled/ this Reddit Arch thread] and [https://www.kraxel.org/blog/2016/09/using-virtio-gpu-with-libvirt-and-spice/ Gerd Hoffmann's blog for using this with libvirt and spice].<br />
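<br />
As a rough sketch, a Linux guest can be started with virtio-gpu/virgl acceleration directly from the QEMU command line; the memory size and disk image below are placeholders:<br />
<br />
 $ qemu-system-x86_64 -enable-kvm -m 4G -vga virtio -display sdl,gl=on disk_image.qcow2<br />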
<br />
For Windows guests, there is very little information on [https://studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html VirtIO-gpu OpenGL drivers] but there is [https://lists.freedesktop.org/archives/virglrenderer-devel/2021-January/001897.html a report that Red Hat abandoned work on it]. There is also [https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b a project summary], [https://gitlab.com/spice/win32/virtio-gpu-wddm-dod the DOD (Windows kernel) driver] and [https://github.com/Keenuts/virtio-gpu-win-icd the ICD (Windows userland) driver] are available. In addition, see [https://www.phoronix.com/scan.php?page=news_item&px=QEMU-3D-Windows-Guests this Phoronix article] and its comments.<br />
<br />
== Methods for Single GPU ==<br />
<br />
=== GPU Passthrough ===<br />
<br />
Currently, [[PCI_passthrough_via_OVMF|PCI passthrough]] works out of the box only with two graphics cards (one for the host, one for the guest). However, there is a [https://gitlab.com/risingprismtv/single-gpu-passthrough/-/wikis/home)-Preparations workaround] for passing a single graphics card. The problem with this approach is that you have to detach the graphics card from the host and use SSH to control the host from the guest. <br />
<br />
When you start the VM, all your GUI applications will be forcibly terminated. As a workaround, you can use [[Xpra]] to detach them to another display before starting the VM and reattach them to the display after shutting down the VM.<br />
<br />
In case you have an [[NVIDIA]] GPU, you may need to dump your GPU's VBIOS using {{AUR|nvflash}} and patch it using the [https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher vBIOS Patcher].<br />
<br />
=== GPU Virtualization ===<br />
<br />
==== LIBVF.IO ====<br />
[https://github.com/Arc-Compute/libvf.io LibVF.IO] is a virtualization framework (an alternative to libvirt) for simplifying GPU virtualization. It supports Intel (GVT-g, SR-IOV), NVIDIA (vGPU, SR-IOV) and AMD (SR-IOV).<br />
You have to create a YAML configuration for each VM. Currently, Intel and NVIDIA GPUs are tested, with limited support for AMD.<br />
You can follow this [https://arccompute.com/blog/libvfio-commodity-gpu-multiplexing/ setup guide]; you can also check their [https://openmdev.io/index.php/Articles wiki].<br />
<br />
For NVIDIA GPUs, you need to unlock vGPU, which can be done by installing {{AUR|nvidia-merged-dkms}} or [https://github.com/rupansh/vgpu_unlock_5.12#merged-driver-notes building it yourself] and putting it in LibVF.IO's optional folder.<br />
<br />
There is also LIME (LIME Is Mediated Emulation) for executing Windows applications on Linux.<br />
<br />
This framework was [https://www.youtube.com/watch?v=wqUjukaTqEg tested for gaming]. By default, LibVF.IO uses Looking Glass as the virtual display, but you can change that in the YAML configuration.<br />
<br />
==== NVIDIA vGPU ====<br />
By default, NVIDIA disables vGPU for its consumer series. However, you can manually [https://github.com/DualCoder/vgpu_unlock unlock vGPU].<br />
You will also need a [https://www.nvidia.com/en-us/data-center/resources/vgpu-evaluation/ vGPU license]; however, there are some [https://github.com/DualCoder/vgpu_unlock/issues/94#issuecomment-1072870857 workarounds] out there.<br />
<br />
Follow [https://github.com/tuh8888/libvirt_win10_vm this guide] to manually<br />
set up a Windows 10 guest with NVIDIA vGPU.<br />
<br />
==== SR-IOV ====<br />
Single Root I/O Virtualization (SR-IOV) is under development for Intel's and NVIDIA's newer GPU series. There are some AMD GPUs which support this technology, such as the [https://forum.level1techs.com/t/how-to-sr-iov-mod-the-w7100-gpu W7100].</div>FoXy
<hr />
<div>[[Category:Virtualization]]<br />
[[Category:Emulation]]<br />
{{Related articles start}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related|QEMU}}<br />
{{Related|KVM}}<br />
{{Related articles end}}<br />
There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.<br />
<br />
== Methods for QEMU guest graphics acceleration ==<br />
<br />
=== QXL video driver and SPICE client for display ===<br />
<br />
[[QEMU#qxl|QXL/SPICE]] is a high-performance display method. However, it is not designed to offer near-bare metal performance.<br />
<br />
=== PCI GPU passthrough ===<br />
<br />
==== PCI VGA/GPU passthrough via OVMF ====<br />
<br />
[[PCI_passthrough_via_OVMF|PCI passthrough]] currently seems to be the most popular method for optimal performance. [https://bbs.archlinux.org/viewtopic.php?id=162768&p=1 This forum thread] (now closed, and may be outdated) may be of interest for problem solving. you can use kvm switch to control desktops.<br />
<br />
==== Looking Glass ====<br />
<br />
There is a fairly recent passthrough method called [https://looking-glass.hostfission.com/ Looking Glass]. See [https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387 this guide to getting started] which provides some problem solving and user support. Looking Glass uses DXGI (MS DirectX Graphics Infrastructure) to pass complete frames captured from the VM's passed-through video card via shared memory to the host system where they are read (scraped) by a display client running on the bare-metal host.<br />
<br />
=== Fully virtualized GPU support via Intel-specific iGVT-g extension ===<br />
<br />
iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (starting with 5th generation Intel Core(TM) processors). For more information, see [[Intel GVT-g]].<br />
<br />
=== Virgil3d virtio-gpu paravirtualized device driver ===<br />
<br />
[https://docs.mesa3d.org/drivers/virgl.html] virtio-gpu is a paravirtualized 3d accelerated graphics driver, similar to [[QEMU#Installing virtio drivers|non-graphics virtio drivers]] (see [https://www.linux-kvm.org/page/Virtio virtio driver information] and [https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio Windows guest drivers]). <br />
For Linux guests, [[QEMU#virtio|virtio-gpu]] is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See [https://www.reddit.com/r/archlinux/comments/7nmceg/kvmqemu_with_virtiogpu_virgl_support_enabled/ this Reddit Arch thread] and [https://www.kraxel.org/blog/2016/09/using-virtio-gpu-with-libvirt-and-spice/ Gerd Hoffmann's blog for using this with libvirt and spice].<br />
<br />
For Windows guests, there is very little information on [https://studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html VirtIO-gpu OpenGL drivers] but there is [https://lists.freedesktop.org/archives/virglrenderer-devel/2021-January/001897.html a report that Red Hat abandoned work on it]. There is also [https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b a project summary], [https://gitlab.com/spice/win32/virtio-gpu-wddm-dod the DOD (Windows kernel) driver] and [https://github.com/Keenuts/virtio-gpu-win-icd the ICD (Windows userland) driver] are available. In addition, see [https://www.phoronix.com/scan.php?page=news_item&px=QEMU-3D-Windows-Guests this Phoronix article] and its comments.<br />
<br />
== Methods for Single GPU ==<br />
<br />
=== GPU Passthrough ===<br />
<br />
Currently [[PCI_passthrough_via_OVMF|PCI passthrough]] works for dual-graphic cards only. However there is a [https://gitlab.com/risingprismtv/single-gpu-passthrough/-/wikis/home)-Preparations workaround] for passing single graphic card. The problem with this approach is you have to deattach graphic-card from the host and use ssh to control the host from the guest. <br />
<br />
When you start the VM all your gui apps will be force terminated however as workaround you can use [[Xpra]] to deattach to another Display before starting VM and reattach the Apps to display after shutting down VM.<br />
<br />
in case you have [[NVIDIA]] GPU, you may need to dump your GPU's vbios using {{AUR|nvflash}} and patch it using [https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher vBIOS Patcher].<br />
<br />
=== GPU Virtualization ===<br />
<br />
==== LIBVF.IO ====<br />
[https://github.com/Arc-Compute/libvf.io LibVF.IO] is a Virtualization Framework (Libvirt's Alternative) for Simplifying the GPU Virtualization. It Support Intel(Intel GVT-g, SR-IOV), Nvidia(Nvidia VGPU, SR-IOV), AMD(AMD SR-IOV).<br />
You have to create YAML Configurations for each VM. Currently Intel and NVIDIA GPUs are tested with limited support for AMD.<br />
You can follow this [https://arccompute.com/blog/libvfio-commodity-gpu-multiplexing/ setup guide]. you can also check their [https://openmdev.io/index.php/Articles WIKI].<br />
<br />
For NVIDIA GPU, you need to Unlock VGPU which can be done by installing {{AUR|nvidia-merged-dkms}} or [https://github.com/rupansh/vgpu_unlock_5.12#merged-driver-notes building it yourself] and putting it in LIBVF.IO's Optional Folder.<br />
<br />
There is also LIME(LIME Is Mediated Emulation) for executing Windows Apps in Linux.<br />
<br />
This framework was [https://www.youtube.com/watch?v=wqUjukaTqEg tested for Gaming]. By default LibVF.IO uses Looking Glass as Virtual Display but you can change that in YAML Configuration.<br />
<br />
==== NVIDIA vGPU ====<br />
By default Nvidia disabled the vGPU for consumer series. however You can manually [https://github.com/DualCoder/vgpu_unlock Unlock VGPU].<br />
You will also need [https://www.nvidia.com/en-us/data-center/resources/vgpu-evaluation/ VGPU License], however there are some [https://github.com/DualCoder/vgpu_unlock/issues/94#issuecomment-1072870857 workarounds] out there.<br />
<br />
Follow [https://github.com/tuh8888/libvirt_win10_vm this guide] to manually<br />
setup a Windows 10 guest with Nvidia VGPU.</div>FoXyhttps://wiki.archlinux.org/index.php?title=QEMU/Guest_graphics_acceleration&diff=731992QEMU/Guest graphics acceleration2022-06-08T08:47:56Z<p>FoXy: More details</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[Category:Emulation]]<br />
{{Related articles start}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related|QEMU}}<br />
{{Related|KVM}}<br />
{{Related articles end}}<br />
There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.<br />
<br />
== Methods for QEMU guest graphics acceleration ==<br />
<br />
=== QXL video driver and SPICE client for display ===<br />
<br />
[[QEMU#qxl|QXL/SPICE]] is a high-performance display method. However, it is not designed to offer near-bare metal performance.<br />
<br />
=== PCI GPU passthrough ===<br />
<br />
==== PCI VGA/GPU passthrough via OVMF ====<br />
<br />
[[PCI_passthrough_via_OVMF|PCI passthrough]] currently seems to be the most popular method for optimal performance. [https://bbs.archlinux.org/viewtopic.php?id=162768&p=1 This forum thread] (now closed, and may be outdated) may be of interest for problem solving. you can use kvm switch to control desktops.<br />
<br />
==== Looking Glass ====<br />
<br />
There is a fairly recent passthrough method called [https://looking-glass.hostfission.com/ Looking Glass]. See [https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387 this guide to getting started] which provides some problem solving and user support. Looking Glass uses DXGI (MS DirectX Graphics Infrastructure) to pass complete frames captured from the VM's passed-through video card via shared memory to the host system where they are read (scraped) by a display client running on the bare-metal host.<br />
<br />
=== Fully virtualized GPU support via Intel-specific iGVT-g extension ===<br />
<br />
iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (starting with 5th generation Intel Core(TM) processors). For more information, see [[Intel GVT-g]].<br />
<br />
=== Virgil3d virtio-gpu paravirtualized device driver ===<br />
<br />
[https://docs.mesa3d.org/drivers/virgl.html] virtio-gpu is a paravirtualized 3d accelerated graphics driver, similar to [[QEMU#Installing virtio drivers|non-graphics virtio drivers]] (see [https://www.linux-kvm.org/page/Virtio virtio driver information] and [https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio Windows guest drivers]). <br />
For Linux guests, [[QEMU#virtio|virtio-gpu]] is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See [https://www.reddit.com/r/archlinux/comments/7nmceg/kvmqemu_with_virtiogpu_virgl_support_enabled/ this Reddit Arch thread] and [https://www.kraxel.org/blog/2016/09/using-virtio-gpu-with-libvirt-and-spice/ Gerd Hoffmann's blog for using this with libvirt and spice].<br />
<br />
For Windows guests, there is very little information on [https://studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html VirtIO-gpu OpenGL drivers] but there is [https://lists.freedesktop.org/archives/virglrenderer-devel/2021-January/001897.html a report that Red Hat abandoned work on it]. There is also [https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b a project summary], [https://gitlab.com/spice/win32/virtio-gpu-wddm-dod the DOD (Windows kernel) driver] and [https://github.com/Keenuts/virtio-gpu-win-icd the ICD (Windows userland) driver] are available. In addition, see [https://www.phoronix.com/scan.php?page=news_item&px=QEMU-3D-Windows-Guests this Phoronix article] and its comments.<br />
<br />
== Methods for Single GPU ==<br />
<br />
=== GPU Passthrough ===<br />
<br />
Currently [[PCI_passthrough_via_OVMF|PCI passthrough]] works for dual-graphic cards only. However there is a [https://gitlab.com/risingprismtv/single-gpu-passthrough/-/wikis/home)-Preparations workaround] for passing single graphic card. The problem with this approach is you have to deattach graphic-card from the host and use ssh to control the host from the guest. <br />
<br />
When you start the VM all your gui apps will be force terminated however as workaround you can use [[Xpra]] to deattach to another Display before starting VM and reattach the Apps to display after shutting down VM.<br />
<br />
in case you have [[NVIDIA]] GPU, you may need to dump your GPU's vbios using {{AUR|nvflash}} and patch it using [https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher vBIOS Patcher].<br />
<br />
=== GPU Virtualization ===<br />
<br />
==== LIBVF.IO ====<br />
[https://github.com/Arc-Compute/libvf.io LibVF.IO] is a Virtualization Framework (Libvirt's Alternative) for Simplifying the GPU Virtualization. It Support Intel(Intel GVT-g, SR-IOV), Nvidia(Nvidia VGPU, SR-IOV), AMD(AMD SR-IOV).<br />
You have to create YAML Configurations for each VM. Currently Intel and NVIDIA GPUs are tested with limited support for AMD.<br />
You can follow this [https://arccompute.com/blog/libvfio-commodity-gpu-multiplexing/ setup guide]. you can also check their [https://openmdev.io/index.php/Articles WIKI].<br />
<br />
For NVIDIA GPU, you need to Unlock VGPU which can be done by installing {{AUR|nvidia-merged-dkms}} or [https://github.com/rupansh/vgpu_unlock_5.12#merged-driver-notes building it yourself] and putting it in LIBVF.IO's Optional Folder.<br />
<br />
This framework was [https://www.youtube.com/watch?v=wqUjukaTqEg tested for Gaming]. By default LibVF.IO uses Looking Glass as Virtual Display but you can change that in YAML Configuration.<br />
<br />
==== NVIDIA vGPU ====<br />
By default Nvidia disabled the vGPU for consumer series. however You can manually [https://github.com/DualCoder/vgpu_unlock Unlock VGPU].<br />
You will also need [https://www.nvidia.com/en-us/data-center/resources/vgpu-evaluation/ VGPU License], however there are some [https://github.com/DualCoder/vgpu_unlock/issues/94#issuecomment-1072870857 workarounds] out there.<br />
<br />
Follow [https://github.com/tuh8888/libvirt_win10_vm this guide] to manually<br />
setup a Windows 10 guest with Nvidia VGPU.</div>FoXyhttps://wiki.archlinux.org/index.php?title=QEMU/Guest_graphics_acceleration&diff=731988QEMU/Guest graphics acceleration2022-06-08T08:27:12Z<p>FoXy: add Nvidia VGPU</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[Category:Emulation]]<br />
{{Related articles start}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related|QEMU}}<br />
{{Related|KVM}}<br />
{{Related articles end}}<br />
There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.<br />
<br />
== Methods for QEMU guest graphics acceleration ==<br />
<br />
=== QXL video driver and SPICE client for display ===<br />
<br />
[[QEMU#qxl|QXL/SPICE]] is a high-performance display method. However, it is not designed to offer near-bare metal performance.<br />
<br />
=== PCI GPU passthrough ===<br />
<br />
==== PCI VGA/GPU passthrough via OVMF ====<br />
<br />
[[PCI_passthrough_via_OVMF|PCI passthrough]] currently seems to be the most popular method for optimal performance. [https://bbs.archlinux.org/viewtopic.php?id=162768&p=1 This forum thread] (now closed, and may be outdated) may be of interest for problem solving.<br />
<br />
==== Looking Glass ====<br />
<br />
There is a fairly recent passthrough method called [https://looking-glass.hostfission.com/ Looking Glass]. See [https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387 this guide to getting started] which provides some problem solving and user support. Looking Glass uses DXGI (MS DirectX Graphics Infrastructure) to pass complete frames captured from the VM's passed-through video card via shared memory to the host system where they are read (scraped) by a display client running on the bare-metal host.<br />
<br />
=== Fully virtualized GPU support via Intel-specific iGVT-g extension ===<br />
<br />
iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (starting with 5th generation Intel Core(TM) processors). For more information, see [[Intel GVT-g]].<br />
<br />
=== Virgil3d virtio-gpu paravirtualized device driver ===<br />
<br />
[https://docs.mesa3d.org/drivers/virgl.html] virtio-gpu is a paravirtualized 3d accelerated graphics driver, similar to [[QEMU#Installing virtio drivers|non-graphics virtio drivers]] (see [https://www.linux-kvm.org/page/Virtio virtio driver information] and [https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio Windows guest drivers]). <br />
For Linux guests, [[QEMU#virtio|virtio-gpu]] is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See [https://www.reddit.com/r/archlinux/comments/7nmceg/kvmqemu_with_virtiogpu_virgl_support_enabled/ this Reddit Arch thread] and [https://www.kraxel.org/blog/2016/09/using-virtio-gpu-with-libvirt-and-spice/ Gerd Hoffmann's blog for using this with libvirt and spice].<br />
<br />
For Windows guests, there is very little information on [https://studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html VirtIO-gpu OpenGL drivers] but there is [https://lists.freedesktop.org/archives/virglrenderer-devel/2021-January/001897.html a report that Red Hat abandoned work on it]. There is also [https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b a project summary], [https://gitlab.com/spice/win32/virtio-gpu-wddm-dod the DOD (Windows kernel) driver] and [https://github.com/Keenuts/virtio-gpu-win-icd the ICD (Windows userland) driver] are available. In addition, see [https://www.phoronix.com/scan.php?page=news_item&px=QEMU-3D-Windows-Guests this Phoronix article] and its comments.<br />
<br />
== Methods for Single GPU ==<br />
<br />
=== GPU Passthrough ===<br />
<br />
Currently [[PCI_passthrough_via_OVMF|PCI passthrough]] works for dual-graphic cards only. However there is a [https://gitlab.com/risingprismtv/single-gpu-passthrough/-/wikis/home)-Preparations workaround] for passing single graphic card. The problem with this approach is you have to deattach graphic-card from the host and use ssh to control the host from the guest. <br />
<br />
When you start the VM all your gui apps will be force terminated however as workaround you can use [[Xpra]] to deattach to another Display before starting VM and reattach the Apps to display after shutting down VM.<br />
<br />
in case you have [[NVIDIA]] GPU, you may need to dump your GPU's vbios using {{AUR|nvflash}} and patch it using [https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher vBIOS Patcher].<br />
<br />
=== GPU Virtualization ===<br />
<br />
==== LIBVF.IO ====<br />
[https://github.com/Arc-Compute/libvf.io LibVF.IO] is a Framework for Simplifying the Virtualization of GPU. It Support Intel(Intel GVT-g, SR-IOV), Nvidia(Nvidia VGPU, SR-IOV), AMD(AMD SR-IOV).<br />
You have to create YAML Configurations for each VM. Currently Intel and NVIDIA GPUs are tested with limited support for AMD.<br />
You can follow this [https://arccompute.com/blog/libvfio-commodity-gpu-multiplexing/ setup guide]. you can also check their [https://openmdev.io/index.php/Articles WIKI].<br />
<br />
For NVIDIA GPU, you need to Unlock VGPU which can be done by installing {{AUR|nvidia-merged-dkms}} or [https://github.com/rupansh/vgpu_unlock_5.12#merged-driver-notes building it yourself] and putting it in LIBVF.IO's Optional Folder.<br />
<br />
This framework was [https://www.youtube.com/watch?v=wqUjukaTqEg tested for Gaming].<br />
<br />
==== NVIDIA vGPU ====<br />
By default Nvidia disabled the vGPU for consumer series. however You can manually [https://github.com/DualCoder/vgpu_unlock Unlock VGPU].<br />
You will also need [https://www.nvidia.com/en-us/data-center/resources/vgpu-evaluation/ VGPU License], however there are some [https://github.com/DualCoder/vgpu_unlock/issues/94#issuecomment-1072870857 workarounds] out there.<br />
<br />
Follow [https://github.com/tuh8888/libvirt_win10_vm this guide] to manually<br />
setup a Windows 10 guest with Nvidia VGPU.</div>FoXyhttps://wiki.archlinux.org/index.php?title=QEMU/Guest_graphics_acceleration&diff=731986QEMU/Guest graphics acceleration2022-06-08T07:45:35Z<p>FoXy: Fix Formatting</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[Category:Emulation]]<br />
{{Related articles start}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related|QEMU}}<br />
{{Related|KVM}}<br />
{{Related articles end}}<br />
There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.<br />
<br />
== Methods for QEMU guest graphics acceleration ==<br />
<br />
=== QXL video driver and SPICE client for display ===<br />
<br />
[[QEMU#qxl|QXL/SPICE]] is a high-performance display method. However, it is not designed to offer near-bare metal performance.<br />
<br />
=== PCI GPU passthrough ===<br />
<br />
==== PCI VGA/GPU passthrough via OVMF ====<br />
<br />
[[PCI_passthrough_via_OVMF|PCI passthrough]] currently seems to be the most popular method for optimal performance. [https://bbs.archlinux.org/viewtopic.php?id=162768&p=1 This forum thread] (now closed, and may be outdated) may be of interest for problem solving.<br />
<br />
==== Looking Glass ====<br />
<br />
There is a fairly recent passthrough method called [https://looking-glass.hostfission.com/ Looking Glass]. See [https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387 this guide to getting started] which provides some problem solving and user support. Looking Glass uses DXGI (MS DirectX Graphics Infrastructure) to pass complete frames captured from the VM's passed-through video card via shared memory to the host system where they are read (scraped) by a display client running on the bare-metal host.<br />
<br />
=== Fully virtualized GPU support via Intel-specific iGVT-g extension ===<br />
<br />
iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (starting with 5th generation Intel Core(TM) processors). For more information, see [[Intel GVT-g]].<br />
<br />
=== Virgil3d virtio-gpu paravirtualized device driver ===<br />
<br />
[https://docs.mesa3d.org/drivers/virgl.html] virtio-gpu is a paravirtualized 3d accelerated graphics driver, similar to [[QEMU#Installing virtio drivers|non-graphics virtio drivers]] (see [https://www.linux-kvm.org/page/Virtio virtio driver information] and [https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio Windows guest drivers]). <br />
For Linux guests, [[QEMU#virtio|virtio-gpu]] is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See [https://www.reddit.com/r/archlinux/comments/7nmceg/kvmqemu_with_virtiogpu_virgl_support_enabled/ this Reddit Arch thread] and [https://www.kraxel.org/blog/2016/09/using-virtio-gpu-with-libvirt-and-spice/ Gerd Hoffmann's blog for using this with libvirt and spice].<br />
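<br />
As a minimal illustration (the disk image path is a placeholder; libvirt and virt-manager expose the same settings graphically), a Linux guest can be started with virtio-gpu and Virgil3D acceleration directly from the QEMU command line:<br />
<br />
 $ qemu-system-x86_64 -enable-kvm -m 4G -vga virtio -display gtk,gl=on -drive file=linux.qcow2,if=virtio<br />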
<br />
For Windows guests, there is very little information on [https://studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html VirtIO-gpu OpenGL drivers], and there is [https://lists.freedesktop.org/archives/virglrenderer-devel/2021-January/001897.html a report that Red Hat abandoned work on it]. A [https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b project summary], [https://gitlab.com/spice/win32/virtio-gpu-wddm-dod the DOD (Windows kernel) driver] and [https://github.com/Keenuts/virtio-gpu-win-icd the ICD (Windows userland) driver] are also available. In addition, see [https://www.phoronix.com/scan.php?page=news_item&px=QEMU-3D-Windows-Guests this Phoronix article] and its comments.<br />
<br />
== Methods for Single GPU ==<br />
<br />
=== GPU Passthrough ===<br />
<br />
Currently, [[PCI_passthrough_via_OVMF|PCI passthrough]] is only practical with two graphics cards. However, there is a [https://gitlab.com/risingprismtv/single-gpu-passthrough/-/wikis/home)-Preparations workaround] for passing through a single graphics card. The drawback of this approach is that the graphics card has to be detached from the host, so the host must then be controlled from the guest over SSH.<br />
<br />
When the VM starts, all GUI applications on the host are forcibly terminated. As a workaround, you can use [[Xpra]] to detach them to another display before starting the VM and reattach them after shutting it down.<br />
<br />
If you have an [[NVIDIA]] GPU, you may also need to dump your GPU's vBIOS using {{AUR|nvflash}} and patch it using the [https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher vBIOS Patcher].<br />
<br />
=== GPU Virtualization ===<br />
<br />
==== LIBVF.IO ====<br />
[https://github.com/Arc-Compute/libvf.io LibVF.IO] is a framework that simplifies GPU virtualization. It supports Intel (GVT-g, SR-IOV), NVIDIA (vGPU, SR-IOV) and AMD (SR-IOV) GPUs.<br />
A YAML configuration has to be created for each VM. Currently, Intel and NVIDIA GPUs are tested, with limited support for AMD.<br />
You can follow this [https://arccompute.com/blog/libvfio-commodity-gpu-multiplexing/ setup guide].<br />
<br />
For NVIDIA GPUs, you need to [https://github.com/DualCoder/vgpu_unlock unlock vGPU], which can be done by installing {{AUR|nvidia-merged-dkms}} or [https://github.com/rupansh/vgpu_unlock_5.12#merged-driver-notes building it yourself] and placing it in LibVF.IO's optional folder.<br />
<br />
This framework was [https://www.youtube.com/watch?v=wqUjukaTqEg tested for Gaming].</div>FoXyhttps://wiki.archlinux.org/index.php?title=QEMU/Guest_graphics_acceleration&diff=731985QEMU/Guest graphics acceleration2022-06-08T07:41:21Z<p>FoXy: LibVFIO Gaming Note</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[Category:Emulation]]<br />
{{Related articles start}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related|QEMU}}<br />
{{Related|KVM}}<br />
{{Related articles end}}<br />
There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.<br />
<br />
== Methods for QEMU guest graphics acceleration ==<br />
<br />
=== QXL video driver and SPICE client for display ===<br />
<br />
[[QEMU#qxl|QXL/SPICE]] is a high-performance display method. However, it is not designed to offer near-bare metal performance.<br />
<br />
=== PCI GPU passthrough ===<br />
<br />
==== PCI VGA/GPU passthrough via OVMF ====<br />
<br />
[[PCI_passthrough_via_OVMF|PCI passthrough]] currently seems to be the most popular method for optimal performance. [https://bbs.archlinux.org/viewtopic.php?id=162768&p=1 This forum thread] (now closed, and may be outdated) may be of interest for problem solving.<br />
<br />
==== Looking Glass ====<br />
<br />
There is a fairly recent passthrough method called [https://looking-glass.hostfission.com/ Looking Glass]. See [https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387 this guide to getting started] which provides some problem solving and user support. Looking Glass uses DXGI (MS DirectX Graphics Infrastructure) to pass complete frames captured from the VM's passed-through video card via shared memory to the host system where they are read (scraped) by a display client running on the bare-metal host.<br />
<br />
=== Fully virtualized GPU support via Intel-specific iGVT-g extension ===<br />
<br />
iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (starting with 5th generation Intel Core(TM) processors). For more information, see [[Intel GVT-g]].<br />
<br />
=== Virgil3d virtio-gpu paravirtualized device driver ===<br />
<br />
[https://docs.mesa3d.org/drivers/virgl.html] virtio-gpu is a paravirtualized 3d accelerated graphics driver, similar to [[QEMU#Installing virtio drivers|non-graphics virtio drivers]] (see [https://www.linux-kvm.org/page/Virtio virtio driver information] and [https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio Windows guest drivers]). <br />
For Linux guests, [[QEMU#virtio|virtio-gpu]] is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See [https://www.reddit.com/r/archlinux/comments/7nmceg/kvmqemu_with_virtiogpu_virgl_support_enabled/ this Reddit Arch thread] and [https://www.kraxel.org/blog/2016/09/using-virtio-gpu-with-libvirt-and-spice/ Gerd Hoffmann's blog for using this with libvirt and spice].<br />
<br />
For Windows guests, there is very little information on [https://studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html VirtIO-gpu OpenGL drivers], and there is [https://lists.freedesktop.org/archives/virglrenderer-devel/2021-January/001897.html a report that Red Hat abandoned work on it]. A [https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b project summary], [https://gitlab.com/spice/win32/virtio-gpu-wddm-dod the DOD (Windows kernel) driver] and [https://github.com/Keenuts/virtio-gpu-win-icd the ICD (Windows userland) driver] are also available. In addition, see [https://www.phoronix.com/scan.php?page=news_item&px=QEMU-3D-Windows-Guests this Phoronix article] and its comments.<br />
<br />
== Methods for Single GPU ==<br />
<br />
=== GPU Passthrough ===<br />
<br />
Currently, [[PCI_passthrough_via_OVMF|PCI passthrough]] is only practical with two graphics cards. However, there is a [https://gitlab.com/risingprismtv/single-gpu-passthrough/-/wikis/home)-Preparations workaround] for passing through a single graphics card. The drawback of this approach is that the graphics card has to be detached from the host, so the host must then be controlled from the guest over SSH. When the VM starts, all GUI applications on the host are forcibly terminated; as a workaround, you can use [[Xpra]] to detach them to another display before starting the VM and reattach them after shutting it down.<br />
If you have an [[NVIDIA]] GPU, you may also need to dump your GPU's vBIOS using {{AUR|nvflash}} and patch it using the [https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher vBIOS Patcher].<br />
<br />
=== GPU Virtualization ===<br />
<br />
==== LIBVF.IO ====<br />
[https://github.com/Arc-Compute/libvf.io LibVF.IO] is a framework that simplifies GPU virtualization. It supports Intel (GVT-g, SR-IOV), NVIDIA (vGPU, SR-IOV) and AMD (SR-IOV) GPUs.<br />
A YAML configuration has to be created for each VM. Currently, Intel and NVIDIA GPUs are tested, with limited support for AMD.<br />
You can follow this [https://arccompute.com/blog/libvfio-commodity-gpu-multiplexing/ setup guide].<br />
For NVIDIA GPUs, you need to [https://github.com/DualCoder/vgpu_unlock unlock vGPU], which can be done by installing {{AUR|nvidia-merged-dkms}} or [https://github.com/rupansh/vgpu_unlock_5.12#merged-driver-notes building it yourself] and placing it in LibVF.IO's optional folder.<br />
This framework was [https://www.youtube.com/watch?v=wqUjukaTqEg tested for Gaming].</div>FoXyhttps://wiki.archlinux.org/index.php?title=QEMU/Guest_graphics_acceleration&diff=731982QEMU/Guest graphics acceleration2022-06-08T07:36:55Z<p>FoXy: Add GPU Virtualization Section and LIBVFIO</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[Category:Emulation]]<br />
{{Related articles start}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related|QEMU}}<br />
{{Related|KVM}}<br />
{{Related articles end}}<br />
There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.<br />
<br />
== Methods for QEMU guest graphics acceleration ==<br />
<br />
=== QXL video driver and SPICE client for display ===<br />
<br />
[[QEMU#qxl|QXL/SPICE]] is a high-performance display method. However, it is not designed to offer near-bare metal performance.<br />
<br />
=== PCI GPU passthrough ===<br />
<br />
==== PCI VGA/GPU passthrough via OVMF ====<br />
<br />
[[PCI_passthrough_via_OVMF|PCI passthrough]] currently seems to be the most popular method for optimal performance. [https://bbs.archlinux.org/viewtopic.php?id=162768&p=1 This forum thread] (now closed, and may be outdated) may be of interest for problem solving.<br />
<br />
==== Looking Glass ====<br />
<br />
There is a fairly recent passthrough method called [https://looking-glass.hostfission.com/ Looking Glass]. See [https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387 this guide to getting started] which provides some problem solving and user support. Looking Glass uses DXGI (MS DirectX Graphics Infrastructure) to pass complete frames captured from the VM's passed-through video card via shared memory to the host system where they are read (scraped) by a display client running on the bare-metal host.<br />
<br />
=== Fully virtualized GPU support via Intel-specific iGVT-g extension ===<br />
<br />
iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (starting with 5th generation Intel Core(TM) processors). For more information, see [[Intel GVT-g]].<br />
<br />
=== Virgil3d virtio-gpu paravirtualized device driver ===<br />
<br />
[https://docs.mesa3d.org/drivers/virgl.html] virtio-gpu is a paravirtualized 3d accelerated graphics driver, similar to [[QEMU#Installing virtio drivers|non-graphics virtio drivers]] (see [https://www.linux-kvm.org/page/Virtio virtio driver information] and [https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio Windows guest drivers]). <br />
For Linux guests, [[QEMU#virtio|virtio-gpu]] is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See [https://www.reddit.com/r/archlinux/comments/7nmceg/kvmqemu_with_virtiogpu_virgl_support_enabled/ this Reddit Arch thread] and [https://www.kraxel.org/blog/2016/09/using-virtio-gpu-with-libvirt-and-spice/ Gerd Hoffmann's blog for using this with libvirt and spice].<br />
<br />
For Windows guests, there is very little information on [https://studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html VirtIO-gpu OpenGL drivers], and there is [https://lists.freedesktop.org/archives/virglrenderer-devel/2021-January/001897.html a report that Red Hat abandoned work on it]. A [https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b project summary], [https://gitlab.com/spice/win32/virtio-gpu-wddm-dod the DOD (Windows kernel) driver] and [https://github.com/Keenuts/virtio-gpu-win-icd the ICD (Windows userland) driver] are also available. In addition, see [https://www.phoronix.com/scan.php?page=news_item&px=QEMU-3D-Windows-Guests this Phoronix article] and its comments.<br />
<br />
== Methods for Single GPU ==<br />
<br />
=== GPU Passthrough ===<br />
<br />
Currently, [[PCI_passthrough_via_OVMF|PCI passthrough]] is only practical with two graphics cards. However, there is a [https://gitlab.com/risingprismtv/single-gpu-passthrough/-/wikis/home)-Preparations workaround] for passing through a single graphics card. The drawback of this approach is that the graphics card has to be detached from the host, so the host must then be controlled from the guest over SSH. When the VM starts, all GUI applications on the host are forcibly terminated; as a workaround, you can use [[Xpra]] to detach them to another display before starting the VM and reattach them after shutting it down.<br />
If you have an [[NVIDIA]] GPU, you may also need to dump your GPU's vBIOS using {{AUR|nvflash}} and patch it using the [https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher vBIOS Patcher].<br />
<br />
=== GPU Virtualization ===<br />
<br />
==== LIBVF.IO ====<br />
[https://github.com/Arc-Compute/libvf.io LibVF.IO] is a framework that simplifies GPU virtualization. It supports Intel (GVT-g, SR-IOV), NVIDIA (vGPU, SR-IOV) and AMD (SR-IOV) GPUs.<br />
A YAML configuration has to be created for each VM. Currently, Intel and NVIDIA GPUs are tested, with limited support for AMD.<br />
You can follow this [https://arccompute.com/blog/libvfio-commodity-gpu-multiplexing/ setup guide].<br />
For NVIDIA GPU, you need to [https://github.com/DualCoder/vgpu_unlock Unlock VGPU] which can be done by installing {{AUR|nvidia-merged-dkms}} or [https://github.com/rupansh/vgpu_unlock_5.12#merged-driver-notes building it yourself] and putting it in LIBVF.IO's Optional Folder.</div>FoXyhttps://wiki.archlinux.org/index.php?title=QEMU/Guest_graphics_acceleration&diff=731970QEMU/Guest graphics acceleration2022-06-08T07:06:14Z<p>FoXy: Workaround for Nvidia Single GPU.</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[Category:Emulation]]<br />
{{Related articles start}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related|QEMU}}<br />
{{Related|KVM}}<br />
{{Related articles end}}<br />
There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.<br />
<br />
== Methods for QEMU guest graphics acceleration ==<br />
<br />
=== QXL video driver and SPICE client for display ===<br />
<br />
[[QEMU#qxl|QXL/SPICE]] is a high-performance display method. However, it is not designed to offer near-bare metal performance.<br />
<br />
=== PCI GPU passthrough ===<br />
<br />
==== PCI VGA/GPU passthrough via OVMF ====<br />
<br />
[[PCI_passthrough_via_OVMF|PCI passthrough]] currently seems to be the most popular method for optimal performance. [https://bbs.archlinux.org/viewtopic.php?id=162768&p=1 This forum thread] (now closed, and may be outdated) may be of interest for problem solving.<br />
<br />
==== Looking Glass ====<br />
<br />
There is a fairly recent passthrough method called [https://looking-glass.hostfission.com/ Looking Glass]. See [https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387 this guide to getting started] which provides some problem solving and user support. Looking Glass uses DXGI (MS DirectX Graphics Infrastructure) to pass complete frames captured from the VM's passed-through video card via shared memory to the host system where they are read (scraped) by a display client running on the bare-metal host.<br />
<br />
=== Fully virtualized GPU support via Intel-specific iGVT-g extension ===<br />
<br />
iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (starting with 5th generation Intel Core(TM) processors). For more information, see [[Intel GVT-g]].<br />
<br />
=== Virgil3d virtio-gpu paravirtualized device driver ===<br />
<br />
[https://docs.mesa3d.org/drivers/virgl.html] virtio-gpu is a paravirtualized 3d accelerated graphics driver, similar to [[QEMU#Installing virtio drivers|non-graphics virtio drivers]] (see [https://www.linux-kvm.org/page/Virtio virtio driver information] and [https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio Windows guest drivers]). <br />
For Linux guests, [[QEMU#virtio|virtio-gpu]] is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See [https://www.reddit.com/r/archlinux/comments/7nmceg/kvmqemu_with_virtiogpu_virgl_support_enabled/ this Reddit Arch thread] and [https://www.kraxel.org/blog/2016/09/using-virtio-gpu-with-libvirt-and-spice/ Gerd Hoffmann's blog for using this with libvirt and spice].<br />
<br />
For Windows guests, there is very little information on [https://studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html VirtIO-gpu OpenGL drivers], and there is [https://lists.freedesktop.org/archives/virglrenderer-devel/2021-January/001897.html a report that Red Hat abandoned work on it]. A [https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b project summary], [https://gitlab.com/spice/win32/virtio-gpu-wddm-dod the DOD (Windows kernel) driver] and [https://github.com/Keenuts/virtio-gpu-win-icd the ICD (Windows userland) driver] are also available. In addition, see [https://www.phoronix.com/scan.php?page=news_item&px=QEMU-3D-Windows-Guests this Phoronix article] and its comments.<br />
<br />
== Methods for Single GPU ==<br />
<br />
=== GPU Passthrough ===<br />
<br />
Currently, [[PCI_passthrough_via_OVMF|PCI passthrough]] is only practical with two graphics cards. However, there is a [https://gitlab.com/risingprismtv/single-gpu-passthrough/-/wikis/home)-Preparations workaround] for passing through a single graphics card. The drawback of this approach is that the graphics card has to be detached from the host, so the host must then be controlled from the guest over SSH. When the VM starts, all GUI applications on the host are forcibly terminated; as a workaround, you can use [[Xpra]] to detach them to another display before starting the VM and reattach them after shutting it down.<br />
in case you have [[NVIDIA]] gpu, you may need to dump your GPU's vbios using {{AUR|nvflash}} and patch it using [https://github.com/Matoking/NVIDIA-vBIOS-VFIO-Patcher vBIOS Patcher].</div>FoXyhttps://wiki.archlinux.org/index.php?title=QEMU/Guest_graphics_acceleration&diff=731967QEMU/Guest graphics acceleration2022-06-08T06:49:13Z<p>FoXy: Better Format</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[Category:Emulation]]<br />
{{Related articles start}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related|QEMU}}<br />
{{Related|KVM}}<br />
{{Related articles end}}<br />
There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.<br />
<br />
== Methods for QEMU guest graphics acceleration ==<br />
<br />
=== QXL video driver and SPICE client for display ===<br />
<br />
[[QEMU#qxl|QXL/SPICE]] is a high-performance display method. However, it is not designed to offer near-bare metal performance.<br />
<br />
=== PCI GPU passthrough ===<br />
<br />
==== PCI VGA/GPU passthrough via OVMF ====<br />
<br />
[[PCI_passthrough_via_OVMF|PCI passthrough]] currently seems to be the most popular method for optimal performance. [https://bbs.archlinux.org/viewtopic.php?id=162768&p=1 This forum thread] (now closed, and may be outdated) may be of interest for problem solving.<br />
<br />
==== Looking Glass ====<br />
<br />
There is a fairly recent passthrough method called [https://looking-glass.hostfission.com/ Looking Glass]. See [https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387 this guide to getting started] which provides some problem solving and user support. Looking Glass uses DXGI (MS DirectX Graphics Infrastructure) to pass complete frames captured from the VM's passed-through video card via shared memory to the host system where they are read (scraped) by a display client running on the bare-metal host.<br />
<br />
=== Fully virtualized GPU support via Intel-specific iGVT-g extension ===<br />
<br />
iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (starting with 5th generation Intel Core(TM) processors). For more information, see [[Intel GVT-g]].<br />
<br />
=== Virgil3d virtio-gpu paravirtualized device driver ===<br />
<br />
[https://docs.mesa3d.org/drivers/virgl.html] virtio-gpu is a paravirtualized 3d accelerated graphics driver, similar to [[QEMU#Installing virtio drivers|non-graphics virtio drivers]] (see [https://www.linux-kvm.org/page/Virtio virtio driver information] and [https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio Windows guest drivers]). <br />
For Linux guests, [[QEMU#virtio|virtio-gpu]] is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See [https://www.reddit.com/r/archlinux/comments/7nmceg/kvmqemu_with_virtiogpu_virgl_support_enabled/ this Reddit Arch thread] and [https://www.kraxel.org/blog/2016/09/using-virtio-gpu-with-libvirt-and-spice/ Gerd Hoffmann's blog for using this with libvirt and spice].<br />
<br />
For Windows guests, there is very little information on [https://studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html VirtIO-gpu OpenGL drivers], and there is [https://lists.freedesktop.org/archives/virglrenderer-devel/2021-January/001897.html a report that Red Hat abandoned work on it]. A [https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b project summary], [https://gitlab.com/spice/win32/virtio-gpu-wddm-dod the DOD (Windows kernel) driver] and [https://github.com/Keenuts/virtio-gpu-win-icd the ICD (Windows userland) driver] are also available. In addition, see [https://www.phoronix.com/scan.php?page=news_item&px=QEMU-3D-Windows-Guests this Phoronix article] and its comments.<br />
<br />
== Methods for Single GPU ==<br />
<br />
=== GPU Passthrough ===<br />
<br />
Currently [[PCI_passthrough_via_OVMF|PCI passthrough]] works for dual-graphic cards only. However there is a [https://gitlab.com/risingprismtv/single-gpu-passthrough/-/wikis/1)-Preparations workaround] for passing single graphic card. The problem with this approach is you have to deattach graphic-card from the host and use ssh to control the host from the guest. also when you start the VM all your gui apps will be force terminated however as workaround you can use [[Xpra]] to deattach to another Display before starting VM and reattach the Apps to display after shutting down VM.</div>FoXyhttps://wiki.archlinux.org/index.php?title=QEMU/Guest_graphics_acceleration&diff=731966QEMU/Guest graphics acceleration2022-06-08T06:42:51Z<p>FoXy: Fix Typo</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[Category:Emulation]]<br />
{{Related articles start}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related|QEMU}}<br />
{{Related|KVM}}<br />
{{Related articles end}}<br />
There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.<br />
<br />
== Methods for QEMU guest graphics acceleration ==<br />
<br />
=== QXL video driver and SPICE client for display ===<br />
<br />
[[QEMU#qxl|QXL/SPICE]] is a high-performance display method. However, it is not designed to offer near-bare metal performance.<br />
<br />
=== PCI GPU passthrough ===<br />
<br />
==== PCI VGA/GPU passthrough via OVMF ====<br />
<br />
[[PCI_passthrough_via_OVMF|PCI passthrough]] currently seems to be the most popular method for optimal performance. [https://bbs.archlinux.org/viewtopic.php?id=162768&p=1 This forum thread] (now closed, and may be outdated) may be of interest for problem solving.<br />
<br />
==== Looking Glass ====<br />
<br />
There is a fairly recent passthrough method called [https://looking-glass.hostfission.com/ Looking Glass]. See [https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387 this guide to getting started] which provides some problem solving and user support. Looking Glass uses DXGI (MS DirectX Graphics Infrastructure) to pass complete frames captured from the VM's passed-through video card via shared memory to the host system where they are read (scraped) by a display client running on the bare-metal host.<br />
<br />
=== Fully virtualized GPU support via Intel-specific iGVT-g extension ===<br />
<br />
iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (starting with 5th generation Intel Core(TM) processors). For more information, see [[Intel GVT-g]].<br />
<br />
=== Virgil3d virtio-gpu paravirtualized device driver ===<br />
<br />
[https://docs.mesa3d.org/drivers/virgl.html] virtio-gpu is a paravirtualized 3d accelerated graphics driver, similar to [[QEMU#Installing virtio drivers|non-graphics virtio drivers]] (see [https://www.linux-kvm.org/page/Virtio virtio driver information] and [https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio Windows guest drivers]). <br />
For Linux guests, [[QEMU#virtio|virtio-gpu]] is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See [https://www.reddit.com/r/archlinux/comments/7nmceg/kvmqemu_with_virtiogpu_virgl_support_enabled/ this Reddit Arch thread] and [https://www.kraxel.org/blog/2016/09/using-virtio-gpu-with-libvirt-and-spice/ Gerd Hoffmann's blog for using this with libvirt and spice].<br />
<br />
For Windows guests, there is very little information on [https://studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html VirtIO-gpu OpenGL drivers], and there is [https://lists.freedesktop.org/archives/virglrenderer-devel/2021-January/001897.html a report that Red Hat abandoned work on it]. A [https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b project summary], [https://gitlab.com/spice/win32/virtio-gpu-wddm-dod the DOD (Windows kernel) driver] and [https://github.com/Keenuts/virtio-gpu-win-icd the ICD (Windows userland) driver] are also available. In addition, see [https://www.phoronix.com/scan.php?page=news_item&px=QEMU-3D-Windows-Guests this Phoronix article] and its comments.<br />
<br />
== Methods for Single GPU ==<br />
<br />
=== GPU Passthrough ===<br />
<br />
Currently [[PCI_passthrough_via_OVMF|PCI passthrough]] works for dual-graphic cards only. However there is a workaround for passing single graphic card [https://gitlab.com/risingprismtv/single-gpu-passthrough/-/wikis/1)-Preparations the workaround's tutorial.] The problem with this approach is you have to deattach graphics from the host and use ssh to control the host from the guest. also when you start the VM all your gui apps will be force terminated however as workaround you can use [[Xpra]] to deattach to another Display before starting VM and reattach the Apps to display after shutting down VM.</div>FoXyhttps://wiki.archlinux.org/index.php?title=QEMU/Guest_graphics_acceleration&diff=731965QEMU/Guest graphics acceleration2022-06-08T06:38:19Z<p>FoXy: Adding Single GPU Section</p>
<hr />
<div>[[Category:Virtualization]]<br />
[[Category:Emulation]]<br />
{{Related articles start}}<br />
{{Related|PCI passthrough via OVMF}}<br />
{{Related|QEMU}}<br />
{{Related|KVM}}<br />
{{Related articles end}}<br />
There are multiple methods for virtual machine (VM) graphics display which yield greatly accelerated or near-bare metal performance.<br />
<br />
== Methods for QEMU guest graphics acceleration ==<br />
<br />
=== QXL video driver and SPICE client for display ===<br />
<br />
[[QEMU#qxl|QXL/SPICE]] is a high-performance display method. However, it is not designed to offer near-bare metal performance.<br />
<br />
=== PCI GPU passthrough ===<br />
<br />
==== PCI VGA/GPU passthrough via OVMF ====<br />
<br />
[[PCI_passthrough_via_OVMF|PCI passthrough]] currently seems to be the most popular method for optimal performance. [https://bbs.archlinux.org/viewtopic.php?id=162768&p=1 This forum thread] (now closed, and may be outdated) may be of interest for problem solving.<br />
<br />
==== Looking Glass ====<br />
<br />
There is a fairly recent passthrough method called [https://looking-glass.hostfission.com/ Looking Glass]. See [https://forum.level1techs.com/t/looking-glass-guides-help-and-support/122387 this guide to getting started] which provides some problem solving and user support. Looking Glass uses DXGI (MS DirectX Graphics Infrastructure) to pass complete frames captured from the VM's passed-through video card via shared memory to the host system where they are read (scraped) by a display client running on the bare-metal host.<br />
<br />
=== Fully virtualized GPU support via Intel-specific iGVT-g extension ===<br />
<br />
iGVT-g is limited to integrated Intel graphics on recent Intel CPUs (starting with 5th generation Intel Core(TM) processors). For more information, see [[Intel GVT-g]].<br />
<br />
=== Virgil3d virtio-gpu paravirtualized device driver ===<br />
<br />
[https://docs.mesa3d.org/drivers/virgl.html] virtio-gpu is a paravirtualized 3d accelerated graphics driver, similar to [[QEMU#Installing virtio drivers|non-graphics virtio drivers]] (see [https://www.linux-kvm.org/page/Virtio virtio driver information] and [https://www.linux-kvm.org/page/WindowsGuestDrivers/Download_Drivers virtio Windows guest drivers]). <br />
For Linux guests, [[QEMU#virtio|virtio-gpu]] is fairly mature, having been available since Linux kernel version 4.4 and QEMU version 2.6. See [https://www.reddit.com/r/archlinux/comments/7nmceg/kvmqemu_with_virtiogpu_virgl_support_enabled/ this Reddit Arch thread] and [https://www.kraxel.org/blog/2016/09/using-virtio-gpu-with-libvirt-and-spice/ Gerd Hoffmann's blog for using this with libvirt and spice].<br />
<br />
For Windows guests, there is very little information on [https://studiopixl.com/2017-08-27/3d-acceleration-using-virtio.html VirtIO-gpu OpenGL drivers], and there is [https://lists.freedesktop.org/archives/virglrenderer-devel/2021-January/001897.html a report that Red Hat abandoned work on it]. A [https://gist.github.com/Keenuts/199184f9a6d7a68d9a62cf0011147c0b project summary], [https://gitlab.com/spice/win32/virtio-gpu-wddm-dod the DOD (Windows kernel) driver] and [https://github.com/Keenuts/virtio-gpu-win-icd the ICD (Windows userland) driver] are also available. In addition, see [https://www.phoronix.com/scan.php?page=news_item&px=QEMU-3D-Windows-Guests this Phoronix article] and its comments.<br />
<br />
== Methods for Single GPU ==<br />
<br />
=== GPU Passthrough ===<br />
<br />
Currently [[PCI_passthrough_via_OVMF|PCI passthrough]] works for dual-graphic cards only. However there is a workaround for passing single graphic card [https://gitlab.com/risingprismtv/single-gpu-passthrough/-/wikis/1)-Preparations the workaround's tutorial.] The problem with this approach is you have to deattach graphics from the host and use ssh to control the host from the guest. also when you start the VM all your gui apps will be force terminated however as workaround you can use [[XPRA]] before starting VM and reattach the Apps to display after shutting down VM.</div>FoXyhttps://wiki.archlinux.org/index.php?title=Swap_on_video_RAM&diff=582855Swap on video RAM2019-09-18T19:13:57Z<p>FoXy: Remove warning due to new method</p>
<hr />
<div>[[Category:Graphics]]<br />
[[ja:ビデオメモリにスワップ]]<br />
{{Expansion|This article may need to be expanded or revised for contemporary hardware.}}<br />
{{Related articles start}}<br />
{{Related|Improving performance}}<br />
{{Related articles end}}<br />
This article describes how to utilize video memory for system swap.<br />
<br />
== Method One ==<br />
<br />
===Potential benefits===<br />
A graphics card with GDDR SDRAM or DDR SDRAM may be used as swap by using the MTD subsystem of the kernel. Systems with dedicated graphics memory of 256 MB or greater which also have limited amounts of system memory (DDR SDRAM) may benefit the most from this type of setup.<br />
<br />
{{Note|Using a legacy AGP (Accelerated Graphics Port) card may limit reads to approximately 8 MB per second (although the port speed ranges from 266 MB/s to 2133 MB/s, so it may work faster). The AGP bus has a limited amount of bandwidth.}}<br />
{{Warning|This will not work with binary drivers.}}<br />
{{Warning|Unless your graphics driver can be made to use less RAM than is detected, Xorg may crash when the same section of video RAM is used both to store textures and as swap. Using a video driver that allows you to override the detected video RAM should increase stability.}}<br />
<br />
===Kernel requirements===<br />
MTD is in the mainline kernel since version 2.6.23.<br />
<br />
===Pre-setup===<br />
When you are running a kernel with MTD modules, you have to load the modules, specifying the PCI address ranges that correspond to the RAM on your video card.<br />
<br />
To find the available memory ranges run the following command and look for the VGA compatible controller section (see the example below).<br />
<br />
{{hc|$ lspci -vvv|<nowiki><br />
01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 670] (rev a1) (prog-if 00 [VGA controller])<br />
Subsystem: ASUSTeK Computer Inc. Device 8405<br />
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-<br />
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-<br />
Latency: 0<br />
Interrupt: pin A routed to IRQ 57<br />
Region 0: Memory at f5000000 (32-bit, non-prefetchable) [size=16M]<br />
Region 1: Memory at e8000000 (64-bit, prefetchable) [size=128M]<br />
Region 3: Memory at f0000000 (64-bit, prefetchable) [size=32M]<br />
Region 5: I/O ports at e000 [size=128]<br />
[virtual] Expansion ROM at f6000000 [disabled] [size=512K]<br />
Capabilities: <access denied><br />
Kernel driver in use: nvidia<br />
Kernel modules: nouveau, nvidia</nowiki>}}<br />
<br />
{{Note|Systems with multiple GPUs will likely have multiple entries here.}}<br />
<br />
Of most potential benefit is a region that is prefetchable, 64-bit, and the largest in size.<br />
{{Note|The graphics card used above has 2 GB of GDDR5 SDRAM, though as indicated above the full amount is not exposed or listed by the command provided above.}}<br />
<br />
A video card needs some of its memory to function, so some calculations are needed. The offsets are easy to calculate as powers of 2. The card should use the beginning of the address range as a framebuffer for textures and such. However, as indicated at the beginning of this article, if two programs try to write to the same sectors, stability issues are likely to occur.<br />
<br />
{{Warning|The following example is dated and may no longer be accurate.}}<br />
<br />
As an example: For a total of 256 MB of graphics memory, the formula is 2^28 (two to the twenty-eighth power). Approximately 64 MB could be left for graphics memory and as such the start range for the swap usage of graphics memory would be calculated with the formula 2^26. <br />
<br />
Using the numbers above, you can take the difference and determine a reasonable range for usage as swap memory,<br />
leaving 2^25 (32 MB) for the normal function (less will work fine).<br />
<br />
===Setup===<br />
Load the modules:<br />
{{hc|# /etc/modules-load.d/vramswap.conf|<nowiki><br />
slram<br />
mtdblock<br />
</nowiki>}} <br />
<br />
systemd service:<br />
{{hc|# /usr/lib/systemd/system/vramswap.service|<nowiki><br />
[Unit]<br />
Description=Swap on Video RAM<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/usr/bin/bash -c "mkswap /dev/mtdblock0 && swapon /dev/mtdblock0 -p 10"<br />
ExecStop=/usr/bin/bash -c "swapoff /dev/mtdblock0"<br />
RemainAfterExit=yes<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
</nowiki>}}<br />
<br />
Add the following.<br />
{{hc|# /etc/modprobe.d/modprobe.conf|<nowiki><br />
options slram map=VRAM,0xStartRange,+0xUsedAmount</nowiki>}}<br />
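<br />
For instance, with the 128 MB prefetchable region at {{ic|0xe8000000}} from the ''lspci'' example above, reserving the first 32 MB (0x2000000) for the card and using the remaining 96 MB (0x6000000) as swap would look like this (illustrative values only; substitute your own):<br />
<br />
 options slram map=VRAM,0xEA000000,+0x6000000<br />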
<br />
====Xorg driver config====<br />
To keep X stable, your video driver needs to be told to use less than the detected videoram.<br />
<br />
{{hc|# /etc/X11/xorg.conf.d/vramswap.conf|<br />
Section "Device"<br />
Driver "radeon" # or whichever other driver you use<br />
VideoRam 32768<br />
#other stuff<br />
EndSection}}<br />
The above example specifies that you use 32 MB of graphics memory.<br />
<br />
{{Note|Some drivers might take the number for videoram as being in MiB. See relevant manpages.}}<br />
<br />
===Troubleshooting===<br />
The following command shows which swap spaces are in use, such as disk partitions, flash devices and, as in this example, swap on video RAM:<br />
<br />
{{bc|swapon -s}} <br />
<br />
===See also===<br />
* [http://www.linux-mtd.infradead.org MTD website]<br />
<br />
== Method Two ==<br />
=== Setup ===<br />
1. [[Install]] the {{AUR|vramfs-git}} package from the AUR.<br />
<br />
2. Create an empty directory to use as a mount point, e.g. {{ic|/tmp/vram}}.<br />
<br />
3. Execute the following commands as root:<br />
<br />
<nowiki><br />
vramfs /tmp/vram 256MB -f<br />
dd if=/dev/zero of=/tmp/vram/swapfile bs=1M count=200<br />
chmod 600 /tmp/vram/swapfile<br />
mkswap /tmp/vram/swapfile<br />
swapon /tmp/vram/swapfile</nowiki><br />
<br />
4. Your swap should now be ready. Replace ''256MB'' and ''200'' with your own values.<br />
<br />
{{Note| You need to repeat the above commands after each reboot.}}<br />
{{Tip| You can use the /tmp/vram as temp storage.}}<br />
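<br />
To avoid retyping these commands, they can be collected in a small script that is run as root after each boot. A sketch, assuming the same sizes and mount point as above (the short sleep simply gives the FUSE mount time to come up):<br />
<br />
{{hc|/usr/local/bin/vram-swap.sh|<nowiki><br />
#!/bin/bash<br />
# mount vramfs and enable a swap file on it<br />
mkdir -p /tmp/vram<br />
vramfs /tmp/vram 256MB -f &<br />
sleep 2<br />
dd if=/dev/zero of=/tmp/vram/swapfile bs=1M count=200<br />
chmod 600 /tmp/vram/swapfile<br />
mkswap /tmp/vram/swapfile<br />
swapon /tmp/vram/swapfile<br />
</nowiki>}}<br />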
<br />
===See also===<br />
* [https://github.com/Overv/vramfs Github Repository]</div>FoXyhttps://wiki.archlinux.org/index.php?title=Swap_on_video_RAM&diff=582854Swap on video RAM2019-09-18T19:04:59Z<p>FoXy: Add new method.</p>
<hr />
<div>[[Category:Graphics]]<br />
[[ja:ビデオメモリにスワップ]]<br />
{{Expansion|This article may need to be expanded or revised for contemporary hardware.}}<br />
{{Out of date|Graphics hardware referenced is quite old at this point. This article primarily references a now archived article from Gentoo's wiki.}}<br />
{{Related articles start}}<br />
{{Related|Improving performance}}<br />
{{Related articles end}}<br />
Article on utilizing video memory for system swap.<br />
<br />
== Method One ==<br />
<br />
===Potential benefits===<br />
A graphics card with GDDR SDRAM or DDR SDRAM may be used as swap by using the MTD subsystem of the kernel. Systems with dedicated graphics memory of 256 MB or greater which also have limited amounts of system memory (DDR SDRAM) may benefit the most from this type of setup.<br />
<br />
{{Note|Using legacy AGP (Accelerated Graphics Port) card may limit reads to approximately 8 MB per second (but port speed is from 266MB/s to 2133MB/s so may it work fast). AGP bus has a limited amount of bus bandwidth.}}<br />
{{Warning|This will not work with binary drivers.}}<br />
{{Warning|Unless your graphics driver can be made to use less ram than is detected, Xorg may crash when you try to use the same section of RAM to store textures as swap. Using a video driver that allows you to override videoram should increase stability.}}<br />
<br />
===Kernel requirements===<br />
MTD is in the mainline kernel since version 2.6.23.<br />
<br />
===Pre-setup===<br />
When you are running a kernel with MTD modules, you have to load the modules specifying the pci address ranges that correspond to the ram on your video card.<br />
<br />
To find the available memory ranges run the following command and look for the VGA compatible controller section (see the example below).<br />
<br />
{{hc|$ lspci -vvv|<nowiki><br />
01:00.0 VGA compatible controller: NVIDIA Corporation GK104 [GeForce GTX 670] (rev a1) (prog-if 00 [VGA controller])<br />
Subsystem: ASUSTeK Computer Inc. Device 8405<br />
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-<br />
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-<br />
Latency: 0<br />
Interrupt: pin A routed to IRQ 57<br />
Region 0: Memory at f5000000 (32-bit, non-prefetchable) [size=16M]<br />
Region 1: Memory at e8000000 (64-bit, prefetchable) [size=128M]<br />
Region 3: Memory at f0000000 (64-bit, prefetchable) [size=32M]<br />
Region 5: I/O ports at e000 [size=128]<br />
[virtual] Expansion ROM at f6000000 [disabled] [size=512K]<br />
Capabilities: <access denied><br />
Kernel driver in use: nvidia<br />
Kernel modules: nouveau, nvidia</nowiki>}}<br />
<br />
{{Note|Systems with multiple GPUs will likely have multiple entries here.}}<br />
<br />
Of most potential benefit is a region that is prefetchable, 64-bit, and the largest in size.<br />
{{Note|The graphics card used above has 2 GB of GDDR5 SDRAM, though as indicated above the full amount is not exposed or listed by the command provided above.}}<br />
<br />
A video card needs some of its memory to function, as such some calculations are needed. The offsets are easy to calculate as powers of 2. The card should use the beginning of the address range as a framebuffer for textures and such. However, if limited or as indicated in the beginning of this article, if two programs try to write to the same sectors, stability issues are likely to occur.<br />
<br />
{{Warning|The following example is dated and may no longer be accurate.}}<br />
<br />
As an example: For a total of 256 MB of graphics memory, the formula is 2^28 (two to the twenty-eighth power). Approximately 64 MB could be left for graphics memory and as such the start range for the swap usage of graphics memory would be calculated with the formula 2^26. <br />
<br />
Using the numbers above, you can take the difference and determine a reasonable range for usage as swap memory,<br />
leaving 2^25 (32 MB) for the normal function (less will work fine).<br />
<br />
===Setup===<br />
Load the modules:<br />
{{hc|# /etc/modules-load.d/vramswap.conf|<nowiki><br />
slram<br />
mtdblock<br />
</nowiki>}} <br />
<br />
systemd service:<br />
{{hc|# /usr/lib/systemd/system/vramswap.service|<nowiki><br />
[Unit]<br />
Description=Swap on Video RAM<br />
<br />
[Service]<br />
Type=oneshot<br />
ExecStart=/usr/bin/bash -c "mkswap /dev/mtdblock0 && swapon /dev/mtdblock0 -p 10"<br />
ExecStop=/usr/bin/bash -c "swapoff /dev/mtdblock0"<br />
RemainAfterExit=yes<br />
<br />
[Install]<br />
WantedBy=multi-user.target<br />
</nowiki>}}<br />
<br />
Add the following.<br />
{{hc|# /etc/modprobe.d/modprobe.conf|<nowiki><br />
options slram map=VRAM,0xStartRange,+0xUsedAmount</nowiki>}}<br />
<br />
====Xorg driver config====<br />
To keep X stable, your video driver needs to be told to use less than the detected videoram.<br />
<br />
{{hc|# /etc/X11/xorg.conf.d/vramswap.conf|<br />
Section "Device"<br />
Driver "radeon" # or whichever other driver you use<br />
VideoRam 32768<br />
#other stuff<br />
EndSection}}<br />
The above example specifies that you use 32 MB of graphics memory.<br />
<br />
{{Note|Some drivers might take the number for videoram as being in MiB. See relevant manpages.}}<br />
<br />
===Troubleshooting===<br />
The following command may help you getting the used swap in the different spaces like disk partitions, flash disks and possibly this example of the swap on video ram<br />
<br />
{{bc|swapon -s}} <br />
<br />
===See also===<br />
* [http://www.linux-mtd.infradead.org MTD website]<br />
<br />
== Method Two ==<br />
=== Setup ===<br />
1. [[Install]] the {{AUR|vramfs-git}} package from the AUR.<br />
<br />
2. Create an empty directory to use as a mount point, e.g. {{ic|/tmp/vram}}.<br />
<br />
3. Execute the following commands as root:<br />
<br />
<nowiki><br />
vramfs /tmp/vram 256MB -f<br />
dd if=/dev/zero of=/tmp/vram/swapfile bs=1M count=200<br />
chmod 600 /tmp/vram/swapfile<br />
mkswap /tmp/vram/swapfile<br />
swapon /tmp/vram/swapfile</nowiki><br />
<br />
4. Your swap should now be ready. Replace ''256MB'' and ''200'' with your own values.<br />
<br />
{{Note| You need to repeat the above commands after each reboot.}}<br />
{{Tip| You can use the /tmp/vram as temp storage.}}<br />
<br />
===See also===<br />
* [https://github.com/Overv/vramfs Github Repository]</div>FoXyhttps://wiki.archlinux.org/index.php?title=FUSE&diff=582810FUSE2019-09-18T13:03:05Z<p>FoXy: Updated List</p>
<hr />
<div>[[Category:FUSE]]<br />
[[es:FUSE]]<br />
{{Related articles start}}<br />
{{Related|File systems}}<br />
{{Related articles end}}<br />
[[Wikipedia:Filesystem in Userspace|Filesystem in Userspace]] (FUSE) is a mechanism for Unix-like operating systems that lets non-privileged users create their own file systems without editing kernel code. This is achieved by running file system code in ''user space'', while the FUSE kernel module provides only a "bridge" to the actual kernel interfaces.<br />
<br />
== Unmounting ==<br />
<br />
FUSE filesystems can be unmounted with:<br />
<br />
$ fusermount -u ''mountpoint''<br />
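<br />
For example, with [[SSHFS]] a remote directory can be mounted and later unmounted entirely as an unprivileged user (host and paths are placeholders):<br />
<br />
 $ sshfs ''user@host'':/remote/path ~/mnt<br />
 $ fusermount -u ~/mnt<br />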
<br />
== List of FUSE filesystems ==<br />
<br />
* {{App|adbfs|Mount an Android device connected via USB.|http://collectskin.com/adbfs/|{{AUR|adbfs-git}}}}<br />
* {{App|apfs-fuse|FUSE driver for APFS (Apple File System).|https://github.com/sgan81/apfs-fuse|{{AUR|apfs-fuse-git}}}}<br />
* {{App|astreamfs|A(synchronous) Stream(ing) (fuse) F(ile)S(ystem).|https://gitlab.com/BylonAkila/astreamfs/tree/master|{{AUR|astreamfs-git}}}}<br />
* {{App|CloudFusion|Linux file system (FUSE) to access Dropbox, Sugarsync, Amazon S3, Google Drive or WebDAV servers.|https://joe42.github.io/CloudFusion/|{{AUR|cloudfusion-git}}}}<br />
* {{App|[[CurlFtpFS]]|Filesystem for accessing FTP hosts based on FUSE and libcurl.|http://curlftpfs.sourceforge.net/|{{Pkg|curlftpfs}}}}<br />
* {{App|[[davfs2]]|File system driver that allows you to mount a WebDAV folder.|https://savannah.nongnu.org/projects/davfs2|{{Pkg|davfs2}}}}<br />
* {{App|[[EncFS]]|Userspace stackable cryptographic file-system.|https://vgough.github.io/encfs/|{{Pkg|encfs}}}}<br />
* {{App|fuseiso|Mount an ISO as a regular user.|http://sourceforge.net/projects/fuseiso/|{{Pkg|fuseiso}}}}<br />
* {{App|GDriveFS|Innovative FUSE wrapper for Google Drive.|https://github.com/dsoprea/GDriveFS|{{AUR|gdrivefs}}}}<br />
* {{App|[[gitfs]]|gitfs is a FUSE file system that fully integrates with git.|https://www.presslabs.com/gitfs/|{{AUR|gitfs}}}}<br />
* {{App|[[gocryptfs]]|gocryptfs is a userspace stackable cryptographic file-system.|https://nuetzlich.net/gocryptfs/|{{Pkg|gocryptfs}}}}<br />
* {{App|google-drive-ocamlfuse|FUSE-based file system backed by Google Drive, written in OCaml.|https://astrada.github.io/google-drive-ocamlfuse/|{{AUR|google-drive-ocamlfuse}}}}<br />
* {{App|gphotofs|FUSE module to mount camera as a filesystem.|http://www.gphoto.org/proj/gphotofs/|{{AUR|gphotofs}}}}<br />
* {{App|HubicFuse|FUSE filesystem to access HubiC cloud storage.|https://github.com/TurboGit/hubicfuse|{{AUR|hubicfuse}}}}<br />
* {{App|MegaFuse|MEGA client for Linux, based on FUSE.|https://github.com/matteoserva/MegaFuse|{{AUR|megafuse-git}}}}<br />
* {{App|s3fs|FUSE-based file system backed by Amazon S3.|https://github.com/s3fs-fuse/s3fs-fuse|{{Pkg|s3fs-fuse}}}}<br />
* {{App|[[SSHFS]]|FUSE-based filesystem client for mounting directories over SSH.|https://github.com/libfuse/sshfs|{{Pkg|sshfs}}}}<br />
* {{App|TMSU|A command-line tool for tagging your files and accessing them through a virtual filesystem.|http://tmsu.org/|{{AUR|tmsu}}}}<br />
* {{App|vdfuse|Mounting VirtualBox disk images (VDI/VMDK/VHD).|https://github.com/muflone/virtualbox-includes|{{AUR|vdfuse}}}}<br />
* {{App|xbfuse|Mount an Xbox (360) ISO.|http://multimedia.cx/xbfuse/|{{AUR|xbfuse-git}}}}<br />
* {{App|xmlfs|Represent an XML file as a directory structure for easy access.|https://github.com/halhen/xmlfs|{{AUR|xmlfs}}}}<br />
* [[Media Transfer Protocol#FUSE filesystems]]<br />
<br />
== See also ==<br />
<br />
* [[Wikipedia:Filesystem in Userspace#Example uses]]</div>FoXyhttps://wiki.archlinux.org/index.php?title=Ryzen&diff=581002Ryzen2019-08-26T07:34:14Z<p>FoXy: Gaming Performance</p>
<hr />
<div>[[Category:CPU]]<br />
{{Related articles start}}<br />
{{Related|Improving performance}}<br />
{{Related|Improving performance/Boot process}}<br />
{{Related|Kernel}}<br />
{{Related|Microcode}}<br />
{{Related articles end}}<br />
<br />
Ryzen is a multithreaded, high performance processor released by AMD in Q1, 2017. It is the first CPU released based on the [[Wikipedia:Zen (microarchitecture)|Zen microarchitecture]]. Its goal is to directly compete with Intel's Broadwell-E processor line, primarily the Core i7-6900K.<br />
<br />
== Installation ==<br />
<br />
=== Kernels ===<br />
<br />
* [[Install]] the {{Pkg|linux-zen}} kernel for additional optimisation. Linux ZEN provides better stability on any processor and also more speed in general (including gaming). It is '''only''' recommended for '''desktop''' users, because the ZEN kernel uses as much power as the default kernel.<br />
* [[Install]] the {{AUR|linux-ck}} kernel, which contains patches designed to improve system responsiveness with specific emphasis on the desktop, but suitable for any workload. The CK kernel is recommended for '''laptop''' users, as it is intended to be very power efficient.<br />
{{Warning|{{AUR|linux-ck}} is not officially supported for Arch Linux and its derivatives. You may experience some issues.}}<br />
<br />
Reconfigure GRUB to use the kernel(s) you have installed so that you can boot into them next time. If you do not use GRUB, you will have to add a boot entry for the kernel(s) to your boot loader's configuration.<br />
<br />
{{Note|For more information about kernels, head on over the [[Kernel]] page.}}<br />
<br />
{{Tip|Have the {{Pkg|linux-lts}} package as a backup kernel in case a kernel upgrade breaks your system.}}<br />
<br />
=== Graphics Drivers ===<br />
<br />
[[Install]] the {{Pkg|mesa}} package which provides the [[Wikipedia:Direct Rendering Infrastructure|DRI]] driver for 3D acceleration (only for Ryzen APUs and/or AMD GPUs).<br />
<br />
=== Enable Microcode Support ===<br />
<br />
[[Install]] the {{Pkg|amd-ucode}} package to enable microcode updates and enable it with the help of the [[Microcode]] page. These updates provide bug fixes that can be critical to the stability of your system. It is '''highly recommended''' to use it despite it being proprietary.<br />
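<br />
Whether the updated microcode was actually applied can be checked in the kernel log after a reboot (the exact output format varies between kernel versions):<br />
<br />
 # dmesg | grep microcode<br />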
<br />
== Tweaking Ryzen ==<br />
<br />
=== Power Managing ===<br />
<br />
[https://github.com/FlyGoat/RyzenAdj RyzenAdj] (CLI) is a tool created by [https://github.com/FlyGoat FlyGoat] to adjust power management settings for Ryzen processors using a terminal emulator.<br />
<br />
{{Tip|You can use {{Pkg|lm_sensors}} to monitor the temperature of your processor.}}<br />
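<br />
Once {{Pkg|lm_sensors}} is set up, current readings can be printed with the ''sensors'' command; on Ryzen the CPU temperature is typically reported by the {{ic|k10temp}} driver:<br />
<br />
 $ sensors<br />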
<br />
=== Overclocking ===<br />
<br />
[https://github.com/r4m0n/ZenStates-Linux/ ZenStates-Linux] (CLI) is a tool made by [https://github.com/r4m0n r4m0n] to adjust the clock speed and voltage. A detailed walkthrough was given by ''catsay'' on the [https://forum.level1techs.com/t/overclock-your-ryzen-cpu-from-linux/126025 Level1Techs forums].<br />
<br />
== Improving Ryzen ==<br />
<br />
=== Enabling The Ananicy Daemon ===<br />
<br />
See [[Improving performance#Ananicy]].<br />
<br />
=== Irqbalance ===<br />
<br />
See [[Improving performance#irqbalance]].<br />
<br />
=== CPU Mitigations ===<br />
<br />
See [[Improving performance#Turn off CPU exploit mitigations]].<br />
<br />
=== Gaming Performance ===<br />
<br />
See [[Gaming#Improving performance]].<br />
<br />
== Compiling A Kernel ==<br />
<br />
See [[Gentoo:Ryzen#Kernel]] on enabling Ryzen support.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Screen-Tearing (APU) ===<br />
<br />
If you are using [[Xorg]] and are experiencing screen-tearing, enabling the {{ic|"TearFree"}} option will fix the problem.<br />
<br />
{{hc|/etc/X11/xorg.conf.d/20-amdgpu.conf|<br />
Section "Device"<br />
Identifier "AMD"<br />
Driver "amdgpu"<br />
Option "TearFree" "true"<br />
EndSection<br />
}}<br />
<br />
{{Note| {{ic|"TearFree"}} is '''not''' Vsync.}}<br />
<br />
== See also ==<br />
<br />
* [[Gentoo:Ryzen]]</div>FoXyhttps://wiki.archlinux.org/index.php?title=Gaming&diff=581001Gaming2019-08-26T07:32:27Z<p>FoXy: GameMode youtube tutorial</p>
<hr />
<div>[[Category:Gaming]]<br />
[[da:List of games]]<br />
[[es:List of games]]<br />
[[it:List of games]]<br />
[[ja:ゲーム]]<br />
[[lt:Games]]<br />
[[ru:Gaming]]<br />
[[zh-hans:List of games]]<br />
{{Related articles start}}<br />
{{Related|List of games}}<br />
{{Related|Video game platform emulators}}<br />
{{Related|Xorg}}<br />
{{Related articles end}}<br />
<br />
This page contains information about running games and related system configuration tips.<br />
<br />
== Game environments ==<br />
<br />
Different environments exist to play games in Linux:<br />
<br />
* Native – games written for Linux.<br />
* Web – games running in a web browser.<br />
** HTML5 games use canvas and WebGL technologies and work in all modern browsers.<br />
** [[Flash]]-based – you need to install the plugin to play.<br />
* [[Video game platform emulators]] – required for running software designed for other architectures and systems.<br />
* [[Wine]] – Windows compatibility layer, allows to run Windows applications on Unix-like operating systems.<br />
* [[Virtual machine]]s – can be used to install compatible operating systems (such as Windows). [[VirtualBox]] has good 3D support. As an extension of this, if you have compatible hardware you can consider VGA passthrough to a Windows KVM guest, keyword is [https://www.kernel.org/doc/Documentation/vfio.txt "virtual function I/O" (VFIO)], or [[PCI passthrough via OVMF]].<br />
<br />
== Getting games ==<br />
<br />
Just because games are available for Linux does not mean that they are native; they might be pre-packaged with Wine or DOSBox.<br />
<br />
For a list of games packaged for Arch in the [[official repositories]] or the [[AUR]], see [[List of games]].<br />
<br />
* {{App|Flathub|Central [[Flatpak]] repository, has small but growing game section.|https://flathub.org/apps/category/Game|{{Pkg|flatpak}}, {{Pkg|discover}}, {{Pkg|gnome-software}}}}<br />
* {{App|[[Wikipedia:GOG.com|GOG.com]]|DRM-free game store.|https://www.gog.com|{{AUR|lgogdownloader}}}}<br />
* {{App|[[Wikipedia:itch.io|itch.io]]|Indie game store.|https://itch.io|{{AUR|itch}}}}<br />
* {{App|[[Wikipedia:Lutris|Lutris]]|Open gaming platform for Linux. Gets games from GOG, Steam, Battle.net, Origin, Uplay and many other sources. Lutris utilizes various [https://lutris.net/runners runners] to launch the games with fully customizable configuration options. |https://lutris.net|{{Pkg|lutris}}}}<br />
* {{App|[[Steam]]|Digital distribution and communications platform developed by Valve.|https://store.steampowered.com|{{Pkg|steam}}}}<br />
<br />
== Running games ==<br />
<br />
Certain games or game types may need special configuration to run or to run as expected.<br />
For the most part, games will work right out of the box in Arch Linux with possibly better performance than on other distributions due to compile time optimizations. However, some special setups may require a bit of configuration or scripting to make games run as smoothly as desired.<br />
<br />
=== Multi-screen setups ===<br />
<br />
Running a multi-screen setup may lead to problems with fullscreen games. In such a case, [[#Starting games in a separate X server|running a second X server]] is one possible solution. Another solution may be found in the [[NVIDIA#Gaming using TwinView|NVIDIA article]] (may also apply to non-NVIDIA users).<br />
<br />
=== Keyboard grabbing ===<br />
<br />
Many games grab the keyboard, notably preventing you from switching windows (also known as alt-tabbing).<br />
<br />
Some SDL games (e.g. Guacamelee) let you disable grabbing by pressing {{ic|Ctrl-g}}.<br />
<br />
{{Note|SDL is known to sometimes not be able to grab the input system. In such a case, it may succeed in grabbing it after a few seconds of waiting.}}<br />
<br />
=== Starting games in a separate X server ===<br />
<br />
In some cases like those mentioned above, it may be necessary or desired to run a second X server. Running a second X server has multiple advantages, such as better performance, the ability to "tab" out of your game by using {{ic|Ctrl+Alt+F7}}/{{ic|Ctrl+Alt+F8}}, and not crashing your primary X session (which may have open work on it) in case a game conflicts with the graphics driver. The new X server is akin to a remote login as far as ALSA is concerned, so your user needs to be part of the {{ic|audio}} group to be able to hear any sound.<br />
<br />
To start a second X server (using the free first person shooter game [http://www.xonotic.org/ Xonotic] as an example) you can simply do: <br />
$ xinit /usr/bin/xonotic-glx -- :1 vt$XDG_VTNR<br />
This can further be spiced up by using a separate X configuration file:<br />
$ xinit /usr/bin/xonotic-glx -- :1 -xf86config xorg-game.conf vt$XDG_VTNR<br />
A good reason to provide an alternative ''xorg.conf'' here may be that your primary configuration makes use of NVIDIA's TwinView, which would render 3D games like Xonotic in the middle of your multi-screen setup, spanned across all screens. This is undesirable; starting a second X server with an alternative config where the second screen is disabled is therefore advised.<br />
<br />
A game-starting script making use of Openbox, placed in your home directory or {{ic|/usr/local/bin}}, may look like this:<br />
<br />
{{hc|~/game.sh|<nowiki><br />
#!/bin/bash<br />
# Start a game on a second X server (:1) inside a minimal Openbox session.<br />
if [ $# -ge 1 ]; then<br />
    game="$(which "$1")"<br />
    openbox="$(which openbox)"<br />
    tmpgame="/tmp/tmpgame.sh"<br />
    # Write a temporary client script that starts Openbox and then the game.<br />
    echo -e "${openbox} &\n${game}" > "${tmpgame}"<br />
    chmod +x "${tmpgame}"<br />
    echo "starting ${game}"<br />
    xinit "${tmpgame}" -- :1 -xf86config xorg-game.conf vt$XDG_VTNR || exit 1<br />
else<br />
    echo "usage: $0 game" >&2<br />
    exit 1<br />
fi<br />
</nowiki>}}<br />
<br />
So after a {{ic|chmod +x}} you would be able to use this script like:<br />
<br />
$ ~/game.sh xonotic-glx<br />
<br />
=== Adjusting mouse detections ===<br />
<br />
For games that require an exceptional amount of mouse skill, adjusting the [[mouse polling rate]] can help improve accuracy.<br />
<br />
=== Binaural Audio with OpenAL ===<br />
<br />
For games using [[Wikipedia:OpenAL|OpenAL]], if you use headphones you may get much better positional audio using OpenAL's [[Wikipedia:Head-related transfer function|HRTF]] filters. To enable, run the following command:<br />
<br />
echo "hrtf = true" >> ~/.alsoftrc<br />
<br />
Alternatively, install {{AUR|openal-hrtf}} from the AUR, and edit the options in {{ic|/etc/openal/alsoftrc.conf}}.<br />
<br />
For Source games, the in-game setting {{ic|dsp_slow_cpu}} must be set to {{ic|1}} to enable HRTF, otherwise the game will apply its own processing instead. You will also either need to set up Steam to use its native runtime (see the example after the list below), or link its copy of {{ic|openal.so}} to your own local copy. For completeness, also use the following options:<br />
<br />
dsp_slow_cpu 1 # Disable in-game spatialization<br />
snd_spatialize_roundrobin 1 # Only spatialize a fraction of sounds per frame<br />
dsp_enhance_stereo 0 # Disable DSP sound effects. You may want to leave this on if you find it does not interfere with your perception of the sound effects.<br />
snd_pitchquality 1 # Use high quality sounds<br />
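<br />
For the native-runtime route, a minimal sketch (assuming the required native libraries, including OpenAL, are installed system-wide) is to start Steam with its bundled runtime disabled:<br />
<br />
$ STEAM_RUNTIME=0 steam<br />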
<br />
=== Tuning PulseAudio ===<br />
<br />
If you are using [[PulseAudio]], you may wish to tweak some default settings to make sure it is running optimally.<br />
<br />
==== Enabling realtime priority and negative nice level ====<br />
<br />
Being an audio daemon, PulseAudio is built to run with realtime priority. However, because of the security risk of it locking up the system, it is scheduled as a regular thread by default. To adjust this, first make sure you are in the {{ic|audio}} group. Then, uncomment and edit the following lines in {{ic|/etc/pulse/daemon.conf}}:<br />
<br />
{{hc|1=/etc/pulse/daemon.conf|2=<br />
high-priority = yes<br />
nice-level = -11<br />
<br />
realtime-scheduling = yes<br />
realtime-priority = 5}}<br />
<br />
and restart PulseAudio.<br />
<br />
==== Using higher quality remixing for better sound ====<br />
<br />
PulseAudio on Arch uses speex-float-0 by default to remix channels, which is considered a 'medium-low' quality remixing. If your system can handle the extra load, you may benefit from setting a higher quality resampler in {{ic|/etc/pulse/daemon.conf}}, for example:<br />
<br />
resample-method = speex-float-10<br />
<br />
==== Matching hardware buffers to Pulse's buffering ====<br />
<br />
Matching the buffers can reduce stuttering and increase performance marginally. See [http://forums.linuxmint.com/viewtopic.php?f=42&t=44862 here] for more details.<br />
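<br />
As an illustrative sketch, the relevant buffer settings live in {{ic|/etc/pulse/daemon.conf}}; the values below are examples only and should be matched to your hardware as described in the linked thread:<br />
<br />
{{hc|1=/etc/pulse/daemon.conf|2=<br />
default-fragments = 2<br />
default-fragment-size-msec = 5}}<br />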
<br />
=== Double check your CPU frequency scaling settings ===<br />
<br />
If your system is configured to load a CPU frequency scaling driver, the default governor is set to ondemand. By default, this governor only raises the clock when the system is utilizing 95% of its CPU, and then only for a very short period of time. This saves power and reduces heat, but has a noticeable impact on performance. You can instead have the system downclock only when it is idle, by tuning the governor. To do so, see [[Cpufrequtils#Tuning the ondemand governor]], or see the example below.<br />
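<br />
As a sketch, with {{Pkg|cpupower}} installed you can switch to the performance governor for a gaming session, or lower the ondemand governor's up-threshold so that it upclocks sooner (the threshold value below is only illustrative):<br />
<br />
# cpupower frequency-set -g performance<br />
# echo 60 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold<br />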
<br />
== Remote gaming ==<br />
<br />
[[Wikipedia:Cloud gaming|Cloud gaming]] has gained a lot of popularity in the last few years because of its low client-side hardware requirements. The main requirement is a stable internet connection (Ethernet or 5 GHz Wi-Fi recommended) with a minimum speed of 5–10 Mbit/s (depending on the video quality and framerate).<br />
<br />
{{Note|Most of the services that work in the browser are usually only compatible with {{AUR|google-chrome}}.}}<br />
<br />
{| class="wikitable sortable" style="text-align: center;"<br />
! Service<br />
! class="unsortable" | Installer<br />
! In browser client<br />
! Use your own host<br />
! Offers host renting<br />
! Full desktop support<br />
! Controller support<br />
! class="unsortable" | Remarks<br />
|-<br />
| [https://dixper.gg/ Dixper] || {{-}} || {{Yes}} || {{Y|Windows-only}} || ? || ? || ? || {{-}}<br />
|-<br />
| [https://liquidsky.com/ LiquidSky] || {{AUR|liquidsky}} || {{No}} || {{No}} || {{Yes}} || {{Yes}} || {{Yes}} || {{-}}<br />
|-<br />
| [https://moonlight-stream.org/ Moonlight] || {{AUR|moonlight-qt}} || {{No}} || {{Y|Windows-only}} || {{No}} || {{Yes}} || {{Yes}} || This is only a client. The host machine needs GeForce Experience installed.<br />
|-<br />
| [https://ui.parsecgaming.com/ Parsec] || {{AUR|parsec-bin}} || {{Yes}} (experimental) || {{Y|Windows-only}} || {{Yes}} || {{Yes}} || {{Yes}} || {{-}}<br />
|-<br />
| [https://playkey.net/ Playkey] || {{AUR|playkey-linux}} || ? || ? || ? || ? || ? || {{-}}<br />
|-<br />
| style="white-space:nowrap" | [https://www.playstation.com/en-gb/explore/playstation-now/ps-now-on-pc/ PlayStation Now] || Runs under [[Wine]] or [[Steam]]'s proton || {{No}} || {{No}} || {{-}} || {{No}} || {{Yes}} || Play PS4, PS3 and PS2 games on PC. Alternatively, you can use [[Video game platform emulators|emulators]].<br />
|-<br />
| [https://rainway.com/ Rainway] || Coming in 2019 Q3 || {{Yes}} || {{Y|Windows-only}} || {{No}} || {{Yes}} || ? || {{-}}<br />
|-<br />
| [https://shadow.tech/ Shadow] || {{AUR|shadow-beta}} || {{No}} || {{No}} || {{Yes}} || {{Yes}} || {{Yes}} || Controller support is dependent on USB over IP, and currently AVC only as HEVC isn't supported<br />
|-<br />
| [[Steam#Steam_Remote_Play|Steam Remote Play]] || Part of {{pkg|steam}} || {{No}} || {{Yes}} || {{No}} || {{No}} || {{Yes}} || {{-}}<br />
|-<br />
| [https://vortex.gg/ Vortex] || {{-}} || {{Yes}} || {{No}} || {{-}} || {{No}} || ? || {{-}}<br />
|}<br />
<br />
== Improving performance ==<br />
<br />
See also main article: [[Improving performance]]. For Wine programs, see [[Wine#Performance]].<br />
<br />
=== Utilities ===<br />
<br />
* {{App|GameMode|Daemon/lib combo for Linux that allows games to request a set of optimisations be temporarily applied to the host OS.|https://github.com/FeralInteractive/gamemode|{{AUR|gamemode}}, {{AUR|lib32-gamemode}}}}<br />
<br />
{{Note|There is also a tutorial on [https://youtu.be/4gyRyYfyGJw YouTube] that you can follow.}}<br />
<br />
=== Improving frame rates and responsiveness with scheduling policies ===<br />
<br />
Most games can benefit from being given the correct scheduling policies, so that the kernel prioritizes their tasks. These policies should ideally be set per-thread by the application itself.<br />
<br />
For programs which do not set scheduling policies on their own, the {{Pkg|schedtool}} utility and its associated daemon {{AUR|schedtoold}} can handle many of these tasks automatically.<br />
<br />
To specify which programs receive which policies, simply edit {{ic|/etc/schedtoold.conf}} and add the program followed by the desired ''schedtool'' arguments.<br />
<br />
==== Policies ====<br />
<br />
{{ic|SCHED_ISO}} (only implemented in the BFS/MuQSS/PDS schedulers found in -pf and -ck [[kernel]]s) – will not only allow the process to use a maximum of 80 percent of the CPU, but will also attempt to reduce latency and stuttering wherever possible. Most if not all games will benefit from this:<br />
<br />
bit.trip.runner -I<br />
<br />
{{ic|SCHED_FIFO}} provides an alternative that can work even better. You should test whether your applications run more smoothly with {{ic|SCHED_FIFO}}, in which case by all means use it instead. Be warned though, as {{ic|SCHED_FIFO}} runs the risk of starving the system! It can be used wherever {{ic|-I}} is used below:<br />
<br />
bit.trip.runner -F -p 15<br />
<br />
==== Nice levels ====<br />
<br />
Secondly, the nice level sets which tasks are processed first, in ascending order. A nice level of -4 is recommended for most multimedia tasks, including games:<br />
<br />
bit.trip.runner -n -4<br />
<br />
==== Core affinity ====<br />
<br />
There is some confusion in development as to whether the driver or the program should handle multithreading. Allowing both the driver and the program to multithread simultaneously can result in significant performance reductions, such as framerate loss and an increased risk of crashes. Examples of this include a number of modern games, and any Wine program which is running with [[Wikipedia:OpenGL Shading Language|GLSL]] enabled. To pin the program to a single core and allow only the driver to multithread, simply use the {{ic|-a 0x''#''}} flag, where ''#'' is a hexadecimal bitmask of the cores to use, e.g.:<br />
<br />
bit.trip.runner -a 0x1<br />
<br />
uses the first core.<br />
<br />
Some CPUs are hyper-threaded and have only 2 or 4 physical cores but show up as 4 or 8, which is best accounted for:<br />
<br />
bit.trip.runner -a 0x5<br />
<br />
which uses virtual cores 0101, i.e. the first and third.<br />
<br />
==== General case ====<br />
<br />
For most games which require high framerates and low latency, using all of these flags together seems to work best. Affinity should be checked per-program, however, as most native games handle threading correctly on their own.<br />
For a general case:<br />
<br />
bit.trip.runner -I -n -4<br />
Amnesia.bin64 -I -n -4<br />
hl2.exe -I -n -4 -a 0x1 #Wine with GLSL enabled<br />
<br />
etc.<br />
<br />
==== Optimus, and other helping programs ====<br />
<br />
As a general rule, any other process which the game requires to operate should be reniced to a level above that of the game itself. Strangely, Wine exhibits a problem known as ''reverse scheduling'': it can often benefit when the more important processes are set to a higher nice level. Wineserver also seems to benefit unconditionally from {{ic|SCHED_FIFO}}, since it rarely consumes the whole CPU and needs higher prioritization when possible.<br />
<br />
optirun -I -n -5<br />
wineserver -F -p 20 -n 19<br />
steam.exe -I -n -5<br />
<br />
== Gaming mouse ==<br />
If you are using a gaming mouse (especially Logitech and SteelSeries), you may want to configure settings such as DPI and LEDs using {{Pkg|piper}}. See [https://github.com/libratbag/libratbag/tree/master/data/devices this page] for a full list of supported devices.</div>FoXyhttps://wiki.archlinux.org/index.php?title=Ryzen&diff=580561Ryzen2019-08-20T07:28:35Z<p>FoXy: CPU Mitigations</p>
<hr />
<div>{{Related articles start}}<br />
{{Related|Improving performance}}<br />
{{Related|Improving performance/Boot process}}<br />
{{Related|Kernel}}<br />
{{Related|Microcode}}<br />
[https://wiki.gentoo.org/wiki/Ryzen#Kernel Gentoo/Ryzen]<br />
{{Related articles end}}<br />
<br />
Ryzen is a multithreaded, high performance processor released by AMD in Q1, 2017. It is the first CPU released based on the [[Wikipedia:Zen (microarchitecture)|Zen microarchitecture]]. Its goal is to directly compete with Intel's Broadwell-E processor line, primarily the Core i7-6900K.<br />
<br />
== Installation ==<br />
<br />
=== Kernels ===<br />
<br />
*[[Install]] the {{Pkg|linux-zen}} kernel for additional optimisation. The zen kernel provides good stability for any processor and more speed in general (including gaming). It is '''only''' recommended for '''desktop''' users, because the zen kernel uses as much power as the default kernel.<br />
<br />
*[[Install]] the {{AUR|linux-ck}} kernel, which contains patches designed to improve system responsiveness with a specific emphasis on the desktop, while remaining suitable for any workload. The ck kernel is recommended for '''laptop''' users, as it is intended to be very power efficient.<br />
{{Warning|{{AUR|linux-ck}} is not officially supported by Arch Linux and its derivatives, but works well.}}<br />
<br />
Reconfigure GRUB to use the kernel(s) you have installed so you can boot into them next time; see the example below. If you do not use GRUB, update your boot loader's configuration accordingly.<br />
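<br />
For GRUB, regenerating the configuration is enough to pick up newly installed kernels:<br />
<br />
# grub-mkconfig -o /boot/grub/grub.cfg<br />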
<br />
{{Note|For more information about kernels, head on over the [[Kernel]] page.}}<br />
{{Tip|Have the {{Pkg|linux-lts}} package as a backup kernel in case a kernel upgrade breaks your system.}}<br />
<br />
=== Graphics Drivers ===<br />
<br />
[[Install]] the {{Pkg|mesa}} package which provides the [[wikipedia:Direct_Rendering_Infrastructure|DRI]] driver for 3D acceleration (only for Ryzen APUs and/or AMD GPUs).<br />
<br />
=== Enable Microcode Support ===<br />
<br />
[[Install]] the {{Pkg|amd-ucode}} package to enable microcode updates and enable it with the help of the [[Microcode]] page. These updates provide bug fixes that can be critical to the stability of your system. It is '''highly recommended''' to use it despite it being proprietary.<br />
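<br />
To verify after a reboot that the microcode update was actually applied, you can, for example, check the kernel log:<br />
<br />
# journalctl -k | grep -i microcode<br />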
<br />
== Tweaking Ryzen ==<br />
<br />
=== Power Managing ===<br />
<br />
[https://github.com/FlyGoat/RyzenAdj RyzenAdj] (CLI) is a tool created by [https://github.com/FlyGoat FlyGoat] to adjust power management settings for Ryzen processors from the command line.<br />
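<br />
As a rough sketch (the flags below are RyzenAdj's own options at the time of writing; the values, in mW and °C, are purely illustrative, so check the project's documentation before applying them):<br />
<br />
# ryzenadj --stapm-limit=25000 --fast-limit=30000 --slow-limit=25000 --tctl-temp=85<br />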
<br />
{{Tip|You can use {{Pkg|lm_sensors}} to monitor the temperature of your processor.}}<br />
<br />
=== Overclocking ===<br />
<br />
[https://github.com/r4m0n/ZenStates-Linux/ ZenStates-Linux] (CLI) is a tool made by [https://github.com/r4m0n r4m0n] to adjust clock speeds and voltages. A detailed example by ''catsay'' can be found on the [https://forum.level1techs.com/t/overclock-your-ryzen-cpu-from-linux/126025 Level1Techs] forums; see also the sketch below.<br />
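<br />
For instance, listing the current P-states before changing anything (a minimal sketch, assuming the script provides the {{ic|--list}} option it shipped with at the time of writing):<br />
<br />
# ./zenstates.py --list<br />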
<br />
=== Sensors ===<br />
By default, {{Pkg|lm_sensors}} does not support the Ryzen series; you need to load the kernel module provided by {{AUR|it87-dkms-git}}.<br />
<br />
{{Tip|You can create a config file in {{ic|/etc/modules-load.d/}} to load the module at startup, for example as shown below.}}<br />
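<br />
A minimal sketch (the file name is arbitrary, only the {{ic|.conf}} suffix matters):<br />
<br />
{{hc|/etc/modules-load.d/it87.conf|<br />
it87<br />
}}<br />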
<br />
== Improving Ryzen ==<br />
<br />
=== Enabling The Ananicy Daemon ===<br />
<br />
See [[Improving_performance#Ananicy]].<br />
<br />
=== Irqbalance ===<br />
<br />
See [[Improving_performance#irqbalance]].<br />
<br />
=== CPU Mitigations ===<br />
<br />
See [[Improving_performance#Turn_off_CPU_exploit_mitigations]].<br />
<br />
=== GameMode ===<br />
<br />
{{AUR|gamemode}} is a daemon/lib combo for Linux made by [https://github.com/FeralInteractive/ FeralInteractive] that allows games to request a set of optimisations be temporarily applied to the host OS and/or a game process. You can either build it yourself from [https://github.com/FeralInteractive/gamemode GitHub] or you can install it from the AUR.<br />
{{Note|There is also a tutorial on [https://youtu.be/4gyRyYfyGJw YouTube] that you can follow.}}<br />
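<br />
A minimal usage sketch, assuming a GameMode version that ships the {{ic|gamemoderun}} helper: launch a native game through it, or set it as a [[Steam]] launch option:<br />
<br />
$ gamemoderun ./game<br />
gamemoderun %command%<br />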
<br />
== Compiling A Kernel ==<br />
<br />
See [https://wiki.gentoo.org/wiki/Ryzen#Kernel Gentoo]'s wiki on enabling Ryzen support.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Screen-Tearing (APU) ===<br />
<br />
If you are using [[Xorg]] and are experiencing screen-tearing, enabling the {{ic|"TearFree"}} option will fix the problem.<br />
<br />
{{hc|/etc/X11/xorg.conf.d/20-amdgpu.conf|<br />
Section "Device"<br />
Identifier "AMD"<br />
Driver "amdgpu"<br />
Option "TearFree" "true"<br />
EndSection<br />
}}<br />
<br />
{{Note| {{ic|"TearFree"}} is '''not''' Vsync.}}</div>FoXyhttps://wiki.archlinux.org/index.php?title=Ryzen&diff=580560Ryzen2019-08-20T07:19:43Z<p>FoXy: lm-sensors support</p>
<hr />
<div>{{Related articles start}}<br />
{{Related|Improving performance}}<br />
{{Related|Improving performance/Boot process}}<br />
{{Related|Kernel}}<br />
{{Related|Microcode}}<br />
[https://wiki.gentoo.org/wiki/Ryzen#Kernel Gentoo/Ryzen]<br />
{{Related articles end}}<br />
<br />
Ryzen is a multithreaded, high performance processor released by AMD in Q1, 2017. It is the first CPU released based on the [[Wikipedia:Zen (microarchitecture)|Zen microarchitecture]]. Its goal is to directly compete with Intel's Broadwell-E processor line, primarily the Core i7-6900K.<br />
<br />
== Installation ==<br />
<br />
=== Kernels ===<br />
<br />
*[[Install]] the {{Pkg|linux-zen}} kernel for more optimisation. Linux ZEN provides better stability for any processors and also provides more speed in general (including gaming). It is '''only''' recommended for '''desktop''' users because the ZEN kernel uses as much power as the default kernel.<br />
<br />
*[[Install]] the {{AUR|linux-ck}} kernel which contains patches that is designed to improve system responsiveness with specific emphasis on the desktop, but suitable to any workload. The CK kernel is recommended for '''laptop''' users as it's intended to be very power efficient.<br />
{{Warning|{{AUR|linux-ck}} isn't officially supported for Arch Linux and its derivatives, but works well.}}<br />
<br />
Reconfigure GRUB to use the kernel(s) you have installed so you can boot into it/them next time. If you do not use GRUB, you will have to create a configuration file to use the kernel(s) for your bootloader.<br />
<br />
{{Note|For more information about kernels, head on over the [[Kernel]] page.}}<br />
{{Tip|Have the {{Pkg|linux-lts}} package as a backup kernel in case a kernel upgrade breaks your system.}}<br />
<br />
=== Graphics Drivers ===<br />
<br />
[[Install]] the {{Pkg|mesa}} package which provides the [[wikipedia:Direct_Rendering_Infrastructure|DRI]] driver for 3D acceleration (only for Ryzen APUs and/or AMD GPUs).<br />
<br />
=== Enable Microcode Support ===<br />
<br />
[[Install]] the {{Pkg|amd-ucode}} package to enable microcode updates and enable it with the help of the [[Microcode]] page. These updates provide bug fixes that can be critical to the stability of your system. It is '''highly recommended''' to use it despite it being proprietary.<br />
<br />
== Tweaking Ryzen ==<br />
<br />
=== Power Managing ===<br />
<br />
[https://github.com/FlyGoat/RyzenAdj RyzenAdj] (CLI) is a tool created by [https://github.com/FlyGoat FlyGoat] to adjust power management settings for Ryzen processors using a terminal emulator.<br />
<br />
{{Tip|You can use {{Pkg|lm_sensors}} to monitor the temperature of your processor.}}<br />
<br />
=== Overclocking ===<br />
<br />
[https://github.com/r4m0n/ZenStates-Linux/ ZenStates-Linux] (CLI) is a tool made by [https://github.com/r4m0n r4m0n] to adjust the clock speed and voltage. A detailed example was given in [https://forum.level1techs.com/t/overclock-your-ryzen-cpu-from-linux/126025 Level1Techs]' forums by ''catsay'' for you to understand it.<br />
<br />
=== Sensors ===<br />
By default {{Pkg|lm_sensors}} doesn't support Ryzen series and you need to load {{AUR|it87-dkms-git}} kernel module.<br />
<br />
{{Tip|You can create a config in '/etc/modules-load.d' to load the module at startup.}}<br />
<br />
== Improving Ryzen ==<br />
<br />
=== Enabling The Ananicy Daemon ===<br />
<br />
See [[Improving_performance#Ananicy]].<br />
<br />
=== Irqbalance ===<br />
<br />
See [[Improving_performance#irqbalance]].<br />
<br />
=== GameMode ===<br />
<br />
{{AUR|gamemode}} is a daemon/lib combo for Linux made by [https://github.com/FeralInteractive/ FeralInteractive] that allows games to request a set of optimisations be temporarily applied to the host OS and/or a game process. You can either build it yourself from [https://github.com/FeralInteractive/gamemode GitHub] or you can install it from the AUR.<br />
{{Note|There is also a tutorial on [https://youtu.be/4gyRyYfyGJw YouTube] that you can follow.}}<br />
<br />
== Compiling A Kernel ==<br />
<br />
See [https://wiki.gentoo.org/wiki/Ryzen#Kernel Gentoo]'s wiki on enabling Ryzen support.<br />
<br />
== Troubleshooting ==<br />
<br />
=== Screen-Tearing (APU) ===<br />
<br />
If you are using [[Xorg|Xorg]] and are experiencing screen-tearing, enabling the {{ic|"TearFree"}} option will fix the problem.<br />
<br />
{{hc|/etc/X11/xorg.conf.d/20-amdgpu.conf|<br />
Section "Device"<br />
Identifier "AMD"<br />
Driver "amdgpu"<br />
Option "TearFree" "true"<br />
EndSection<br />
}}<br />
<br />
{{Note| {{ic|"TearFree"}} is '''not''' Vsync.}}</div>FoXy