Avoid screen tearing
Some commonly suggested fixes for tearing likely turn out badly for most users:
- ForceFullCompositionPipeline introduces additional input latency and will cause new issues with certain applications.
- Setting triple buffering in the xorg.conf doesn't just "enable" triple buffering, it forces it for every application, even those that want only double buffering (this breaks vsync in Firefox's OpenGL rendering).
- Setting triple buffering (whether in xorg.conf or the KDE Plasma config) introduces stuttering even in lightweight games such as Counter-Strike 1.6.
-> The only method that doesn't come with nasty side effects seems to be the environment variable export __GL_YIELD="USLEEP" on KDE Plasma. I have benchmarked several games on a GTX 1070 and it had no real performance impact. It wouldn't matter much anyway, since compositing should be turned off for gaming (most games turn it off automatically). Since it can be set up in a Plasma config file, I think we should just link to this: https://wiki.archlinux.org/index.php/KDE#Screen_tearing_with_Nvidia
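For reference, a minimal sketch of making this persistent under Plasma. The ~/.config/plasma-workspace/env/ directory is the Plasma convention for scripts sourced at session startup; the file name gl_yield.sh is an arbitrary choice for this example:

```shell
# Sketch: make __GL_YIELD=USLEEP persistent for Plasma sessions.
# Scripts in ~/.config/plasma-workspace/env/ are sourced at login;
# the file name "gl_yield.sh" is arbitrary.
mkdir -p "$HOME/.config/plasma-workspace/env"
cat > "$HOME/.config/plasma-workspace/env/gl_yield.sh" <<'EOF'
export __GL_YIELD="USLEEP"
EOF
chmod +x "$HOME/.config/plasma-workspace/env/gl_yield.sh"
```

Log out and back in (or source the script) for the variable to take effect in the session.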
- I'm not sure if it should be removed entirely, but it definitely demands some investigation. I see a lot of material on the web about ForceCompositionPipeline (without "Full") fixing tearing issues, with no mention of forcing triple buffering. There seem to be a lot of anecdotes from people saying it helps, so the effects may be specific to different setups.
- Silverhammermba (talk) 15:02, 10 May 2017 (UTC)
ForceCompositionPipeline definitely fixes the tearing for good, but many users probably just don't notice the drawbacks it brings. For example, the increased latency affects even the mouse cursor, and stuttering issues may occur only occasionally or briefly. If I'm not mistaken, X by design doesn't run with vsync; vsync should be handled by the compositor. This works out of the box with GNOME's Mutter, but on Nvidia only with poor performance (and in my experience, none of the tweaks help with that).
- A good test is the vsynctester website. As soon as I set up triple buffering for the driver, KWin, or both, I can no longer get clean vsync in Firefox with OpenGL rendering; there is occasional stutter. There is no stutter if I either run without a compositor and rely solely on Firefox's own OpenGL vsync, or run KWin compositing with export __GL_YIELD="USLEEP" on top of it, which gives a smooth result and window compositing at the same time.
- I've tested this in detail with a GTX 1070 and different drivers; a GTX 780 Ti didn't seem to behave differently.
- Aufkrawall (talk) 17:50, 10 May 2017 (UTC)
- I cannot confirm that export __GL_YIELD="USLEEP" has "no real performance impact".
- I talked to the devs at Feral Interactive, and they pointed out that this setting sends the OpenGL threads to sleep, reducing performance in games such as Deus Ex: Mankind Divided.
- I don't know which games you tested, but every game I tested that needs a bit of GPU power runs worse with this setting.
- Adequate (talk) 12:58, 11 May 2017 (UTC)
- It costs ~5% performance in CPU-bound scenes here.
- Hitman 720p max details:
- default: https://abload.de/img/nosleep9haw7.png
- export __GL_YIELD="USLEEP": https://abload.de/img/sleepweypk.png
- Unigine Valley 720p max details 1xAA:
- default: https://abload.de/img/screenshot_20170511_1b7b48.png
- export __GL_YIELD="USLEEP": https://abload.de/img/screenshot_20170511_1yead4.png
- That 5% penalty is surely no higher than that of ForceFullCompositionPipeline, and it's by far a better deal than the occasional stutter caused by broken/forced triple buffering.
- Aufkrawall (talk) 16:25, 11 May 2017 (UTC)
- On my 980 Ti, I didn't notice a performance difference in raw FPS when switching from ForceFullCompositionPipeline to USLEEP (it might have been 1 or 2 fps), but USLEEP gives a much smoother experience. ForceFullCompositionPipeline gave me an odd effect where games felt strangely jerky even at 60 fps (with vsync on, on a 60 Hz display); I'm guessing it has to do with the way the FPS is limited and certain rendered frames are skipped. This is complete conjecture, but it felt like it would render 60 frames in under a second, hit the FPS cap, wait until the next second, then render another 60 frames and wait again. USLEEP gave a much smoother-feeling game while still hitting the FPS target.
- I'm afraid I can't give a technical explanation, but USLEEP is a much more pleasant gaming experience. On recent Nvidia drivers and for gaming, USLEEP was definitely the better option.
- —This unsigned comment is by Tom B (talk) 21:25, 2 November 2017. Please sign your posts with ~~~~!
When making this change, TripleBuffering should be enabled and AllowIndirectGLXProtocol should be disabled in the driver configuration as well.
Could I get an explanation for this (perhaps an authoritative source)? I can't find one. I'm using ForceCompositionPipeline without TripleBuffering and it works. --Cr0w (talk) 17:46, 17 September 2017 (UTC)
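For context, the combination under discussion corresponds roughly to an xorg.conf Device section like the following sketch (the Identifier value is arbitrary; whether the TripleBuffer and AllowIndirectGLXProtocol options are actually needed is exactly what is disputed above):

```
Section "Device"
    Identifier "Nvidia Card"
    Driver "nvidia"
    # Tearing fix discussed above; "ForceFullCompositionPipeline = On" is the stronger variant
    Option "metamodes" "nvidia-auto-select +0+0 { ForceCompositionPipeline = On }"
    # The article additionally suggests these two; the report above says it works without them
    Option "TripleBuffer" "True"
    Option "AllowIndirectGLXProtocol" "off"
EndSection
```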
Avoid screen tearing, other disadvantages
I'd like to point out another disadvantage of "ForceFullCompositionPipeline=on": it greatly prolongs the time the driver takes to drop the GPU back to lower clocks after they have been boosted by application activity. With this setting enabled, it takes up to one minute to clock down again; with it disabled, only about 15 seconds. With the environment variable export __GL_ExperimentalPerfStrategy=1 it takes only 5 seconds.
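A minimal sketch of trying this variable for a single session before making it permanent (the nvidia-smi query shown in the comment is just one way to watch the clocks):

```shell
# Sketch: test __GL_ExperimentalPerfStrategy=1 for the current session only.
export __GL_ExperimentalPerfStrategy=1
# Launch the game/application from this same shell so it inherits the variable,
# then watch how quickly the graphics clock drops after it becomes idle, e.g.:
#   nvidia-smi --query-gpu=clocks.gr --format=csv -l 1
```

If the clock-down behavior improves, the export can be moved to a startup script to make it permanent.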
Sluggish slow motion
Switching from i915 to nvidia-lts caused display slow motion for me. The mouse pointer was still fast, but hovering over GNOME buttons took too long to show an effect, and clicks took long to register.
This was solved by switching the BIOS to display on the dedicated GPU instead of the integrated one. The i915 driver is still loaded and required, though. Modesetting did not make a difference.