Talk:NVIDIA Optimus

Removing libglamoregl.so is necessary

NVIDIA's official page gives a caveat:[1]

Some versions of the “modesetting” driver try to load a sub-module called “glamor”, which conflicts with the NVIDIA GLX implementation. Please ensure that the libglamoregl.so X module is not installed.

Only if I rename or remove /lib/xorg/modules/libglamoregl.so can I get Optimus to work. Disabling it in xorg.conf has no effect. Constroy (talk) 05:33, 29 March 2016 (UTC)

Well, if none of the methods described in the first note of NVIDIA_Optimus#Using_nvidia work, it might be necessary to re-open the mentioned FS#43830 or even open a new one upstream.[2] See the comments by lordheavy in the bug; deleting the libraries is the worst-case option. --Indigo (talk) 13:19, 29 March 2016 (UTC)
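For reference, the xorg.conf approach that was attempted presumably looks something like the following (a minimal sketch; the module name glamoregl is an assumption based on the library file name, and, as noted above, this may not be sufficient with affected driver versions):

 Section "Module"
     # ask the server not to load the glamor sub-module used by modesetting
     Disable "glamoregl"
 EndSection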

Clarify Xorg section

The section https://wiki.archlinux.org/index.php/NVIDIA_Optimus#Using_nvidia discusses Xorg configuration, which I think is confusing at the moment. I would summarize it like so:

  • Is Xorg above 1.17.2?
    • Yes: use the short configuration shown, which only presents a section for nvidia
    • No: use the longer configuration which contains a section for both Device and Screen for both nvidia and intel
  • Did you run into issues after Xorg 1.17.1?
    • No: obviously irrelevant
    • Yes: modify the Device section for intel with the shown code.

My issue is that if one is on 1.17.2 or above, there is no intel section. I have no personal experience with this, but upon reading it I found it confusing. Either one needs an intel section or one doesn't, at least as it currently reads. Perhaps the instructions mean "if you have problems, please add this section to your xorg.conf"? Or perhaps they mean "if you have problems, please use the longer xorg.conf containing both nvidia and intel sections, replacing the following for the intel section"? In either case (or some other case I don't know about), the current instructions don't seem to reconcile the versions of xorg.conf shown. Jwhendy (talk) 19:39, 4 May 2016 (UTC)
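For reference, the "longer" configuration under discussion is along these lines (a rough sketch; the BusID values are placeholders that must match your own lspci output, and whether the intel Device uses the modesetting or intel driver depends on the setup):

 Section "Device"
     Identifier "intel"
     Driver     "modesetting"
     BusID      "PCI:0:2:0"    # placeholder, check lspci
 EndSection

 Section "Screen"
     Identifier "intel"
     Device     "intel"
 EndSection

 Section "Device"
     Identifier "nvidia"
     Driver     "nvidia"
     BusID      "PCI:1:0:0"    # placeholder, check lspci
 EndSection

 Section "Screen"
     Identifier "nvidia"
     Device     "nvidia"
     Option     "AllowEmptyInitialConfiguration"
 EndSection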

Detecting Optimus

Then how do you detect Optimus, Lahwaacz?

Ewtoombs (talk) 20:39, 13 October 2016 (UTC)

You have to find information specific to your laptop, like this. There is no point in providing wrong information on the wiki. -- Lahwaacz (talk) 06:36, 14 October 2016 (UTC)
The second version I provided wasn't wrong. It was actually quite useful and completely accurate. It really is impossible that you have Optimus if you don't have two GPUs, one of which is an NVIDIA. You're just being obstinate. Ewtoombs (talk) 13:40, 14 October 2016 (UTC)
The necessary conditions for having Optimus are listed at the very top of the page, in the first sentence. The only thing you've provided on top of that is lots of mights and maybes, along with completely unrelated info regarding problems with hardware rendering. That's hardly a case of factual accuracy. -- Lahwaacz (talk) 14:21, 14 October 2016 (UTC)

Enough information may be at the top of the page, but a sample lspci of a system with Optimus is still helpful for comparison. Find me a single false positive with the same lspci. You can't. Nobody can. You're still being obstinate. Ewtoombs (talk) 20:23, 13 November 2017 (UTC)
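For illustration, the kind of check being discussed (the output below is a made-up example for a hypothetical Optimus laptop; the exact controller names differ between machines):

 $ lspci | grep -E "VGA|3D"
 00:02.0 VGA compatible controller: Intel Corporation HD Graphics 620
 01:00.0 3D controller: NVIDIA Corporation GM108M [GeForce 940MX]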

Clarify intro

I'm returning to this article after running into some snags with my setup and *still* find it confusing. The intro seems unclear. For example, the summary here:

NVIDIA Optimus is a technology that allows an Intel integrated GPU and discrete NVIDIA GPU to be built into and accessed by a laptop.

From Wikipedia:

Nvidia Optimus is a computer GPU switching technology created by Nvidia which, depending on the resource load generated by client software applications, will seamlessly switch between two graphics adapters within a computer system in order to provide either maximum performance or minimum power draw from the system's graphics rendering hardware.

I make this point because Optimus is defined on the basis of providing *switching*, not just having two cards in a laptop and "accessing" them. It doesn't make any sense to me at all, then, to go on about the "several methods available" for "getting Optimus to work" and include 1) disabling one of the cards via BIOS or 2) using NVIDIA Optimus which, on Linux at least, doesn't actually allow for switching. What benefit does this have over simply telling a user how to use the nvidia driver for a discrete graphics card or intel for an integrated one as part of typical system setup? I think an article about Optimus should be about what Optimus is and simply point to other methods if the user wishes to stick to only one card or the other. As it is, I find that this article suggests something "special" is going to happen, and then some of the methods aren't special at all and don't seem to have anything to do with Optimus.

Jwhendy (talk) 21:13, 28 December 2016 (UTC)

That's quite an aged intro. Wikipedia describes mostly the state on the Windows platform; the Linux section is even older than "our" intro. In any case, if the Linux support is still not 100%, there you go with "something special is going to happen". The point is that Nvidia calls the technology Optimus even though it does not (or at least did not) support switching on Linux. -- Lahwaacz (talk) 21:34, 28 December 2016 (UTC)
Hmmm. I may have to look more into that and didn't take into account that Windows vs. Linux would differ even in terms of what Optimus really means/implies. I find this subject confusing, but perhaps that's inherent, not necessarily meaning something is wrong with the explanations? Thanks for clarifying. Jwhendy (talk) 02:26, 29 December 2016 (UTC)
You're not alone. It is confusing. One further quote that might help you, from Bumblebee#Installing Bumblebee with Intel/NVIDIA: "In Windows, the way that Optimus works is NVIDIA has a whitelist of applications that require Optimus for, and you can add applications to this whitelist as needed. When you launch the application, it automatically decides which card to use. To mimic this behavior in Linux, ..."
Another major reason for the confusion is that "Linux support for Optimus" not only depends on how the nvidia blob or nouveau, kernel, etc. implement it, but also differs considerably depending on the laptop hardware you use (how the manufacturer implemented it; for example, to which GPU the display and display ports are wired up). There simply is no common way to get the technology to work consistently on Linux across different hardware vendors/laptop models. That's why the intro bullet points listing the generic options are not that bad IMO (maybe they could be re-ordered though). Hardware articles in Category:Laptops are there to bridge the gap and crosslink the working solutions for a specific box. --Indigo (talk) 10:23, 29 December 2016 (UTC)
Thankfully, progress on this is in sight now, see https://lists.archlinux.org/archives/list/arch-dev-public@lists.archlinux.org/message/AAZSB6P35UOHYYWIOCLJ2ATOZEYZFG2M/
--Indigo (talk) 16:25, 14 February 2017 (UTC)Reply[reply]

Instructions on using nvidia not working

The instructions on using the discrete GPU do not (no longer?) work; in particular, xrandr --setprovideroutputsource modesetting NVIDIA-0 leads to:

X Error of failed request:  BadValue (integer parameter out of range for operation)
 Major opcode of failed request:  140 (RANDR)
 Minor opcode of failed request:  35 (RRSetProviderOutputSource)
 Value in failed request:  0x1f9
 Serial number of failed request:  16
 Current serial number in output stream:  17

A similar error is reported here: [3]

xrandr --listproviders shows that NVIDIA-0 is available:

Provider 0: id: 0x1f9 cap: 0x0 crtcs: 4 outputs: 3 associated providers: 0 name:NVIDIA-0
Provider 1: id: 0x47 cap: 0x2, Sink Output crtcs: 3 outputs: 5 associated providers: 0 name:modesetting

SuperFluffy (talk) 12:14, 6 July 2017 (UTC)
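For what it is worth, one thing that may be worth trying (an untested sketch based on the provider list above) is referring to the providers by index rather than by name:

 $ xrandr --setprovideroutputsource 1 0   # sink: modesetting (provider 1), source: NVIDIA-0 (provider 0)
 $ xrandr --auto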

Failed to allocate fence signaling event Segfault

In regards to the issue reported here:

https://devtalk.nvidia.com/default/topic/1048609/linux/nvidia_drv-sometime-segfaults/

I've had success for the past few days simply by running linux-lts. Could this be added to the troubleshooting section?

Flatiron (talk) 15:34, 27 November 2019 (UTC) flatiron
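For anyone wanting to try the same workaround, the steps are roughly as follows (a sketch; adjust the boot loader step to whatever you actually use, and note that the proprietary driver needs a matching package such as nvidia-lts or nvidia-dkms):

 # pacman -S linux-lts linux-lts-headers
 # grub-mkconfig -o /boot/grub/grub.cfg   # GRUB example; other boot loaders need their own entry for the -lts kernel

Then reboot and pick the linux-lts entry.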

You don't need to ask permission. Anyone with an account can edit the wiki. If you feel it should be added then go ahead and add it. Worst case is you format it ugly and someone cleans up the formatting. It's only when you're planning a major re-write or something that anyone makes a big deal about discussing edits in advance. See ArchWiki:Contributing and the other articles under "Wiki Interaction" on the Main Page for more info. Bobpaul (talk) 21:42, 27 November 2019 (UTC)

Make PRIME render offload the preferable way

With the advent of PRIME render offload and the fact that the Arch Linux xorg-server package has the necessary patches, I propose we recommend using it, as well as doing some cleanup on the PRIME page or possibly even merging some content from there into this page. I was able to follow the NVIDIA documentation with a few minor changes. Also, it's important to note that Bumblebee is no longer working. Grazzolini (talk) 17:57, 28 November 2019 (UTC)
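For reference, render offload is selected per application via environment variables (a minimal sketch following the NVIDIA render offload documentation; glxinfo is just an example client):

 $ __NV_PRIME_RENDER_OFFLOAD=1 __GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL renderer"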

Use NVIDIA graphics only

User:V1del, are you sure that creating the /etc/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf file masks the /usr/share/X11/xorg.conf.d counterpart? Because when testing for bug FS#64805, I found that not to be the case. Also, please note that this file was changed recently. Grazzolini (talk) 13:45, 6 January 2020 (UTC)

I'm mostly going off of the reports in https://bbs.archlinux.org/viewtopic.php?pid=1880883. I know the file was changed recently, which is why I did the adjustment this way; people using this setup actually want nvidia as their primary GPU, and the current change breaks things for them. I'm not entirely 100% sure on whether a mask happens; the man page is somewhat ambiguous here. On one hand it does mention that precedence ordering is honored:
Xorg supports several mechanisms for supplying/obtaining configuration and run-time parameters: command line options, environment variables, the xorg.conf and xorg.conf.d configuration files, auto-detection, and fallback defaults. When the same information is supplied in more than one way, the highest precedence mechanism is used. The list of mechanisms is ordered from highest precedence to lowest.
Which suggests to me that it would; on the other hand, the docs for OutputClass mention:
OUTPUTCLASS SECTION
       The config file may have multiple OutputClass sections. These sections are optional and are used to provide configuration for a class of output devices as they are automatically added. An output device can match more than one OutputClass section. Each class can override settings from a previous class, so it is best to arrange the sections with the most generic matches first.
So it might be that the options are simply aggregated. Since the new /usr/share file now provides just the minimum baseline, setting these options should have the desired effect regardless (appending the wanted AllowEmptyInitialConfig and PrimaryGPU). Sorry for the weird code blocks, I am unsure how to use them properly with indentation. V1del (talk) 15:49, 6 January 2020 (UTC)
I have taken a look at the xorg.conf man page and it says that Xorg looks in several locations AND then the /usr/share one. So I don't think it's masked at all, but rather combined. The one provided with the package already has AllowEmptyInitialConfig, so I don't think it's necessary. So, with that in mind, can you change your edit so that it says *combined* instead of *masked*? Thanks. Grazzolini (talk) 16:35, 6 January 2020 (UTC)
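For reference, the kind of drop-in being discussed would look roughly like this (a sketch modelled on the shipped 10-nvidia-drm-outputclass.conf; the exact contents of the packaged file and the need for each option vary between driver versions):

 /etc/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf

 Section "OutputClass"
     Identifier "nvidia"
     MatchDriver "nvidia-drm"
     Driver "nvidia"
     Option "AllowEmptyInitialConfiguration"
     Option "PrimaryGPU" "yes"
     ModulePath "/usr/lib/nvidia/xorg"
     ModulePath "/usr/lib/xorg/modules"
 EndSection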

Use switchable graphics

It might be good to mention somewhere that Bumblebee can be used within Wayland compositors to offload applications to the GPU in cases where it wouldn't otherwise be supported (no EGLStream support from the compositor, Xwayland). I'm not sure where it makes the most sense to put that. This page? The Wayland page? The Bumblebee page? nloewen (talk) 17:31, 8 December 2020 (UTC)
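For context, the invocation in question is the usual Bumblebee one (a sketch; whether it works under a given Wayland compositor depends on the Xwayland situation mentioned above):

 $ optirun glxinfo | grep "OpenGL renderer"   # runs the client on the NVIDIA GPU via Bumblebee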

The content should be added to Bumblebee, with just a link to it from Wayland. --Fengchao (talk) 00:14, 17 December 2022 (UTC)

Use NVIDIA graphics only is a viable option

The current description makes it seem like the NVIDIA-only setup is not a good permanent option.

Quote: "it should be utilized for troubleshooting and verifying general functionality, before opting for one of the more automated approaches"

The main argument is that it draws too much power. But if you don't have a super-high-end GPU, this shouldn't be much of a problem (in my case, it's 5 W for the graphics card out of 80 W for the laptop as a whole).

I had massive screen tearing problems with my Intel iGPU. Those are completely gone now that I use NVIDIA for everything (including browsing). Of course, you have to set the right module parameters.

I'm glad that I tried this option and didn't discard it outright "because the Arch Linux docs said so". Flemingalexander (talk) 16:19, 13 March 2024 (UTC)
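Regarding the module parameters mentioned above, the usual one for an NVIDIA-only setup is DRM kernel mode setting (a sketch; whether further parameters are needed depends on the driver version and desktop environment):

 /etc/modprobe.d/nvidia.conf

 # enable DRM kernel mode setting for the proprietary driver
 options nvidia_drm modeset=1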