NVIDIA Optimus

From ArchWiki


NVIDIA Optimus is a technology that allows an integrated GPU and a dedicated NVIDIA GPU to work together in a laptop.

Available methods

There are several methods available:

  • #Use integrated graphics only - saves power, because the NVIDIA GPU will be completely powered off.
  • #Use NVIDIA graphics only - offers better performance than the integrated graphics, but drains the battery faster (which is inconvenient for mobile devices). This uses the same method as optimus-manager and nvidia-xrun, and should be used for troubleshooting and verifying general functionality before opting for one of the more automated approaches.
  • Using both (use NVIDIA GPU when needed and keep it powered off to save power):
    • #Using PRIME render offload - official method supported by NVIDIA.
    • #Using optimus-manager - switches graphics with a single command (logout and login required to take effect). It achieves maximum performance out of NVIDIA GPU and switches it off if not in use. Since the 1.4 release AMD+NVIDIA combination is also supported.
    • #Using nvidia-xrun - run separate X session on different TTY with NVIDIA graphics. It achieves maximum performance out of NVIDIA GPU and switches it off if not in use.
    • #Using Bumblebee - provides Windows-like functionality by allowing you to run selected applications with NVIDIA graphics while using Intel graphics for everything else. Has significant performance issues.
    • #Using switcheroo-control - Similar to Bumblebee, but specifically for GNOME users. Allows applications to specify if they prefer the dedicated GPU in their desktop entry file, and lets you manually run any application on the NVIDIA GPU from the right-click menu.
    • #Using nouveau - offers poorer performance (compared to the proprietary NVIDIA driver) and may cause issues with sleep and hibernate. Does not work with latest NVIDIA GPUs.
    • #Using EnvyControl - Similar to optimus-manager but does not require extensive configuration or having a daemon running in the background as well as having to install a patched version of GDM if you are a GNOME user.
    • #Using NVidia-eXec - Similar to Bumblebee, but without the performance impact. It works on both Xorg and Wayland. This package is experimental, and is currently being tested only under GNOME/GDM.
    • #Using nvidia-switch - Similar to nvidia-xrun, but without the need to change TTY; switching is done by logging out and back in through your display manager. This package is being tested on Debian-based systems but, like nvidia-xrun, it should work on all Linux systems.
Note: All of these options are mutually exclusive. If you test one approach and then decide on another, make sure to revert any configuration changes made while following the first approach before attempting another method; otherwise, file conflicts and undefined behaviour may arise.

Use integrated graphics only

If you only intend to use a single GPU without switching, check the options in your system's BIOS. There should be an option to disable one of the cards. Some laptops only allow disabling the discrete card, or vice versa, but it is worth checking if you plan to use just one of the cards.

If your BIOS does not allow disabling the NVIDIA graphics, you can disable it from the Linux kernel itself. See Hybrid graphics#Fully power down discrete GPU.

Use CUDA without switching the rendering provider

You can use CUDA without switching rendering to the NVIDIA graphics. All you need to do is ensure that the NVIDIA card is powered on before starting a CUDA application; see Hybrid graphics#Fully power down discrete GPU for details.

Now when you start a CUDA application, it will automatically load all necessary kernel modules. Before turning the NVIDIA card off after using CUDA, the nvidia kernel modules have to be unloaded first:

# rmmod nvidia_uvm
# rmmod nvidia

Use NVIDIA graphics only

The proprietary NVIDIA driver can be configured to be the primary rendering provider. It has notable screen-tearing issues unless you enable PRIME synchronization by enabling NVIDIA#DRM kernel mode setting; see [1] for further information. It does allow use of the discrete GPU and has (as of January 2017) a marked edge in performance over the nouveau driver.

First, install the NVIDIA driver and xorg-xrandr. Then configure /etc/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf; its options will be combined with the package-provided /usr/share/X11/xorg.conf.d/10-nvidia-drm-outputclass.conf to provide compatibility with this setup.

Note: On some setups this breaks automatic detection of the display's values by the nvidia driver through the EDID file. As a workaround, see #Resolution, screen scan wrong. EDID errors in Xorg.log.
Section "OutputClass"
    Identifier "intel"
    MatchDriver "i915"
    Driver "modesetting"
EndSection

Section "OutputClass"
    Identifier "nvidia"
    MatchDriver "nvidia-drm"
    Driver "nvidia"
    Option "AllowEmptyInitialConfiguration"
    Option "PrimaryGPU" "yes"
    ModulePath "/usr/lib/nvidia/xorg"
    ModulePath "/usr/lib/xorg/modules"
EndSection

Next, add the following two lines to the beginning of your ~/.xinitrc:

xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto

Now reboot to load the drivers, and X should start.

If your display DPI is not correct, add the following line:

xrandr --dpi 96

If you get a black screen when starting X, make sure there are no ampersands after the two xrandr commands in ~/.xinitrc. With ampersands, the window manager may start before the xrandr commands finish executing, resulting in a black screen.

Display managers

If you are using a display manager then you will need to create or edit a display setup script for your display manager instead of using ~/.xinitrc.


For the LightDM display manager:

xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto

Make the script executable.

Now configure lightdm to run the script by editing the [Seat:*] section in /etc/lightdm/lightdm.conf:
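LightDM's key for this is display-setup-script. A minimal sketch, assuming the script above was saved to the hypothetical path /etc/lightdm/display_setup.sh:

```ini
[Seat:*]
display-setup-script=/etc/lightdm/display_setup.sh
```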


Now reboot and your display manager should start.


For the SDDM display manager (SDDM is the default DM for KDE):

xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
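SDDM executes /usr/share/sddm/scripts/Xsetup (its default DisplayCommand) before starting the greeter, so the two lines above can simply be appended there:

```shell
#!/bin/sh
# /usr/share/sddm/scripts/Xsetup - run by SDDM before the greeter starts
xrandr --setprovideroutputsource modesetting NVIDIA-0
xrandr --auto
```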


For the GDM display manager, create two new .desktop files:

[Desktop Entry]
Type=Application
Name=Optimus
Exec=sh -c "xrandr --setprovideroutputsource modesetting NVIDIA-0; xrandr --auto"

Make sure that GDM uses X as its default backend.
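GDM defaults to the Wayland backend where supported; to force the X backend, uncomment WaylandEnable in /etc/gdm/custom.conf:

```ini
# /etc/gdm/custom.conf
[daemon]
WaylandEnable=false
```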

Checking 3D

You can check whether the NVIDIA graphics are being used by installing mesa-utils and running:

$ glxinfo | grep NVIDIA

Further Information

For more information, see NVIDIA's official page on the topic [2].

Use switchable graphics

Using PRIME render offload

This is the official NVIDIA method to support switchable graphics.

See PRIME#PRIME render offload for details.

Using nouveau

See PRIME for graphics switching and nouveau for open-source NVIDIA driver.

Using Bumblebee

See Bumblebee.

Using switcheroo-control

See PRIME#Gnome integration.

Using nvidia-xrun

See nvidia-xrun.

Using optimus-manager

See Optimus-manager upstream documentation. It covers both installation and configuration in Arch Linux systems.

Using EnvyControl

See EnvyControl upstream documentation. It covers both installation and usage instructions.

Using NVidia-eXec

See NVidia-eXec upstream documentation. It covers both installation and usage instructions.

Using nvidia-switch

See nvidia-switch upstream documentation. It covers both installation and usage instructions.


Tearing/Broken VSync

Enable DRM kernel mode setting, which will in turn enable the PRIME synchronization and fix the tearing.
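DRM kernel mode setting can be enabled with a module option (the nvidia-drm.modeset=1 kernel parameter works as well):

```ini
# /etc/modprobe.d/nvidia-drm.conf
options nvidia_drm modeset=1
```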

You can read the official forum thread for details.

Failed to initialize the NVIDIA GPU at PCI:1:0:0 (GPU fallen off the bus / RmInitAdapter failed!)

Add rcutree.rcu_idle_gp_delay=1 to the kernel parameters. Original topic can be found in [3] and [4].

Resolution, screen scan wrong. EDID errors in Xorg.log

This is due to the NVIDIA driver not detecting the EDID for the display. You need to manually specify the path to an EDID file or provide the same information in a similar way.

To provide the path to the EDID file edit the Device Section for the NVIDIA card in Xorg.conf, adding these lines and changing parts to reflect your own system:

Section "Device"
    Option    "ConnectedMonitor" "CRT-0"
    Option    "CustomEDID" "CRT-0:/sys/class/drm/card0-LVDS-1/edid"
    Option    "IgnoreEDID" "false"
    Option    "UseEDID" "true"
EndSection

If Xorg will not start, try changing all references of CRT to DFP. card0 is the identifier of the Intel card, to which the display is connected via LVDS; the edid binary is in that directory. If the hardware arrangement is different, the value for CustomEDID might vary, but this has yet to be confirmed. In any case, the path will start with /sys/class/drm.

Alternatively, you can generate your EDID with tools like read-edid and point the driver to this file. Even modelines can be used, but then be sure to change UseEDID and IgnoreEDID accordingly.
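As a sketch, the read-edid package provides get-edid, which dumps the display's EDID as a binary blob that CustomEDID can then point to (the output path is arbitrary):

```shell
# get-edid > /etc/X11/edid.bin
```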

Wrong resolution without EDID errors

nvidia-xconfig may generate incorrect information in xorg.conf, in particular wrong monitor refresh rates that restrict the possible resolutions. Try commenting out the HorizSync/VertRefresh lines. If this helps, you can probably also remove everything else not mentioned in this article.

Lockup issue (lspci hangs)

Symptoms: lspci hangs, system suspend fails, shutdown hangs, optirun hangs.

Applies to: newer laptops with GTX 965M or alike when bbswitch (e.g. via Bumblebee) or nouveau is in use.

Turning on the dGPU power resource may fail and hang in ACPI code (kernel bug 156341).

When using nouveau, disabling runtime power-management stops it from changing the power state, thus avoiding this issue. To disable runtime power-management, add nouveau.runpm=0 to the kernel parameters.
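With GRUB, for example, the parameter is appended to GRUB_CMDLINE_LINUX_DEFAULT (the existing values shown here are placeholders) and grub.cfg is regenerated with grub-mkconfig -o /boot/grub/grub.cfg:

```ini
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet nouveau.runpm=0"
```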

For known model-specific workarounds, see this issue. In other cases you can try to boot with acpi_osi="!Windows 2015" or acpi_osi=! acpi_osi="Windows 2009" added to your Kernel parameters. (Consider reporting your laptop to that issue.)

No screens found on a laptop/NVIDIA Optimus

Check if the lspci output is similar to:

$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02)
01:00.0 VGA compatible controller: nVidia Corporation Device 0df4 (rev a1)

NVIDIA drivers have offered Optimus support since 319.12 Beta [5], with kernel 3.9 and later.

Another solution is to install the Intel driver to handle the screens and then run 3D software through Bumblebee to make it use the NVIDIA card.

Random freezes "(EE) NVIDIA(GPU-0): WAIT"

Using the proprietary drivers on a setup with an integrated AMD card, with the dedicated NVIDIA card set as the only one in use, users report freezes of up to 10 seconds, with the following errors in the Xorg logs:

[   219.796] (EE) NVIDIA(GPU-0): WAIT (2, 8, 0x8000, 0x0002e1c4, 0x0002e1cc)
[   226.796] (EE) NVIDIA(GPU-0): WAIT (1, 8, 0x8000, 0x0002e1c4, 0x0002e1cc)

While this is not root-caused yet, it seems linked to a conflict in how the integrated and dedicated cards interact with Xorg.

The workaround is to use switchable graphics, see PRIME#PRIME render offload for details.

"No Devices detected" with optimus-manager

In some cases lspci will show the PCI domain as the first output column, which breaks the files generated by optimus-manager when it tries to map the BusID on multiple laptop models.

If you face a black screen where your GUI never loads, a partially loaded GUI with console artifacts, or Xorg crashing with (EE) - No Devices detected, the workaround and bug reports are available at the upstream GitHub.
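For reference, Xorg's BusID uses decimal values while lspci prints hexadecimal slot numbers, optionally prefixed with a domain such as 0000:. A hedged sketch of the mapping optimus-manager must perform; lspci_to_busid is a hypothetical helper, not its actual code:

```shell
lspci_to_busid() {
    slot=$1
    # Strip an optional PCI domain prefix, e.g. '0000:01:00.0' -> '01:00.0'
    case $slot in
        *:*:*) slot=${slot#*:} ;;
    esac
    bus=${slot%%:*}
    rest=${slot#*:}
    dev=${rest%%.*}
    func=${rest#*.}
    # printf interprets the 0x prefix as hexadecimal and prints decimal
    printf 'PCI:%d:%d:%d\n' "0x$bus" "0x$dev" "0x$func"
}

lspci_to_busid 01:00.0        # prints PCI:1:0:0
lspci_to_busid 0000:3c:00.0   # prints PCI:60:0:0
```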