[[Category:Graphics]]
[[Category:X server]]
[[es:Bumblebee]]
[[fr:Bumblebee]]
[[it:Bumblebee]]
[[ja:Bumblebee]]
[[ru:Bumblebee]]
[[tr:Bumblebee]]
[[zh-hans:Bumblebee]]
{{Related articles start}}
{{Related|PRIME}}
{{Related|Nvidia-xrun}}
{{Related|NVIDIA Optimus}}
{{Related|Nouveau}}
{{Related|Intel graphics}}
{{Related articles end}}
From Bumblebee's [https://github.com/Bumblebee-Project/Bumblebee/wiki/FAQ FAQ]:

:Bumblebee is an effort to make NVIDIA Optimus enabled laptops work in GNU/Linux systems. Such feature involves two graphics cards with two different power consumption profiles plugged in a layered way sharing a single framebuffer.

{{Note|1=Bumblebee has significant performance issues[https://github.com/Witko/nvidia-xrun/issues/4#issuecomment-153386837][https://bbs.archlinux.org/viewtopic.php?pid=1822926]. See [[NVIDIA Optimus]] for alternative solutions.}}

== Bumblebee: Optimus for Linux ==

[https://www.nvidia.com/object/optimus_technology.html Optimus Technology] is a [https://hybrid-graphics-linux.tuxfamily.org/index.php?title=Hybrid_graphics hybrid graphics] implementation without a hardware multiplexer. The integrated GPU manages the display while the dedicated GPU manages the most demanding rendering and ships the work to the integrated GPU to be displayed. When the laptop is running on battery supply, the dedicated GPU is turned off to save power and prolong the battery life. It has also been tested successfully with desktop machines with Intel integrated graphics and an NVIDIA dedicated graphics card.

Bumblebee is a software implementation comprising two parts:

* Render programs off-screen on the dedicated video card and display them on the screen using the integrated video card. This bridge is provided by VirtualGL or primus (read further) and connects to an X server started for the discrete video card.
* Disable the dedicated video card when it is not in use (see the [[#Power management]] section).

It tries to mimic the Optimus technology behavior: it uses the dedicated GPU for rendering when needed and powers it down when not in use. The present releases only support on-demand rendering; automatically starting a program with the discrete video card based on workload is not implemented.

== Installation ==

Before installing Bumblebee, check your BIOS and activate Optimus (older laptops call it "switchable graphics") if possible (the BIOS does not have to provide this option). If neither "Optimus" nor "switchable" is in the BIOS, still make sure both GPUs are enabled and that the integrated graphics (igfx) is the initial display (primary display). The display should be connected to the onboard integrated graphics, not the discrete graphics card. If integrated graphics had previously been disabled and discrete graphics drivers installed, be sure to remove {{ic|/etc/X11/xorg.conf}} or the conf file in {{ic|/etc/X11/xorg.conf.d}} related to the discrete graphics card.
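
For example, if a leftover configuration file is all that still points X at the discrete card, it can simply be moved out of the way (the file name below is only an illustration, adjust it to whatever your setup actually contains):

 # mv /etc/X11/xorg.conf /etc/X11/xorg.conf.bak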
 
[[Install]]:

* {{Pkg|bumblebee}} - The main package providing the daemon and client programs.
* {{Pkg|mesa}} - An open-source implementation of the '''OpenGL''' specification.
* An appropriate version of the NVIDIA driver, see [[NVIDIA#Installation]].
* Optionally install {{Pkg|xf86-video-intel}} - Intel [[Xorg]] driver.

For 32-bit application support, enable the [[multilib]] repository and install:

* {{Pkg|lib32-virtualgl}} - A render/display bridge for 32 bit applications.
* {{Pkg|lib32-nvidia-utils}} or {{AUR|lib32-nvidia-340xx-utils}} (match the version of the regular NVIDIA driver).

In order to use Bumblebee, it is necessary to add your regular ''user'' to the {{ic|bumblebee}} group:
 # gpasswd -a ''user'' bumblebee

Also [[enable]] {{ic|bumblebeed.service}}. Reboot your system and follow [[#Usage]].


{{Note|
* The {{Pkg|bumblebee}} package will install a kernel module blacklist file that prevents the {{ic|nvidia-drm}} module from loading on boot. Remember to uninstall this if you later switch away to other solutions.
* The package does not blacklist the {{ic|nvidiafb}} module. You probably do not have it installed, because the default kernels do not ship it. However, with other kernels you must explicitly blacklist it too, otherwise ''optirun'' and ''primusrun'' will not run. See {{Bug|69018}}.
}}

== Usage ==
=== Test ===

Install {{Pkg|mesa-utils}} and use {{ic|glxgears}} to test if Bumblebee works with your Optimus system:

 $ optirun glxgears -info

If it fails, try the following command (from {{Pkg|virtualgl}}):

 $ optirun glxspheres64

If the window with animation shows up, Optimus with Bumblebee is working.

{{Note|If {{ic|glxgears}} failed, but {{ic|glxspheres64}} worked, always replace {{ic|glxgears}} with {{ic|glxspheres64}} in all cases.}}

=== General usage ===
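
In general, a program is started on the discrete card by prefixing it with ''optirun'' (a minimal sketch; see the options reference below for details):

 $ optirun ''application''

For example, to open the NVIDIA settings panel of the discrete card: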
  $ optirun -b none nvidia-settings -c :8

{{Note|A patched version of {{AUR|nvdock}} is available in the package {{AUR|nvdock-bumblebee}}.}}

For a list of all available options, see {{man|1|optirun}}.

== Configuration ==

=== Optimizing speed ===
One disadvantage of the offscreen rendering methods is performance. The following table gives a raw overview of a [[Lenovo ThinkPad T480]] in an eGPU setup with NVIDIA GTX 1060 6GB and {{AUR|unigine-heaven}} benchmark (1920x1080, max settings, 8x AA):
{| class="wikitable"
! Command !! Display !! FPS !! Score !! Min FPS !! Max FPS
|-
| optirun unigine-heaven || internal || 20.7 || 521 || 6.9 || 26.6
|-
| primusrun unigine-heaven || internal || 36.9 || 930 || 15.3 || 44.1
|-
| unigine-heaven || internal in [[Nvidia-xrun]] || 51.3 || 1293 || 8.4 || 95.6
|-
| unigine-heaven || external in [[Nvidia-xrun]] || 56.1 || 1414 || 8.4 || 111.9
|}


==== Using VirtualGL as bridge ====

Bumblebee renders frames for your Optimus NVIDIA card in an invisible X Server with VirtualGL and transports them back to your visible X Server. Frames will be compressed before they are transported - this saves bandwidth and can be used to optimize the speed of Bumblebee:

To use another compression method for a single application:

 $ optirun -c ''compress-method'' application

The compression method affects how the load is shared between the CPU and the GPU: compressed methods mostly load the CPU, while uncompressed methods mostly load the GPU.

Compressed methods:
:* {{ic|jpeg}}
:* {{ic|rgb}}
:* {{ic|yuv}}

Uncompressed methods:
:* {{ic|proxy}}
:* {{ic|xv}}
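
For example, to force a specific transport for a single run of the test program:

 $ optirun -c yuv glxspheres64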
 
Here is a performance table tested with [[ASUS N550JV]] laptop and benchmark app {{AUR|unigine-heaven}}:


{| class="wikitable"
! Command !! FPS !! Score !! Min FPS !! Max FPS
|-
| optirun -c xv unigine-heaven || 22.9 || 577 || 15.4 || 32.2
|}

{{Note|Lag spikes occurred when {{ic|jpeg}} compression method was used.}}

You can also play with the way VirtualGL reads back the pixels from your graphic card. Setting the {{ic|VGL_READBACK}} [[environment variable]] to {{ic|pbo}} should increase the performance. Compare the following:

PBO should be faster:

 VGL_READBACK=pbo optirun glxgears

The default value is sync:

 VGL_READBACK=sync optirun glxgears

{{Note|CPU frequency scaling directly affects render performance.}}

==== Primusrun ====

{{Note|Since compositing hurts performance, invoking primus when a compositing WM is active is not recommended. See [[#Primus issues under compositing window managers]].}}

''primusrun'' (from {{Pkg|primus}}) is becoming the default choice, because it consumes less power and sometimes provides better performance than {{ic|optirun}}/{{ic|virtualgl}}. It may be run separately, but it does not accept options as {{ic|optirun}} does. Setting {{ic|primus}} as the bridge for {{ic|optirun}} provides more flexibility.
 
For 32-bit applications support on 64-bit machines, install {{Pkg|lib32-primus}} ([[multilib]] must be enabled).
 
You can either run it separately:


 $ primusrun glxgears

Or as a bridge for ''optirun''. The default configuration sets {{ic|virtualgl}} as the bridge. Override that on the command line:
 
$ optirun -b primus glxgears
 
Alternatively, set {{ic|1=Bridge=primus}} in {{ic|/etc/bumblebee/bumblebee.conf}} and you will not have to specify it on the command line.
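
A minimal sketch of that change (only the relevant key is shown; the {{ic|Bridge}} option lives in the {{ic|[optirun]}} section of the file):

{{hc|/etc/bumblebee/bumblebee.conf|2=
[optirun]
Bridge=primus
}}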
 
{{Tip|Refer to [[#Primusrun mouse delay (disable VSYNC)]] if you want to disable {{ic|VSYNC}}. It can also remove mouse input delay lag and slightly increase the performance.}}
 
==== Pvkrun ====
 
{{ic|pvkrun}} from the package {{Pkg|primus_vk}} is a drop-in replacement for {{ic|primusrun}} that makes it possible to run [[Vulkan]]-based applications. A quick check can be done with {{ic|vulkaninfo}} from {{Pkg|vulkan-tools}}.

 $ pvkrun vulkaninfo

=== Power management ===

{{Merge|Hybrid graphics#Using bbswitch|This section talks only about bbswitch which is not specific to Bumblebee.}}

The goal of the power management feature is to turn off the NVIDIA card when it is not used by Bumblebee any more. If {{Pkg|bbswitch}} (for {{Pkg|linux}}) or {{Pkg|bbswitch-dkms}} (for {{Pkg|linux-lts}} or custom kernels) is installed, it will be detected automatically when the Bumblebee daemon starts. No additional configuration is necessary. However, {{Pkg|bbswitch}} is for [https://bugs.launchpad.net/ubuntu/+source/bbswitch/+bug/1338404/comments/6 Optimus laptops only and will not work on desktop computers]. So, Bumblebee power management is not available for desktop computers, and there is no reason to install {{Pkg|bbswitch}} on a desktop. (Nevertheless, the other features of Bumblebee do work on some desktop computers.)
 
To manually turn the card on or off using bbswitch, write into [https://github.com/Bumblebee-Project/bbswitch#turn-the-card-off-respectively-on /proc/acpi/bbswitch]:
 
 # echo OFF > /proc/acpi/bbswitch
 # echo ON > /proc/acpi/bbswitch


==== Default power state of NVIDIA card using bbswitch ====

The default behavior of bbswitch is to leave the card power state unchanged. {{ic|bumblebeed}} does disable the card when started, so the following is only necessary if you use bbswitch without bumblebeed.

Set {{ic|load_state}} and {{ic|unload_state}} [[kernel module parameter]]s according to your needs (see [https://github.com/Bumblebee-Project/bbswitch bbswitch documentation]).

{{hc|/etc/modprobe.d/bbswitch.conf|2=
options bbswitch load_state=0 unload_state=1
}}
To run bbswitch without bumblebeed on system startup, do not forget to add {{ic|bbswitch}} to {{ic|/etc/modules-load.d}}, as explained in [[Kernel module#systemd]].
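
For example (the file name is arbitrary):

{{hc|/etc/modules-load.d/bbswitch.conf|
bbswitch
}}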


==== Enable NVIDIA card during shutdown ====

On some laptops, the NVIDIA card may not correctly initialize during boot if the card was powered off when the system was last shut down. Therefore the Bumblebee daemon will power on the GPU when stopping the daemon (e.g. on shutdown) due to the (default) setting {{ic|1=TurnCardOffAtExit=false}} in {{ic|/etc/bumblebee/bumblebee.conf}}. Note that this setting does not influence the power state while the daemon is running, so if all {{ic|optirun}} or {{ic|primusrun}} programs have exited, the GPU will still be powered off.

When you stop the daemon manually, you might want to keep the card powered off while still powering it on on shutdown. To achieve the latter, add the following [[systemd]] service (if using {{pkg|bbswitch}}):

{{hc|/etc/systemd/system/nvidia-enable.service|2=
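# Minimal sketch of such a unit (verify it matches your setup): power the card back on late in the shutdown sequence.
[Unit]
Description=Enable NVIDIA card
DefaultDependencies=no

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo ON > /proc/acpi/bbswitch'

[Install]
WantedBy=shutdown.target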
}}

Then [[enable]] the {{ic|nvidia-enable.service}} unit.
 
==== Enable NVIDIA card after waking from suspend ====
 
The bumblebee daemon may fail to activate the graphics card after suspending. A possible fix involves setting {{Pkg|bbswitch}} as the default method for power management:
 
{{hc|/etc/bumblebee/bumblebee.conf|2=
[driver-nvidia]
PMMethod=bbswitch
 
[driver-nouveau]
PMMethod=bbswitch
}}
 
{{Note|This fix seems to work only after rebooting the system. Restarting the bumblebee service is not enough.}}
 
If the above fix fails, try the following command:
 
 # echo 1 > /sys/bus/pci/rescan
 
To rescan the PCI bus automatically after a suspend, create a script as described in [[Power management/Suspend and hibernate#Hooks in /usr/lib/systemd/system-sleep]].
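
A sketch of such a hook (the file name is an arbitrary example; remember to make the script executable):

{{hc|/usr/lib/systemd/system-sleep/10-rescan-pci.sh|2=
#!/bin/sh
# systemd passes "pre" or "post" as the first argument; rescan the PCI bus after resume.
case $1 in
    post)
        echo 1 > /sys/bus/pci/rescan
        ;;
esac
}}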


=== Multiple monitors ===

==== Outputs wired to the Intel chip ====
If the port (DisplayPort/HDMI/VGA) is wired to the Intel chip, you can set up multiple monitors with [[xorg.conf]]. Set them to use the Intel card, but Bumblebee can still use the NVIDIA card. One example configuration is below for two identical screens with 1080p resolution and using the HDMI out.


{{hc|/etc/X11/xorg.conf|2=
    SubSection "Display"
        Depth          24
        Modes          "1920x1080_60.00"
    EndSubSection
EndSection

    SubSection "Display"
        Depth          24
        Modes          "1920x1080_60.00"
    EndSubSection
EndSection

    Identifier     "intelgpu0"
    Driver         "intel"
    Option         "UseEvents" "true"
    Option         "AccelMethod" "UXA"

    Identifier     "intelgpu1"
    Driver         "intel"
    Option         "UseEvents" "true"
    Option         "AccelMethod" "UXA"

Section "Device"
    Identifier     "nvidiagpu1"
    Driver         "nvidia"
    BusID          "PCI:0:1:0"
EndSection
}}

You will probably need to change the BusID for both the Intel and the NVIDIA card.

{{hc|$ lspci {{!}} grep VGA|
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
}}

The BusID is 0:2:0. Note that ''lspci'' outputs hexadecimal values, but Xorg expects decimal values.


==== Output wired to the NVIDIA chip ====
On some notebooks, the digital Video Output (HDMI or DisplayPort) is hardwired to the NVIDIA chip. If you want to use all the displays on such a system simultaneously, the easiest solution is to use ''intel-virtual-output'', a tool provided in the {{Pkg|xf86-video-intel}} driver set, as of v2.99. It will allow you to extend the existing X session onto other screens, leveraging virtual outputs to work with the discrete graphics card. Usage is as follows:
 
{{hc|$ intel-virtual-output [OPTION]... [TARGET_DISPLAY]...|
-d <source display>  source display
-f                  keep in foreground (do not detach from console and daemonize)
-b                  start bumblebee
-a                   connect to all local displays (e.g. :1, :2, etc)
-S                  disable use of a singleton and launch a fresh intel-virtual-output process
-v                  all verbose output, implies -f
-V <category>        specific verbose output, implies -f
-h                  this help
}}


If this command alone does not work, you can try running it with optirun to enable the discrete graphics and allow it to detect the outputs accordingly. This is known to be necessary on Lenovo's Legion Y720.


$ optirun intel-virtual-output


If no target displays are given on the command line, ''intel-virtual-output'' will attempt to connect to any local display. The detected displays will be manageable via any desktop display manager such as xrandr or KDE Display. The tool will also start bumblebee (which may be left as a default install). See the [https://github.com/Bumblebee-Project/Bumblebee/wiki/Multi-monitor-setup Bumblebee wiki page] for more information.


When run in a terminal, ''intel-virtual-output'' will daemonize itself unless the {{ic|-f}} switch is used. Games can be run on the external screen by first exporting the display with {{ic|1=export DISPLAY=:8}}, and then running the game with {{ic|optirun ''game_bin''}}; however, cursor and keyboard are not fully captured. Use {{ic|1=export DISPLAY=:0}} to revert to standard operation.
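
For example (''game_bin'' stands for the actual game executable):

 $ export DISPLAY=:8
 $ optirun ''game_bin''
 $ export DISPLAY=:0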


If ''intel-virtual-output'' does not detect displays, or if a {{ic|no VIRTUAL outputs on ":0"}} message is obtained, then [[create]]:


{{hc|/etc/X11/xorg.conf.d/20-intel.conf|
Section "Device"
    Identifier    "intelgpu0"
    Driver         "intel"
EndSection
}}

which does exist by default, and:

{{hc|/etc/bumblebee/xorg.conf.nvidia|
Section "ServerLayout"
    Identifier    "Layout0"
    Option         "AutoAddDevices" "'''true'''"
    Option         "AutoAddGPU" "false"
EndSection

Section "Device"
    Identifier     "DiscreteNvidia"
    Driver         "nvidia"
    VendorName    "NVIDIA Corporation"
    Option        "ProbeAllGpus" "false"
    Option        "NoLogo" "true"
    Option         "UseEDID" "'''true'''"
'''    Option        "AllowEmptyInitialConfiguration"'''
'''#'''    Option        "UseDisplayDevice" "none"
EndSection

'''Section "Screen"'''
'''    Identifier     "Screen0"'''
'''    Device         "DiscreteNvidia"'''
'''EndSection'''
}}
See [https://unix.stackexchange.com/questions/321151/do-not-manage-to-activate-hdmi-on-a-laptop-that-has-optimus-bumblebee] for further configurations to try. If the laptop screen is stretched and the cursor is misplaced while the external monitor shows only the cursor, try killing any running compositing managers.


If you do not want to use ''intel-virtual-output'', another option is to configure Bumblebee to leave the discrete GPU on and directly configure X to use both the screens, as it will be able to detect them.


As a last resort, you can run 2 X Servers. The first will be using the Intel driver for the notebook's screen. The second will be started through optirun on the NVIDIA card, to show on the external display. Make sure to disable any display/session manager before manually starting your desktop environment with optirun. Then, you can log in to the integrated-graphics powered one.
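
For example, most display managers install the standard {{ic|display-manager.service}} alias, so something like the following stops the running one (adjust if your setup differs):

 # systemctl stop display-manager.service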


===== Disabling screen blanking =====


You can disable screen blanking when using ''intel-virtual-output'' with {{ic|xset}} by setting the {{ic|DISPLAY}} environment variable appropriately (see [[DPMS]] for more info):


 $ DISPLAY=:8 xset -dpms s off


=== Multiple NVIDIA graphics cards or NVIDIA Optimus ===


If you have multiple NVIDIA graphics cards (e.g. when using an eGPU with a laptop that has another built-in NVIDIA graphics card) or NVIDIA Optimus, you need to make a minor edit to {{ic|/etc/bumblebee/xorg.conf.nvidia}}. If this change is not made, the daemon may default to using the internal NVIDIA card.


First, determine the BusID of the external card:


{{hc|$ lspci {{!}} grep -E "VGA{{!}}3D"|
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 530 (rev 06)
01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 960M] (rev a2)
0b:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)
}}


In this case, the BusID is {{ic|0b:00.0}}.


Now edit {{ic|/etc/bumblebee/xorg.conf.nvidia}} and add the following line to {{ic|Section "Device"}}:


{{hc|/etc/bumblebee/xorg.conf.nvidia|
Section "Device"
    ...
    BusID          "PCI:11:00:0"
    Option        "AllowExternalGpus" "true"  # If the GPU is external
    ...
EndSection
}}
 
{{Note|Notice that the hex {{ic|0b}} became a base10 {{ic|11}}.}}


== Troubleshooting ==
=== [VGL] ERROR: Could not open display :8 ===


There is a known problem with some wine applications that fork and kill the parent process without keeping track of it (for example the free to play online game "Runes of Magic").


This is a known problem with VirtualGL. As of bumblebee 3.1, so long as you have it installed, you can use Primus as your render bridge:


If using NVIDIA drivers, a fix for this problem is to edit {{ic|/etc/bumblebee/xorg.conf.nvidia}} and change Option {{ic|ConnectedMonitor}} to {{ic|CRT-0}}.
=== Xlib: extension "GLX" missing on display ":0.0" ===
If you tried to install the NVIDIA driver from the NVIDIA website, this is not going to work.
# Uninstall that driver in a similar way: {{bc|# ./NVIDIA-Linux-*.run --uninstall}}
# Remove the Xorg configuration file generated by NVIDIA: {{bc|# rm /etc/X11/xorg.conf}}
# (Re)install the correct NVIDIA driver: See [[#Installation]].


=== [ERROR]Cannot access secondary GPU: No devices detected ===


In this case, you will need to move the file {{ic|/etc/X11/xorg.conf.d/20-intel.conf}} to somewhere else, [[restart]] the bumblebeed daemon and it should work. If you do need to change some features for the Intel module, a workaround is to merge {{ic|/etc/X11/xorg.conf.d/20-intel.conf}} to {{ic|/etc/X11/xorg.conf}}.
 
It could also be necessary to comment the driver line in {{ic|/etc/X11/xorg.conf.d/10-monitor.conf}}.


If you are using the {{ic|nouveau}} driver you could try switching to the {{ic|nvidia}} driver.


You might need to define the NVIDIA card somewhere (e.g. file {{ic|/etc/bumblebee/xorg.conf.nvidia}}), using the correct {{ic|BusID}} according to {{ic|lspci}} output:


{{bc|
Section "Device"
    Identifier     "nvidiagpu1"
    Driver         "nvidia"
    BusID          "PCI:0:1:0"
EndSection
}}
Observe that the format of {{ic|lspci}} output is in HEX, while in xorg it is in decimals. So if the output of {{ic|lspci}} is, for example, {{ic|0a:00.0}} the {{ic|BusID}} should be {{ic|PCI:10:0:0}}.


==== NVIDIA(0): Failed to assign any connected display devices to X screen 0 ====
  [ERROR]Aborting because fallback start is disabled.


If the following line in {{ic|/etc/bumblebee/xorg.conf.nvidia}} does not exist, you can add it to the "Device" section:


  Option "ConnectedMonitor" "DFP"


If it does already exist, you can try changing it to:


  Option "ConnectedMonitor" "CRT"


After that, restart the Bumblebee service to apply these changes.

==== Failed to initialize the NVIDIA GPU at PCI:1:0:0 (GPU fallen off the bus / RmInitAdapter failed!) ====

Add {{ic|1=rcutree.rcu_idle_gp_delay=1}} to the [[kernel parameters]] of the [[boot loader]] configuration (see also the original [https://bbs.archlinux.org/viewtopic.php?id=169742 BBS post] for a configuration example).

==== Failed to initialize the NVIDIA GPU at PCI:1:0:0 (Bumblebee daemon reported: error: [XORG] (EE) NVIDIA(GPU-0)) ====

You might encounter an issue where, after resuming from sleep, the {{ic|primusrun}} or {{ic|optirun}} command does not work anymore. There are two ways to fix this issue: reboot your system or execute the following command:

 # echo 1 > /sys/bus/pci/rescan

Then test if {{ic|primusrun}} or {{ic|optirun}} works.

If the above command did not help, try finding your NVIDIA card's bus ID:

{{hc|$ lspci {{!}} grep VGA|
00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02)
'''01:00.0''' VGA compatible controller: nVidia Corporation Device 0df4 (rev a1)
}}

For example, the above command showed {{ic|01:00.0}}, so we use the following commands with this bus ID:
 # echo 1 > /sys/bus/pci/devices/0000:'''01:00.0'''/remove
 
  # echo 1 > /sys/bus/pci/rescan
==== Failed to initialize the NVIDIA GPU at PCI:1:0:0 (GPU fallen off the bus / RmInitAdapter failed!) ====
 
Add {{ic|1=rcutree.rcu_idle_gp_delay=1}} to the [[kernel parameters]] of the [[Bootloaders|bootloader]] configuration (see also the original [https://bbs.archlinux.org/viewtopic.php?id=169742 BBS post] for a configuration example).


==== Could not load GPU driver ====
==== Could not load GPU driver ====

 [ERROR]Cannot access secondary GPU - error: Could not load GPU driver

and if you try to load the nvidia module:

{{hc|# modprobe nvidia|
modprobe: ERROR: could not insert 'nvidia': Exec format error
}}

This could be because the nvidia driver is out of sync with the Linux kernel, for example if you installed the latest nvidia driver and have not updated the kernel in a while. A full system update, followed by a reboot into the updated kernel, might resolve the issue. If the problem persists, you should try manually compiling the nvidia packages against your current kernel, for example with {{Pkg|nvidia-dkms}} or by compiling {{pkg|nvidia}} from the [[ABS]].

==== NOUVEAU(0): [drm] failed to set drm interface version ====
==== NOUVEAU(0): [drm] failed to set drm interface version ====


Consider switching to the official nvidia driver. As commented [https://github.com/Bumblebee-Project/Bumblebee/issues/438#issuecomment-22005923 here], nouveau driver has some issues with some cards and bumblebee.
Consider switching to the official nvidia driver. As commented [https://github.com/Bumblebee-Project/Bumblebee/issues/438#issuecomment-22005923 here], nouveau driver has some issues with some cards and bumblebee.
=== [ERROR]Cannot access secondary GPU - error: X did not start properly ===
Set the {{ic|"AutoAddDevices"}} option to {{ic|"true"}} in {{ic|/etc/bumblebee/xorg.conf.nvidia}} (see [https://github.com/Bumblebee-Project/Bumblebee/issues/88 here]):
{{bc|
Section "ServerLayout"
    Identifier    "Layout0"
    Option        "AutoAddDevices" "true"
    Option        "AutoAddGPU" "false"
EndSection
}}


=== /dev/dri/card0: failed to set DRM interface version 1.4: Permission denied ===
=== /dev/dri/card0: failed to set DRM interface version 1.4: Permission denied ===
This could be worked around by appending following lines in {{ic|/etc/bumblebee/xorg.conf.nvidia}} (see [https://github.com/Bumblebee-Project/Bumblebee/issues/580 here]):
This could be worked around by appending following lines in {{ic|/etc/bumblebee/xorg.conf.nvidia}} (see [https://github.com/Bumblebee-Project/Bumblebee/issues/580 here]):
{{bc|
{{bc|
Section "Screen"
Section "Screen"
     Identifier "Default Screen"
     Identifier     "Default Screen"
     Device "DiscreteNvidia"
     Device         "DiscreteNvidia"
EndSection
EndSection
}}
}}
Line 546: Line 566:
=== ERROR: ld.so: object 'libdlfaker.so' from LD_PRELOAD cannot be preloaded: ignored ===
You probably want to start a 32-bit application with bumblebee on a 64-bit system. See the "For 32-bit..." section in [[#Installation]]. If the problem persists or if it is a 64-bit application, try using the [[#Primusrun|primus bridge]].


=== Fatal IO error 11 (Resource temporarily unavailable) on X server ===

Change {{ic|KeepUnusedXServer}} in {{ic|/etc/bumblebee/bumblebee.conf}} from {{ic|false}} to {{ic|true}}. Your program forks into background and bumblebee does not know anything about it.


=== Video tearing ===
Video tearing is a somewhat common problem on Bumblebee. To fix it, you need to enable vsync. It should be enabled by default on the Intel card, but verify that from Xorg logs. To check whether or not it is enabled for NVIDIA, make sure {{Pkg|nvidia-settings}} is installed and run:


  $ optirun nvidia-settings -c :8
{{ic|1=X Server XVideo Settings -> Sync to VBlank}} and {{ic|1=OpenGL Settings -> Sync to VBlank}} should both be enabled. The Intel card has in general less tearing, so use it for video playback. Especially use VA-API for video decoding (e.g. {{ic|mplayer-vaapi}} and with {{ic|-vsync}} parameter).


Refer to [[Intel graphics#Tearing]] on how to fix tearing on the Intel card.


If it is still not fixed, try to disable compositing from your desktop environment. Try also disabling triple buffering.


  $ optirun glxspheres64
or (for 32 bit):
{{hc|$ optirun glxspheres32|
[ 1648.179533] [ERROR]You have no permission to communicate with the Bumblebee daemon. Try adding yourself to the 'bumblebee' group
[ 1648.179628] [ERROR]Could not connect to bumblebee daemon - is it running?
}}


If you are already in the {{ic|bumblebee}} group ({{ic|groups {{!}} grep bumblebee}}), you may try [https://bbs.archlinux.org/viewtopic.php?pid=1178729#p1178729 removing the socket] {{ic|/var/run/bumblebeed.socket}}.
 
Another reason for this error could be that you have not actually turned on both GPUs in your BIOS, and as a result, the Bumblebee daemon is in fact not running. Check the BIOS settings carefully and be sure Intel graphics (integrated graphics - may be abbreviated in BIOS as something like igfx) has been enabled or set to auto, and that it is the primary GPU. Your display should be connected to the onboard integrated graphics, not the discrete graphics card.
 
If you mistakenly had the display connected to the discrete graphics card and Intel graphics was disabled, you probably installed Bumblebee after first trying to run NVIDIA alone. In this case, be sure to remove the {{ic|/etc/X11/xorg.conf}} or {{ic|/etc/X11/xorg.conf.d/20-nvidia.conf}} configuration files. If Xorg is instructed to use NVIDIA in a configuration file, X will fail.


=== Running X.org from console after login (rootless X.org) ===


See [[Xorg#Rootless Xorg]].


=== Using Primus causes a segmentation fault ===

In some instances, using primusrun instead of optirun will result in a segfault. This is caused by an issue in the code that auto-detects the faster upload method, see {{Bug|58933}}.

The workaround is skipping auto-detection by manually setting the {{ic|PRIMUS_UPLOAD}} [[environment variable]] to either 1 or 2, depending on which one is faster on your setup.

 $ PRIMUS_UPLOAD=1 primusrun ...

=== Primusrun mouse delay (disable VSYNC) ===

For {{ic|primusrun}}, {{ic|VSYNC}} is enabled by default and as a result, it could make mouse input delay lag or even slightly decrease performance. Test {{ic|primusrun}} with {{ic|VSYNC}} disabled:

 $ vblank_mode=0 primusrun glxgears

If you are satisfied with the above setting, create an [[alias]] (e.g. {{ic|1=alias primusrun="vblank_mode=0 primusrun"}}).

Performance comparison:


{| class="wikitable"
! VSYNC enabled !! FPS !! Score !! Min FPS !! Max FPS
|-
| FALSE || 31.5 || 793 || 22.3 || 54.8
|-
| TRUE || 31.4 || 792 || 18.7 || 54.2
|}
''Tested with [[ASUS N550JV]] notebook and benchmark app {{AUR|unigine-heaven}}.''
 
{{Note|To disable vertical synchronization system-wide, see [[Intel graphics#Disable Vertical Synchronization (VSYNC)]].}}
 
=== Primus issues under compositing window managers ===
 
Since compositing hurts performance, invoking primus when a compositing WM is active is not recommended.[https://github.com/amonakov/primus#issues-under-compositing-wms]
If you need to use primus with compositing and see flickering or bad performance, synchronizing primus' display thread with the application's rendering thread may help:
 
$ PRIMUS_SYNC=1 primusrun ...
 
This makes primus display the previously rendered frame.
 
=== Problems with bumblebee after resuming from standby ===
 
On some systems, it can happen that the nvidia module is loaded after resuming from standby.
One possible solution for this is to install the {{pkg|acpi_call}} and {{pkg|acpi}} packages.
 
=== Optirun does not work, no debug output ===
 
Users are reporting that in some cases, even though Bumblebee was installed correctly, running
 
$ optirun glxgears -info
 
gives no output at all, and the glxgears window does not appear. Any program that needs 3D acceleration crashes:
 
 $ optirun bash
 $ glxgears
 Segmentation fault (core dumped)
 
Apparently it is a bug in some versions of virtualgl. A workaround is to [[install]] {{Pkg|primus}} and {{Pkg|lib32-primus}} and use primus instead:
 
 $ primusrun glxspheres64
 $ optirun -b primus glxspheres64
 
By default primus locks the framerate to the refresh rate of your monitor (usually 60 FPS); if needed, it can be unlocked by passing the {{ic|1=vblank_mode=0}} environment variable.
 
$ vblank_mode=0 primusrun glxspheres64
 
Usually there is no need to display more frames than your monitor can handle, but you might want to for benchmarking or to have faster reactions in games (e.g., if a game needs 3 frames to react to a mouse movement, with {{ic|1=vblank_mode=0}} the reaction will be as quick as your system can handle, while without it it will always take 1/20 of a second).
 
You might want to edit {{ic|/etc/bumblebee/bumblebee.conf}} to use the primus bridge by default. If, after an update, you want to check whether the bug has been fixed, just use {{ic|optirun -b virtualgl}}.
 
See [https://bbs.archlinux.org/viewtopic.php?pid=1643609 this forum post] for more information.
 
=== Broken power management with kernel 4.8 ===
 
{{Merge|Hybrid graphics#Using bbswitch|Keep all info about bbswitch in one place.}}
 
If you have a newer laptop (BIOS date 2015 or newer), then Linux 4.8 might break bbswitch ([https://github.com/Bumblebee-Project/bbswitch/issues/140 bbswitch issue 140]) since bbswitch does not support the newer, recommended power management method. As a result, the GPU may fail to power on, fail to power off or worse.
 
As a workaround, add {{ic|1=pcie_port_pm=off}} to your [[Kernel parameters]].
 
Alternatively, if you are only interested in power saving (and perhaps use of external monitors), remove bbswitch and rely on [[Nouveau]] runtime power-management (which supports the new method).
 
{{Note|Some tools such as {{ic|powertop --auto-tune}} automatically enable power management on PCI devices, which leads to the same problem [https://github.com/Bumblebee-Project/bbswitch/issues/159]. Use the same workaround or do not use the all-in-one power management tools.}}
 
=== Lockup issue (lspci hangs) ===
 
See [[NVIDIA Optimus#Lockup issue (lspci hangs)]] for an issue that affects new laptops with a GTX 965M (or alike).
 
=== Discrete card always on and acpi warnings ===
 
Add {{ic|1=acpi_osi=Linux}} to your [[Kernel parameters]].
See [https://github.com/Bumblebee-Project/Bumblebee/issues/592] and [https://github.com/Bumblebee-Project/bbswitch/issues/112] for more information.
 
=== Screen 0 deleted because of no matching config section ===
 
Modify the configuration as follows:
 
{{hc|/etc/bumblebee/xorg.conf.nvidia|
...
Section "ServerLayout"
...
    Screen 0      "nvidia"
...
EndSection
...
Section "Screen"
    Identifier    "nvidia"
    Device        "DiscreteNvidia"
EndSection
...
}}
 
=== Erratic, unpredictable behaviour ===
 
If Bumblebee starts/works in a random manner, check that you have set your [[Network configuration#Local network hostname resolution]] (details [https://github.com/Bumblebee-Project/Bumblebee/pull/939 here]).
 
=== Discrete card always on and nvidia driver cannot be unloaded ===
 
Make sure {{ic|nvidia-persistenced.service}} is disabled and not currently active. It is intended to keep the {{ic|nvidia}} driver running at all times [https://us.download.nvidia.com/XFree86/Linux-x86_64/367.57/README/nvidia-persistenced.html], which prevents the card being turned off.
 
=== Discrete card is silently activated when EGL is requested by some application ===
 
If the discrete card is activated by some program (e.g. {{Pkg|mpv}} with its GPU backend), it might stays on.
The problem might be {{ic|libglvnd}} which is loading the {{Pkg|nvidia}} drivers and activating the card.
 
To disable this set environment variable {{ic|__EGL_VENDOR_LIBRARY_FILENAMES}} (see [https://github.com/NVIDIA/libglvnd/blob/master/src/EGL/icd_enumeration.md documentation]) to only load mesa configuration file:
 
__EGL_VENDOR_LIBRARY_FILENAMES="/usr/share/glvnd/egl_vendor.d/50_mesa.json"
 
{{Pkg|nvidia-utils}} (and its branches) is installing the configuration file at {{ic|/usr/share/glvnd/egl_vendor.d/10_nvidia.json}} which has priority and causes libglvnd to load the {{Pkg|nvidia}} drivers and enable the card.
 
The other solution is to [[Pacman#Skip files from being installed to system|avoid installing]] the configuration file provided by {{Pkg|nvidia-utils}}.
 
=== Framerate drops to 1 FPS after a fixed period of time ===
 
With the nvidia 440.36 driver, the [https://devtalk.nvidia.com/default/topic/1067676/linux/440-36-with-bumblebee-drops-to-1-fps-after-running-for-10-minutes/post/5409047/#5409047 DPMS setting is enabled by default] resulting in a timeout after a fixed period of time (e.g. 10 minutes) which causes the frame rate to throttle down to 1 FPS. To work around this, add the following line to the "Device" section in {{ic|/etc/bumblebee/xorg.conf.nvidia}}
 
{{bc|Option "HardDPMS" "false"}}
 
=== Application cannot record screen ===
 
Using Bumblebee, applications cannot access the screen to identify and record it. This happens, for example, using {{Pkg|obs-studio}} with NVENC activated. To solve this, disable the bridging mode with {{ic|optirun -b none command}}.


== See also ==
== See also ==


* [http://www.bumblebee-project.org Bumblebee project repository]
* [https://www.bumblebee-project.org/ Bumblebee project repository]{{Dead link|2022|09|17|status=SSL error}}
* [http://wiki.bumblebee-project.org/ Bumblebee project wiki]
* [https://github.com/Bumblebee-Project/Bumblebee/wiki Bumblebee project wiki]
* [https://github.com/Bumblebee-Project/bbswitch Bumblebee project bbswitch repository]
* [https://github.com/Bumblebee-Project/bbswitch Bumblebee project bbswitch repository]
Join us at #bumblebee at freenode.net.

Latest revision as of 21:59, 5 March 2024

From Bumblebee's FAQ:

Bumblebee is an effort to make NVIDIA Optimus enabled laptops work in GNU/Linux systems. Such feature involves two graphics cards with two different power consumption profiles plugged in a layered way sharing a single framebuffer.
Note: Bumblebee has significant performance issues[1][2]. See NVIDIA Optimus for alternative solutions.

Bumblebee: Optimus for Linux

Optimus Technology is a hybrid graphics implementation without a hardware multiplexer. The integrated GPU manages the display while the dedicated GPU manages the most demanding rendering and ships the work to the integrated GPU to be displayed. When the laptop is running on battery supply, the dedicated GPU is turned off to save power and prolong the battery life. It has also been tested successfully with desktop machines with Intel integrated graphics and an nVidia dedicated graphics card.

Bumblebee is a software implementation comprising two parts:

  • Render programs off-screen on the dedicated video card and display it on the screen using the integrated video card. This bridge is provided by VirtualGL or primus (read further) and connects to a X server started for the discrete video card.
  • Disable the dedicated video card when it is not in use (see the #Power management section)

It tries to mimic the Optimus technology behavior: use the dedicated GPU for rendering when needed and power it down when not in use. The present releases only support rendering on demand; automatically starting a program on the discrete video card based on workload is not implemented.

Installation

Before installing Bumblebee, check your BIOS and activate Optimus (older laptops call it "switchable graphics") if possible; not every BIOS provides this option. If neither "Optimus" nor "switchable" appears in the BIOS, still make sure both GPUs are enabled and that the integrated graphics (igfx) is the initial (primary) display. The display should be connected to the onboard integrated graphics, not the discrete graphics card. If integrated graphics had previously been disabled and discrete graphics drivers installed, be sure to remove /etc/X11/xorg.conf or the configuration file in /etc/X11/xorg.conf.d related to the discrete graphics card.

Install:

  • bumblebee - The main package providing the daemon and client programs.
  • mesa - An open-source implementation of the OpenGL specification.
  • An appropriate version of the NVIDIA driver, see NVIDIA#Installation.
  • Optionally install xf86-video-intel - Intel Xorg driver.

For 32-bit application support, enable the multilib repository and install:

In order to use Bumblebee, it is necessary to add your regular user to the bumblebee group:

# gpasswd -a user bumblebee

Also enable bumblebeed.service. Reboot your system and follow #Usage.
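For example, the daemon can be enabled to start at boot with:

# systemctl enable bumblebeed.service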

Note:
  • The bumblebee package will install a kernel module blacklist file that prevents the nvidia-drm module from loading on boot. Remember to remove this if you later switch to another solution.
  • The package does not blacklist the nvidiafb module. You probably do not have it installed, because the default kernels do not ship it. However, with other kernels you must explicitly blacklist it too, otherwise optirun and primusrun will not run; a minimal blacklist entry is sketched below. See FS#69018.
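A sketch of such a blacklist entry, using the standard modprobe blacklist syntax (the file name is arbitrary, only the .conf suffix matters):

/etc/modprobe.d/blacklist-nvidiafb.conf
blacklist nvidiafb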

Usage

Test

Install mesa-utils and use glxgears to test if Bumblebee works with your Optimus system:

$ optirun glxgears -info

If it fails, try the following command (from virtualgl):

$ optirun glxspheres64

If the window with animation shows up, Optimus with Bumblebee is working.

Note: If glxgears failed, but glxspheres64 worked, always replace glxgears with glxspheres64 in all cases.

General usage

$ optirun [options] application [application-parameters]

For example, start Windows applications with Optimus:

$ optirun wine application.exe

For another example, open NVIDIA Settings panel with Optimus:

$ optirun -b none nvidia-settings -c :8
Note: A patched version of nvdockAUR is available in the package nvdock-bumblebeeAUR.

For a list of all available options, see optirun(1).

Configuration

You can configure the behaviour of Bumblebee to fit your needs. Fine-tuning such as speed optimization and power management is done in /etc/bumblebee/bumblebee.conf.
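The file is an INI-style configuration split into sections. An illustrative excerpt, using only options discussed elsewhere on this page (your defaults may differ):

/etc/bumblebee/bumblebee.conf
[bumblebeed]
KeepUnusedXServer=false

[optirun]
Bridge=auto
VGLTransport=proxy

[driver-nvidia]
PMMethod=auto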

Optimizing speed

One disadvantage of the offscreen rendering methods is performance. The following table gives a raw overview of a Lenovo ThinkPad T480 in an eGPU setup with NVIDIA GTX 1060 6GB and unigine-heavenAUR benchmark (1920x1080, max settings, 8x AA):

Command                       | Display  | FPS  | Score | Min FPS | Max FPS
optirun unigine-heaven        | internal | 20.7 | 521   | 6.9     | 26.6
primusrun unigine-heaven      | internal | 36.9 | 930   | 15.3    | 44.1
unigine-heaven in Nvidia-xrun | internal | 51.3 | 1293  | 8.4     | 95.6
unigine-heaven in Nvidia-xrun | external | 56.1 | 1414  | 8.4     | 111.9

Using VirtualGL as bridge

Bumblebee renders frames for your Optimus NVIDIA card in an invisible X server with VirtualGL and transports them back to your visible X server. Frames are compressed before they are transported; this saves bandwidth and can be used to tune Bumblebee for speed:

To use another compression method for a single application:

$ optirun -c compress-method application

The compression method affects performance and GPU/CPU usage: compressed methods mostly load the CPU, whereas uncompressed methods mostly load the GPU.

Compressed methods:

  • jpeg
  • rgb
  • yuv

Uncompressed methods:

  • proxy
  • xv
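To compare the methods on your own hardware, you can run a short benchmark with each of them in turn. A minimal sketch, assuming glxspheres64 from virtualgl is installed (any OpenGL benchmark will do):

$ for method in jpeg rgb yuv proxy xv; do
    echo "== $method =="
    # glxspheres64 prints its frame rate; close its window to move on to the next method
    optirun -c "$method" glxspheres64
  done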

Here is a performance table tested with an ASUS N550JV laptop and the benchmark app unigine-heavenAUR:

Command                         | FPS  | Score | Min FPS | Max FPS
optirun unigine-heaven          | 25.0 | 630   | 16.4    | 36.1
optirun -c jpeg unigine-heaven  | 24.2 | 610   | 9.5     | 36.8
optirun -c rgb unigine-heaven   | 25.1 | 632   | 16.6    | 35.5
optirun -c yuv unigine-heaven   | 24.9 | 626   | 16.5    | 35.8
optirun -c proxy unigine-heaven | 25.0 | 629   | 16.0    | 36.1
optirun -c xv unigine-heaven    | 22.9 | 577   | 15.4    | 32.2
Note: Lag spikes occurred when jpeg compression method was used.

To use a standard compression for all applications, set the VGLTransport to compress-method in /etc/bumblebee/bumblebee.conf:

/etc/bumblebee/bumblebee.conf
[...]
[optirun]
VGLTransport=proxy
[...]

You can also change the way VirtualGL reads back the pixels from your graphics card. Setting the VGL_READBACK environment variable to pbo should increase performance. Compare the following:

PBO should be faster:

VGL_READBACK=pbo optirun glxgears

The default value is sync:

VGL_READBACK=sync optirun glxgears
Note: CPU frequency scaling directly affects render performance.

Primusrun

Note: Since compositing hurts performance, invoking primus when a compositing WM is active is not recommended. See #Primus issues under compositing window managers.

primusrun (from primus) is becoming the default choice, because it consumes less power and sometimes provides better performance than optirun/virtualgl. It may be run separately, but it does not accept options as optirun does. Setting primus as the bridge for optirun provides more flexibility.

For 32-bit applications support on 64-bit machines, install lib32-primus (multilib must be enabled).

You can either run it separately:

$ primusrun glxgears

Or as a bridge for optirun. The default configuration sets virtualgl as the bridge. Override that on the command line:

$ optirun -b primus glxgears

Alternatively, set Bridge=primus in /etc/bumblebee/bumblebee.conf and you will not have to specify it on the command line.
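That is, something along these lines (in the default file, Bridge lives in the [optirun] section):

/etc/bumblebee/bumblebee.conf
[optirun]
Bridge=primus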

Tip: Refer to #Primusrun mouse delay (disable VSYNC) if you want to disable VSYNC. It can also remove mouse input delay lag and slightly increase the performance.

Pvkrun

pvkrun from the package primus_vk is a drop-in replacement for primusrun that makes it possible to run Vulkan-based applications. A quick check can be done with vulkaninfo from vulkan-tools.

$ pvkrun vulkaninfo

Power management

This article or section is a candidate for merging with Hybrid graphics#Using bbswitch.

Notes: This section talks only about bbswitch which is not specific to Bumblebee. (Discuss in Talk:Bumblebee)

The goal of the power management feature is to turn off the NVIDIA card when it is not used by Bumblebee any more. If bbswitch (for linux) or bbswitch-dkms (for linux-lts or custom kernels) is installed, it will be detected automatically when the Bumblebee daemon starts. No additional configuration is necessary. However, bbswitch is for Optimus laptops only and will not work on desktop computers. So, Bumblebee power management is not available for desktop computers, and there is no reason to install bbswitch on a desktop. (Nevertheless, the other features of Bumblebee do work on some desktop computers.)

To manually turn the card on or off using bbswitch, write into /proc/acpi/bbswitch:

# echo OFF > /proc/acpi/bbswitch
# echo ON > /proc/acpi/bbswitch
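The current state can be read back from the same file (the PCI address will match your discrete card), for example:

$ cat /proc/acpi/bbswitch
0000:01:00.0 OFF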

Default power state of NVIDIA card using bbswitch

The default behavior of bbswitch is to leave the card power state unchanged. bumblebeed does disable the card when started, so the following is only necessary if you use bbswitch without bumblebeed.

Set load_state and unload_state kernel module parameters according to your needs (see bbswitch documentation).

/etc/modprobe.d/bbswitch.conf
options bbswitch load_state=0 unload_state=1

To run bbswitch without bumblebeed on system startup, do not forget to add bbswitch to /etc/modules-load.d, as explained in Kernel module#systemd.
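For example (the file name is arbitrary, only the .conf suffix matters):

/etc/modules-load.d/bbswitch.conf
bbswitch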

Enable NVIDIA card during shutdown

On some laptops, the NVIDIA card may not initialize correctly during boot if it was powered off when the system was last shut down. Therefore the Bumblebee daemon powers the GPU back on when the daemon is stopped (e.g. on shutdown), due to the (default) setting TurnCardOffAtExit=false in /etc/bumblebee/bumblebee.conf. Note that this setting does not influence the power state while the daemon is running, so if all optirun or primusrun programs have exited, the GPU will still be powered off.

When you stop the daemon manually, you might want to keep the card powered off while still powering it on at shutdown. To achieve the latter, add the following systemd service (if using bbswitch):

/etc/systemd/system/nvidia-enable.service
[Unit]
Description=Enable NVIDIA card
DefaultDependencies=no

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo ON > /proc/acpi/bbswitch'

[Install]
WantedBy=shutdown.target

Then enable the nvidia-enable.service unit.

Enable NVIDIA card after waking from suspend

The bumblebee daemon may fail to activate the graphics card after suspending. A possible fix involves setting bbswitch as the default method for power management:

/etc/bumblebee/bumblebee.conf
[driver-nvidia]
PMMethod=bbswitch

[driver-nouveau]
PMMethod=bbswitch
Note: This fix seems to work only after rebooting the system. Restarting the bumblebee service is not enough.

If the above fix fails, try the following command:

# echo 1 > /sys/bus/pci/rescan

To rescan the PCI bus automatically after a suspend, create a script as described in Power management/Suspend and hibernate#Hooks in /usr/lib/systemd/system-sleep.
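A minimal sketch of such a hook, assuming the hypothetical file name 99-rescan-pci.sh (the script must be executable; systemd-sleep calls it with "pre" or "post" as its first argument):

/usr/lib/systemd/system-sleep/99-rescan-pci.sh
#!/bin/sh
# After waking up ("post"), rescan the PCI bus so the NVIDIA card is re-detected.
if [ "$1" = "post" ]; then
    echo 1 > /sys/bus/pci/rescan
fi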

Multiple monitors

Outputs wired to the Intel chip

If the port (DisplayPort/HDMI/VGA) is wired to the Intel chip, you can set up multiple monitors with xorg.conf. Set them to use the Intel card, but Bumblebee can still use the NVIDIA card. One example configuration is below for two identical screens with 1080p resolution and using the HDMI out.

/etc/X11/xorg.conf
Section "Screen"
    Identifier     "Screen0"
    Device         "intelgpu0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "TwinView" "0"
    SubSection "Display"
        Depth          24
        Modes          "1920x1080_60.00"
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen1"
    Device         "intelgpu1"
    Monitor        "Monitor1"
    DefaultDepth   24
    Option         "TwinView" "0"
    SubSection "Display"
        Depth          24
        Modes          "1920x1080_60.00"
    EndSubSection
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    Option         "Enable" "true"
EndSection

Section "Monitor"
    Identifier     "Monitor1"
    Option         "Enable" "true"
EndSection

Section "Device"
    Identifier     "intelgpu0"
    Driver         "intel"
    Option         "UseEvents" "true"
    Option         "AccelMethod" "UXA"
    BusID          "PCI:0:2:0"
EndSection

Section "Device"
    Identifier     "intelgpu1"
    Driver         "intel"
    Option         "UseEvents" "true"
    Option         "AccelMethod" "UXA"
    BusID          "PCI:0:2:0"
EndSection

Section "Device"
    Identifier     "nvidiagpu1"
    Driver         "nvidia"
    BusID          "PCI:0:1:0"
EndSection

You probably need to change the BusID for both the Intel and the NVIDIA card.

$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)

The BusID is 0:2:0. Note that lspci outputs hexadecimal values, but Xorg expects decimal values.

Output wired to the NVIDIA chip

On some notebooks, the digital Video Output (HDMI or DisplayPort) is hardwired to the NVIDIA chip. If you want to use all the displays on such a system simultaneously, the easiest solution is to use intel-virtual-output, a tool provided in the xf86-video-intel driver set, as of v2.99. It will allow you to extend the existing X session onto other screens, leveraging virtual outputs to work with the discrete graphics card. Usage is as follows:

$ intel-virtual-output [OPTION]... [TARGET_DISPLAY]...
-d <source display>  source display
-f                   keep in foreground (do not detach from console and daemonize)
-b                   start bumblebee
-a                   connect to all local displays (e.g. :1, :2, etc)
-S                   disable use of a singleton and launch a fresh intel-virtual-output process
-v                   all verbose output, implies -f
-V <category>        specific verbose output, implies -f
-h                   this help

If this command alone does not work, you can try running it with optirun to enable the discrete graphics and allow it to detect the outputs accordingly. This is known to be necessary on Lenovo's Legion Y720.

$ optirun intel-virtual-output

If no target displays are given on the command line, intel-virtual-output will attempt to connect to any local display. The detected displays can then be managed with xrandr or a desktop display configuration tool such as KDE's display settings. The tool will also start bumblebee (which may be left at its default installation). See the Bumblebee wiki page for more information.

When run in a terminal, intel-virtual-output will daemonize itself unless the -f switch is used. Games can be run on the external screen by first exporting the display with export DISPLAY=:8 and then running the game with optirun game_bin; however, the cursor and keyboard are not fully captured. Use export DISPLAY=:0 to revert to standard operation.
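Put together (game_bin is a placeholder for your game's executable):

$ export DISPLAY=:8    # target the virtual display provided by intel-virtual-output
$ optirun game_bin     # run the game on the discrete card
$ export DISPLAY=:0    # revert to the normal display afterwards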

If intel-virtual-output does not detect displays, or if a no VIRTUAL outputs on ":0" message is obtained, then create:

/etc/X11/xorg.conf.d/20-intel.conf
Section "Device"
    Identifier     "intelgpu0"
    Driver         "intel"
EndSection

which does not exist by default, and:

/etc/bumblebee/xorg.conf.nvidia
Section "ServerLayout"
    Identifier     "Layout0"
    Option         "AutoAddDevices" "true"
    Option         "AutoAddGPU" "false"
EndSection

Section "Device"
    Identifier     "DiscreteNvidia"
    Driver         "nvidia"
    VendorName     "NVIDIA Corporation"
    Option         "ProbeAllGpus" "false"
    Option         "NoLogo" "true"
    Option         "UseEDID" "true"
    Option         "AllowEmptyInitialConfiguration"
#    Option         "UseDisplayDevice" "none"
EndSection

Section "Screen"
    Identifier     "Screen0"
    Device         "DiscreteNvidia"
EndSection

See [3] for further configurations to try. If the laptop screen is stretched and the cursor is misplaced while the external monitor shows only the cursor, try killing any running compositing managers.

If you do not want to use intel-virtual-output, another option is to configure Bumblebee to leave the discrete GPU on and directly configure X to use both the screens, as it will be able to detect them.

As a last resort, you can run two X servers. The first uses the Intel driver for the notebook's screen; the second is started through optirun on the NVIDIA card and shows on the external display. Make sure to disable any display/session manager before manually starting your desktop environment with optirun. Then, you can log in to the integrated-graphics powered one.

Disabling screen blanking

You can disable screen blanking when using intel-virtual-output with xset by setting the DISPLAY environment variable appropriately (see DPMS for more info):

$ DISPLAY=:8 xset -dpms s off

Multiple NVIDIA graphics cards or NVIDIA Optimus

If you have multiple NVIDIA graphics cards (e.g. when using an eGPU with a laptop that has another built-in NVIDIA graphics card) or NVIDIA Optimus, you need to make a minor edit to /etc/bumblebee/xorg.conf.nvidia. If this change is not made, the daemon may default to using the internal NVIDIA card.

First, determine the BusID of the external card:

$ lspci | grep -E "VGA|3D"
00:02.0 VGA compatible controller: Intel Corporation HD Graphics 530 (rev 06)
01:00.0 3D controller: NVIDIA Corporation GM107M [GeForce GTX 960M] (rev a2)
0b:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)

In this case, the BusID is 0b:00.0.

Now edit /etc/bumblebee/xorg.conf.nvidia and add the following line to Section "Device":

/etc/bumblebee/xorg.conf.nvidia
Section "Device"
    ...
    BusID          "PCI:11:00:0"
    Option         "AllowExternalGpus" "true"  # If the GPU is external
    ...
EndSection
Note: Notice that the hexadecimal 0b became a base-10 11.
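If you prefer not to convert by hand, the shell can do it. A quick sketch using printf, which accepts 0x-prefixed hexadecimal numbers (leading zeros are not significant to Xorg):

$ printf 'PCI:%d:%d:%d\n' 0x0b 0x00 0x0
PCI:11:0:0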

Troubleshooting

Note: Please report bugs at Bumblebee-Project's GitHub tracker as described in its wiki.

[VGL] ERROR: Could not open display :8

There is a known problem with some wine applications that fork and kill the parent process without keeping track of it (for example the free-to-play online game "Runes of Magic").

This is a known problem with VirtualGL. As of bumblebee 3.1, so long as you have it installed, you can use Primus as your render bridge:

$ optirun -b primus wine windows program.exe

If this does not work, an alternative workaround for this problem is:

$ optirun bash
$ optirun wine windows program.exe

If using NVIDIA drivers, a fix for this problem is to edit /etc/bumblebee/xorg.conf.nvidia and change Option ConnectedMonitor to CRT-0.
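That is, in the "Device" section of that file (a sketch; keep your other options as they are):

/etc/bumblebee/xorg.conf.nvidia
Section "Device"
    ...
    Option         "ConnectedMonitor" "CRT-0"
    ...
EndSection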

Xlib: extension "GLX" missing on display ":0.0"

If you tried to install the NVIDIA driver from NVIDIA website, this is not going to work.

  1. Uninstall that driver in a similar way to how it was installed:
    # ./NVIDIA-Linux-*.run --uninstall
  2. Remove the Xorg configuration file generated by NVIDIA:
    # rm /etc/X11/xorg.conf
  3. (Re)install the correct NVIDIA driver: See #Installation.

[ERROR]Cannot access secondary GPU: No devices detected

In some instances, running optirun will return:

[ERROR]Cannot access secondary GPU - error: [XORG] (EE) No devices detected.
[ERROR]Aborting because fallback start is disabled.

In this case, you will need to move the file /etc/X11/xorg.conf.d/20-intel.conf somewhere else and restart the bumblebeed daemon, and it should work. If you do need to change some features for the Intel module, a workaround is to merge /etc/X11/xorg.conf.d/20-intel.conf into /etc/X11/xorg.conf.
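For example (the backup location is arbitrary):

# mv /etc/X11/xorg.conf.d/20-intel.conf /root/20-intel.conf.bak
# systemctl restart bumblebeed.service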

It could also be necessary to comment out the driver line in /etc/X11/xorg.conf.d/10-monitor.conf.

If you are using the nouveau driver you could try switching to the nvidia driver.

You might need to define the NVIDIA card somewhere (e.g. file /etc/bumblebee/xorg.conf.nvidia), using the correct BusID according to lspci output:

Section "Device"
    Identifier     "nvidiagpu1"
    Driver         "nvidia"
    BusID          "PCI:0:1:0"
EndSection

Observe that lspci outputs the bus ID in hexadecimal, while Xorg expects decimal. So if the output of lspci is, for example, 0a:00.0, the BusID should be PCI:10:0:0.

NVIDIA(0): Failed to assign any connected display devices to X screen 0

If the console output is:

[ERROR]Cannot access secondary GPU - error: [XORG] (EE) NVIDIA(0): Failed to assign any connected display devices to X screen 0
[ERROR]Aborting because fallback start is disabled.

If the following line in /etc/bumblebee/xorg.conf.nvidia does not exist, you can add it to the "Device" section:

Option "ConnectedMonitor" "DFP"

If it does already exist, you can try changing it to:

Option "ConnectedMonitor" "CRT"

After that, restart the Bumblebee service to apply these changes.

Failed to initialize the NVIDIA GPU at PCI:1:0:0 (GPU fallen off the bus / RmInitAdapter failed!)

Add rcutree.rcu_idle_gp_delay=1 to the kernel parameters of the boot loader configuration (see also the original BBS post for a configuration example).

Failed to initialize the NVIDIA GPU at PCI:1:0:0 (Bumblebee daemon reported: error: [XORG] (EE) NVIDIA(GPU-0))

You might encounter an issue where, after resuming from sleep, the primusrun or optirun command does not work anymore. There are two ways to fix this: reboot your system, or execute the following command:

# echo 1 > /sys/bus/pci/rescan

Then test whether primusrun or optirun works.

If the above command did not help, try finding your NVIDIA card's bus ID:

$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 02)
01:00.0 VGA compatible controller: nVidia Corporation Device 0df4 (rev a1)

For example, the above command showed 01:00.0, so use the following commands with this bus ID:

# echo 1 > /sys/bus/pci/devices/0000:01:00.0/remove
# echo 1 > /sys/bus/pci/rescan

Could not load GPU driver

If the console output is:

[ERROR]Cannot access secondary GPU - error: Could not load GPU driver

and if you try to load the nvidia module:

# modprobe nvidia
modprobe: ERROR: could not insert 'nvidia': Exec format error

This could be because the nvidia driver is out of sync with the Linux kernel, for example if you installed the latest nvidia driver and have not updated the kernel in a while. A full system update, followed by a reboot into the updated kernel, might resolve the issue. If the problem persists, try building the nvidia module against your current kernel, for example with nvidia-dkms or by compiling nvidia from the ABS.

NOUVEAU(0): [drm] failed to set drm interface version

Consider switching to the official nvidia driver. As commented here, the nouveau driver has issues with some cards and Bumblebee.

[ERROR]Cannot access secondary GPU - error: X did not start properly

Set the "AutoAddDevices" option to "true" in /etc/bumblebee/xorg.conf.nvidia (see here):

Section "ServerLayout"
    Identifier     "Layout0"
    Option         "AutoAddDevices" "true"
    Option         "AutoAddGPU" "false"
EndSection

/dev/dri/card0: failed to set DRM interface version 1.4: Permission denied

This can be worked around by appending the following lines to /etc/bumblebee/xorg.conf.nvidia (see here):

Section "Screen"
    Identifier     "Default Screen"
    Device         "DiscreteNvidia"
EndSection

ERROR: ld.so: object 'libdlfaker.so' from LD_PRELOAD cannot be preloaded: ignored

You probably want to start a 32-bit application with bumblebee on a 64-bit system. See the "For 32-bit..." section in #Installation. If the problem persists or if it is a 64-bit application, try using the primus bridge.

Fatal IO error 11 (Resource temporarily unavailable) on X server

Change KeepUnusedXServer in /etc/bumblebee/bumblebee.conf from false to true. Your program forks into the background and Bumblebee does not know anything about it.
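That is, something like the following (in the default file, the option is in the [bumblebeed] section):

/etc/bumblebee/bumblebee.conf
[bumblebeed]
KeepUnusedXServer=true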

Video tearing

Video tearing is a somewhat common problem on Bumblebee. To fix it, you need to enable vsync. It should be enabled by default on the Intel card, but verify that from Xorg logs. To check whether or not it is enabled for NVIDIA, make sure nvidia-settings is installed and run:

$ optirun nvidia-settings -c :8

X Server XVideo Settings -> Sync to VBlank and OpenGL Settings -> Sync to VBlank should both be enabled. The Intel card generally has less tearing, so use it for video playback. In particular, use VA-API for video decoding (e.g. mplayer-vaapi with the -vsync parameter).

Refer to Intel graphics#Tearing on how to fix tearing on the Intel card.

If it is still not fixed, try to disable compositing from your desktop environment. Try also disabling triple buffering.

Bumblebee cannot connect to socket

You might get something like:

$ optirun glxspheres64

or (for 32 bit):

$ optirun glxspheres32
[ 1648.179533] [ERROR]You have no permission to communicate with the Bumblebee daemon. Try adding yourself to the 'bumblebee' group
[ 1648.179628] [ERROR]Could not connect to bumblebee daemon - is it running?

If you are already in the bumblebee group (groups | grep bumblebee), you may try removing the socket /var/run/bumblebeed.socket.

Another reason for this error could be that you have not actually turned on both GPUs in your BIOS, and as a result, the Bumblebee daemon is in fact not running. Check the BIOS settings carefully and be sure Intel graphics (integrated graphics - may be abbreviated in BIOS as something like igfx) has been enabled or set to auto, and that it is the primary GPU. Your display should be connected to the onboard integrated graphics, not the discrete graphics card.

If you mistakenly had the display connected to the discrete graphics card and Intel graphics was disabled, you probably installed Bumblebee after first trying to run NVIDIA alone. In this case, be sure to remove the /etc/X11/xorg.conf or /etc/X11/xorg.conf.d/20-nvidia.conf configuration files. If Xorg is instructed to use NVIDIA in a configuration file, X will fail.

Running X.org from console after login (rootless X.org)

See Xorg#Rootless Xorg.

Using Primus causes a segmentation fault

In some instances, using primusrun instead of optirun will result in a segfault. This is caused by an issue in the code that auto-detects the faster upload method, see FS#58933.

The workaround is to skip auto-detection by manually setting the PRIMUS_UPLOAD environment variable to either 1 or 2, depending on which one is faster on your setup.

$ PRIMUS_UPLOAD=1 primusrun ...

Primusrun mouse delay (disable VSYNC)

For primusrun, VSYNC is enabled by default; as a result, it can introduce mouse input lag and even slightly decrease performance. Test primusrun with VSYNC disabled:

$ vblank_mode=0 primusrun glxgears

If you are satisfied with the above setting, create an alias (e.g. alias primusrun="vblank_mode=0 primusrun").

Performance comparison:

VSYNC enabled | FPS  | Score | Min FPS | Max FPS
FALSE         | 31.5 | 793   | 22.3    | 54.8
TRUE          | 31.4 | 792   | 18.7    | 54.2

Tested with an ASUS N550JV notebook and the benchmark app unigine-heavenAUR.

Note: To disable vertical synchronization system-wide, see Intel graphics#Disable Vertical Synchronization (VSYNC).

Primus issues under compositing window managers

Since compositing hurts performance, invoking primus when a compositing WM is active is not recommended.[4] If you need to use primus with compositing and see flickering or bad performance, synchronizing primus' display thread with the application's rendering thread may help:

$ PRIMUS_SYNC=1 primusrun ...

This makes primus display the previously rendered frame.

Problems with bumblebee after resuming from standby

On some systems, it can happen that the nvidia module is loaded after resuming from standby. One possible solution for this is to install the acpi_call and acpi packages.

Optirun does not work, no debug output

Users are reporting that in some cases, even though Bumblebee was installed correctly, running

$ optirun glxgears -info

gives no output at all, and the glxgears window does not appear. Any program that needs 3D acceleration crashes:

$ optirun bash
$ glxgears
Segmentation fault (core dumped)

Apparently this is a bug in some versions of virtualgl. A workaround is to install primus and lib32-primus and use it instead:

$ primusrun glxspheres64
$ optirun -b primus glxspheres64

By default primus locks the framerate to the refresh rate of your monitor (usually 60 fps); if needed, it can be unlocked by passing the vblank_mode=0 environment variable.

$ vblank_mode=0 primusrun glxspheres64

Usually there is no need to display more frames than your monitor can handle, but you might want to for benchmarking or to get faster reactions in games (e.g., if a game needs 3 frames to react to a mouse movement, with vblank_mode=0 the reaction will be as quick as your system can handle, while without it the game will always need 1/20 of a second).

You might want to edit /etc/bumblebee/bumblebee.conf to use the primus bridge as the default. If after an update you want to check whether the bug has been fixed, just use optirun -b virtualgl.

See this forum post for more information.

Broken power management with kernel 4.8

This article or section is a candidate for merging with Hybrid graphics#Using bbswitch.

Notes: Keep all info about bbswitch in one place. (Discuss in Talk:Bumblebee)

If you have a newer laptop (BIOS date 2015 or newer), then Linux 4.8 might break bbswitch (bbswitch issue 140) since bbswitch does not support the newer, recommended power management method. As a result, the GPU may fail to power on, fail to power off or worse.

As a workaround, add pcie_port_pm=off to your Kernel parameters.

Alternatively, if you are only interested in power saving (and perhaps use of external monitors), remove bbswitch and rely on Nouveau runtime power-management (which supports the new method).

Note: Some tools such as powertop --auto-tune automatically enable power management on PCI devices, which leads to the same problem [5]. Use the same workaround or do not use the all-in-one power management tools.

Lockup issue (lspci hangs)

See NVIDIA Optimus#Lockup issue (lspci hangs) for an issue that affects new laptops with a GTX 965M (or alike).

Discrete card always on and acpi warnings

Add acpi_osi=Linux to your Kernel parameters. See [6] and [7] for more information.

Screen 0 deleted because of no matching config section

Modify the configuration as follows:

/etc/bumblebee/xorg.conf.nvidia
...
Section "ServerLayout"
...
    Screen 0       "nvidia"
...
EndSection
...
Section "Screen"
    Identifier     "nvidia"
    Device         "DiscreteNvidia"
EndSection
...

Erratic, unpredictable behaviour

If Bumblebee starts/works in a random manner, check that you have set your Network configuration#Local network hostname resolution (details here).

Discrete card always on and nvidia driver cannot be unloaded

Make sure nvidia-persistenced.service is disabled and not currently active. It is intended to keep the nvidia driver running at all times [8], which prevents the card being turned off.

Discrete card is silently activated when EGL is requested by some application

If the discrete card is activated by some program (e.g. mpv with its GPU backend), it might stay on. The cause might be libglvnd loading the nvidia drivers and thereby activating the card.

To disable this, set the environment variable __EGL_VENDOR_LIBRARY_FILENAMES (see documentation) so that only the Mesa configuration file is loaded:

__EGL_VENDOR_LIBRARY_FILENAMES="/usr/share/glvnd/egl_vendor.d/50_mesa.json"
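The variable can be exported in your shell profile or set per application; for example (video.mkv is a placeholder):

$ __EGL_VENDOR_LIBRARY_FILENAMES=/usr/share/glvnd/egl_vendor.d/50_mesa.json mpv video.mkv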

nvidia-utils (and its branches) installs the configuration file at /usr/share/glvnd/egl_vendor.d/10_nvidia.json, which has priority and causes libglvnd to load the nvidia drivers and enable the card.

The other solution is to avoid installing the configuration file provided by nvidia-utils.

Framerate drops to 1 FPS after a fixed period of time

With the nvidia 440.36 driver, the DPMS setting is enabled by default, resulting in a timeout after a fixed period of time (e.g. 10 minutes) which causes the frame rate to throttle down to 1 FPS. To work around this, add the following line to the "Device" section in /etc/bumblebee/xorg.conf.nvidia:

Option "HardDPMS" "false"

Application cannot record screen

Using Bumblebee, applications cannot access the screen to identify and record it. This happens, for example, when using obs-studio with NVENC activated. To solve this, disable the bridging mode by launching the application as optirun -b none command.

See also

  • Bumblebee project repository: https://www.bumblebee-project.org/ (dead link as of 2022-09-17, SSL error)
  • Bumblebee project wiki: https://github.com/Bumblebee-Project/Bumblebee/wiki
  • Bumblebee project bbswitch repository: https://github.com/Bumblebee-Project/bbswitch