[[Category:Graphics]]
[[Category:X server]]
[[es:Bumblebee]]
[[fr:Bumblebee]]
[[it:Bumblebee]]
[[ja:Bumblebee]]
[[ru:Bumblebee]]
[[tr:Bumblebee]]
[[zh-CN:Bumblebee]]
{{Related articles start}}
{{Related|NVIDIA Optimus}}
{{Related|Nouveau}}
{{Related|NVIDIA}}
{{Related|Intel graphics}}
{{Related articles end}}

From Bumblebee's [https://github.com/Bumblebee-Project/Bumblebee/wiki/FAQ FAQ]:

"''Bumblebee is an effort to make NVIDIA Optimus enabled laptops work in GNU/Linux systems. Such feature involves two graphics cards with two different power consumption profiles plugged in a layered way sharing a single framebuffer.''"
  
 
== Bumblebee: Optimus for Linux ==

[http://www.nvidia.com/object/optimus_technology.html Optimus Technology] is a ''[http://hybrid-graphics-linux.tuxfamily.org/index.php?title=Hybrid_graphics hybrid graphics]'' implementation without a hardware multiplexer. The integrated GPU manages the display, while the dedicated GPU handles the most demanding rendering and ships the result to the integrated GPU to be displayed. When the laptop is running on battery supply, the dedicated GPU is turned off to save power and prolong battery life. It has also been tested successfully on desktop machines with Intel integrated graphics and an NVIDIA dedicated graphics card.

Bumblebee is a software implementation consisting of two parts:

* Render programs off-screen on the dedicated video card and display them on the screen using the integrated video card. This bridge is provided by VirtualGL or primus (read further) and connects to an X server started for the discrete video card.
* Disable the dedicated video card when it is not in use (see the [[#Power management]] section).

It tries to mimic the Optimus technology behavior: using the dedicated GPU for rendering when needed and powering it down when not in use. The present releases only support rendering on demand; automatically starting a program with the discrete video card based on workload is not implemented.

== Installation ==
  
Before installing Bumblebee, check your BIOS and activate Optimus (older laptops call it "switchable graphics") if possible (the BIOS does not have to provide this option). If neither "Optimus" nor "switchable" appears in the BIOS, still make sure both GPUs are enabled and that the integrated graphics (igfx) is the initial (primary) display. The display should be connected to the onboard integrated graphics, not the discrete graphics card. If integrated graphics had previously been disabled and discrete graphics drivers installed, be sure to remove {{ic|/etc/X11/xorg.conf}} or the conf file in {{ic|/etc/X11/xorg.conf.d}} related to the discrete graphics card.
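
For example, if the leftover configuration is a plain {{ic|/etc/X11/xorg.conf}} (adjust the path for a file under {{ic|/etc/X11/xorg.conf.d}}):

 # rm /etc/X11/xorg.conf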
  
=== Installing Bumblebee with Intel/NVIDIA ===

Install:

* {{Pkg|bumblebee}} - The main package providing the daemon and client programs.
: {{Note|{{Pkg|bumblebee}} depends on {{Pkg|mesa-libgl}} and provides all {{Pkg|nvidia-libgl}}, {{Pkg|nvidia-340xx-libgl}} and {{Pkg|nvidia-304xx-libgl}} to avoid dependency conflicts between the respective libgl versions.}}
* {{Pkg|mesa}} - An open-source implementation of the '''OpenGL''' specification.
* {{Pkg|xf86-video-intel}} - Intel driver.
* {{Pkg|nvidia}} or {{Pkg|nvidia-340xx}} or {{Pkg|nvidia-304xx}} - Install the appropriate NVIDIA driver. For more information, read [[NVIDIA#Installation]].

For 32-bit application support on 64-bit machines ([[Multilib]] must be enabled), install:
* {{Pkg|lib32-virtualgl}} - A render/display bridge for 32-bit applications.
* {{Pkg|lib32-nvidia-utils}} or {{Pkg|lib32-nvidia-340xx-utils}} or {{Pkg|lib32-nvidia-304xx-utils}} - match the version of the 64-bit package.
* {{Pkg|lib32-mesa-libgl}} - and make sure that {{Pkg|lib32-nvidia-libgl}} is '''not''' installed.
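
If the [[multilib]] repository is not enabled yet, a minimal sketch of enabling it is to uncomment the following in {{ic|/etc/pacman.conf}} and update the package databases:

{{hc|/etc/pacman.conf|2=
[multilib]
Include = /etc/pacman.d/mirrorlist
}}

 # pacman -Syu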

In order to use Bumblebee, it is necessary to add your regular ''user'' to the {{ic|bumblebee}} group:

 # gpasswd -a ''user'' bumblebee

Also [[enable]] {{ic|bumblebeed.service}}. Reboot your system and follow [[#Usage]].
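
For reference, enabling the daemon amounts to:

 # systemctl enable bumblebeed.service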
  
=== Installing Bumblebee with Intel/Nouveau ===

{{Warning|This method is deprecated and [https://github.com/Bumblebee-Project/Bumblebee/issues/773 will not work anymore]. Use the nvidia module instead. If you want nouveau, use [[PRIME]].}}

Install:

* {{Pkg|xf86-video-nouveau}} - experimental 3D acceleration driver.
* {{Pkg|mesa}} - Mesa classic DRI with Gallium3D drivers and 3D graphics libraries.

{{Note|1=If, when using {{ic|primusrun}} on a system with the nouveau driver, you are getting:

 primus: fatal: failed to load any of the libraries: /usr/$LIB/nvidia/libGL.so.1
 /usr/$LIB/nvidia/libGL.so.1: Cannot open shared object file: No such file or directory

you should add the following in {{ic|/usr/bin/primus}} after {{ic|PRIMUS_libGL}}:

 export PRIMUS_libGLa='/usr/$LIB/libGL.so.1'

If you want, create a new script (for example ''primusnouveau'').}}
  
== Usage ==

=== Test ===

Install {{Pkg|mesa-demos}} and use {{ic|glxgears}} to test if Bumblebee works with your Optimus system:

 $ optirun glxgears -info

If it fails, try the following commands:

* 64-bit system:
 $ optirun glxspheres64
* 32-bit system:
 $ optirun glxspheres32

If the window with the animation shows up, Optimus with Bumblebee is working.

{{Note|If {{ic|glxgears}} failed, but {{ic|glxspheres''XX''}} worked, replace "{{ic|glxgears}}" with "{{ic|glxspheres''XX''}}" in all further cases.}}
  
=== General usage ===

 $ optirun [options] ''application'' [application-parameters]

For example, start Windows applications with Optimus:

 $ optirun wine application.exe

For another example, open the NVIDIA Settings panel with Optimus:

 $ optirun -b none nvidia-settings -c :8

: {{Note|A patched version of {{Pkg|nvdock}} is available in the package {{AUR|nvdock-bumblebee}}.}}

For a list of the options for {{ic|optirun}}, view its manual page:

 $ man optirun
 
== Configuration ==

You can configure the behaviour of Bumblebee to fit your needs. Fine tuning like speed optimization, power management and other options can be configured in {{ic|/etc/bumblebee/bumblebee.conf}}.
  
=== Optimizing speed ===

==== Using VirtualGL as bridge ====

Bumblebee renders frames for your Optimus NVIDIA card in an invisible X server with VirtualGL and transports them back to your visible X server. Frames are compressed before they are transported - this saves bandwidth and can be used to speed up Bumblebee.

To use another compression method for a single application:

 $ optirun -c ''compress-method'' application

The compression method affects how the load is split between CPU and GPU: compressed methods (such as {{ic|jpeg}}) load the CPU the most but load the GPU the minimum necessary, while uncompressed methods load the GPU the most and the CPU the least.

Compressed methods
:*{{ic|jpeg}}
:*{{ic|rgb}}
:*{{ic|yuv}}

Uncompressed methods
:*{{ic|proxy}}
:*{{ic|xv}}

Here is a performance table tested with an [[ASUS N550JV]] laptop and the benchmark app {{AUR|unigine-heaven}}:
  
{| class="wikitable"
! Command !! FPS !! Score !! Min FPS !! Max FPS
|-
| optirun unigine-heaven || 25.0 || 630 || 16.4 || 36.1
|-
| optirun -c jpeg unigine-heaven || 24.2 || 610 || 9.5 || 36.8
|-
| optirun -c rgb unigine-heaven || 25.1 || 632 || 16.6 || 35.5
|-
| optirun -c yuv unigine-heaven || 24.9 || 626 || 16.5 || 35.8
|-
| optirun -c proxy unigine-heaven || 25.0 || 629 || 16.0 || 36.1
|-
| optirun -c xv unigine-heaven || 22.9 || 577 || 15.4 || 32.2
|}

{{Note|Lag spikes occurred when the {{ic|jpeg}} compression method was used.}}

To use a standard compression for all applications, set {{ic|VGLTransport}} to the desired ''compress-method'' in {{ic|/etc/bumblebee/bumblebee.conf}}:

{{hc|/etc/bumblebee/bumblebee.conf|2=
[...]
[optirun]
VGLTransport=proxy
[...]
}}
  
 
You can also play with the way VirtualGL reads back the pixels from your graphics card. Setting the {{ic|VGL_READBACK}} environment variable to {{ic|pbo}} should increase performance. Compare these two:

 # PBO should be faster.
 VGL_READBACK=pbo optirun glxgears
 # The default value is sync.
 VGL_READBACK=sync optirun glxgears

{{Note|CPU frequency scaling will directly affect render performance.}}
  
==== Primusrun ====

{{Note|Since compositing hurts performance, invoking primus when a compositing WM is active is not recommended. See [[#Primus issues under compositing window managers]].}}

{{ic|primusrun}} (from package {{Pkg|primus}}) is becoming the default choice, because it consumes less power and sometimes provides better performance than {{ic|optirun}}/{{ic|virtualgl}}. It may be run separately, but it does not accept options as {{ic|optirun}} does. Setting {{ic|primus}} as the bridge for {{ic|optirun}} provides more flexibility.

For 32-bit application support on 64-bit machines, install {{Pkg|lib32-primus}} ([[multilib]] must be enabled).

Usage (run separately):

 $ primusrun glxgears

Usage (as a bridge for {{ic|optirun}}): the default configuration sets {{ic|virtualgl}} as the bridge. Override that on the command line:

 $ optirun -b primus glxgears

Or, set {{ic|1=Bridge=primus}} in {{ic|/etc/bumblebee/bumblebee.conf}} and you won't have to specify it on the command line.
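
A minimal sketch of that setting ({{ic|Bridge}} lives in the {{ic|[optirun]}} section of the default configuration):

{{hc|/etc/bumblebee/bumblebee.conf|2=
[optirun]
Bridge=primus
}}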

{{Tip|Refer to [[#Primusrun mouse delay/disable VSYNC]] if you want to disable {{ic|VSYNC}}. It can also remove mouse input delay lag and slightly increase performance.}}

=== Power management ===

The goal of the power management feature is to turn off the NVIDIA card when it is not used by Bumblebee any more. If {{Pkg|bbswitch}} (or {{Pkg|bbswitch-dkms}}) is installed, it will be detected automatically when the Bumblebee daemon starts. No additional configuration is necessary. However, {{Pkg|bbswitch}} is for [https://bugs.launchpad.net/ubuntu/+source/bbswitch/+bug/1338404/comments/6 Optimus laptops only and will not work on desktop computers]. So, Bumblebee power management is not available for desktop computers, and there is no reason to install {{Pkg|bbswitch}} on a desktop. (Nevertheless, the other features of Bumblebee do work on some desktop computers.)
  
 
==== Default power state of NVIDIA card using bbswitch ====

The default behavior of bbswitch is to leave the card power state unchanged. {{ic|bumblebeed}} does disable the card when started, so the following is only necessary if you use bbswitch without bumblebeed.

Set the {{ic|load_state}} and {{ic|unload_state}} module options according to your needs (see the [https://github.com/Bumblebee-Project/bbswitch bbswitch documentation]).

{{hc|/etc/modprobe.d/bbswitch.conf|2=
options bbswitch load_state=0 unload_state=1
}}
  
 
==== Enable NVIDIA card during shutdown ====

On some laptops, the NVIDIA card may not correctly initialize during boot if the card was powered off when the system was last shut down. Therefore the Bumblebee daemon will power on the GPU when stopping the daemon (e.g. on shutdown) due to the (default) setting {{ic|1=TurnCardOffAtExit=false}} in {{ic|/etc/bumblebee/bumblebee.conf}}. Note that this setting does not influence the power state while the daemon is running, so if all {{ic|optirun}} or {{ic|primusrun}} programs have exited, the GPU will still be powered off.

When you stop the daemon manually, you might want to keep the card powered off while still powering it on on shutdown. To achieve the latter, add the following [[systemd]] service (if using {{Pkg|bbswitch}}):

{{hc|/etc/systemd/system/nvidia-enable.service|2=
[Unit]
Description=Enable NVIDIA card
DefaultDependencies=no

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo ON > /proc/acpi/bbswitch'

[Install]
WantedBy=shutdown.target
}}

Then enable the service by running {{ic|systemctl enable nvidia-enable.service}} at the root prompt.

==== Enable NVIDIA card after waking from suspend ====

The Bumblebee daemon may fail to activate the graphics card after suspending. A possible fix involves setting {{Pkg|bbswitch}} as the default method for power management in {{ic|/etc/bumblebee/bumblebee.conf}}:

{{hc|/etc/bumblebee/bumblebee.conf|2=
[driver-nvidia]
PMMethod=bbswitch

# ...

[driver-nouveau]
PMMethod=bbswitch
}}

{{Note|This fix seems to work only after rebooting the system. Restarting the Bumblebee service is not enough.}}
  
 
=== Multiple monitors ===

==== Outputs wired to the Intel chip ====

If the port (DisplayPort/HDMI/VGA) is wired to the Intel chip, you can set up multiple monitors with xorg.conf. Set them to use the Intel card, but Bumblebee can still use the NVIDIA card. One example configuration is below for two identical screens with 1080p resolution using the HDMI out.

{{hc|/etc/X11/xorg.conf|2=
Section "Screen"
    Identifier     "Screen0"
    Device         "intelgpu0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "TwinView" "0"
    SubSection "Display"
        Depth          24
        Modes          "1920x1080_60.00"
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen1"
    Device         "intelgpu1"
    Monitor        "Monitor1"
    DefaultDepth   24
    Option         "TwinView" "0"
    SubSection "Display"
        Depth          24
        Modes          "1920x1080_60.00"
    EndSubSection
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    Option         "Enable" "true"
EndSection

Section "Monitor"
    Identifier     "Monitor1"
    Option         "Enable" "true"
EndSection

Section "Device"
    Identifier     "intelgpu0"
    Driver         "intel"
    Option         "XvMC" "true"
    Option         "UseEvents" "true"
    Option         "AccelMethod" "UXA"
    BusID          "PCI:0:2:0"
EndSection

Section "Device"
    Identifier     "intelgpu1"
    Driver         "intel"
    Option         "XvMC" "true"
    Option         "UseEvents" "true"
    Option         "AccelMethod" "UXA"
    BusID          "PCI:0:2:0"
EndSection

Section "Device"
    Identifier "nvidiagpu1"
    Driver "nvidia"
    BusID "PCI:0:1:0"
EndSection
}}

You probably need to change the BusID for both the Intel and the NVIDIA card:

{{hc|<nowiki>$ lspci | grep VGA</nowiki>|
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
}}

Here the BusID is 0:2:0.
  
==== Output wired to the NVIDIA chip ====

On some notebooks, the digital video output (HDMI or DisplayPort) is hardwired to the NVIDIA chip. If you want to use all the displays on such a system simultaneously, you have to run two X servers. The first will be using the Intel driver for the notebook's panel and a display connected on VGA. The second will be started through optirun on the NVIDIA card, and drives the digital display.

''intel-virtual-output'' is a tool provided in the {{Pkg|xf86-video-intel}} driver set, as of v2.99. When run in a terminal, it will daemonize itself unless the {{ic|-f}} switch is used. Once the tool is running, it activates Bumblebee (Bumblebee can be left as default install), and any displays attached will be automatically detected and manageable via any desktop display manager such as xrandr or KDE Display. See the [https://github.com/Bumblebee-Project/Bumblebee/wiki/Multi-monitor-setup Bumblebee wiki page] for more information.

{{Note|In {{ic|/etc/bumblebee/xorg.conf.nvidia}} change the lines {{ic|UseEDID}} and {{ic|Option "AutoAddDevices" "false"}} to {{ic|"true"}} if you are having trouble with device resolution detection. You will also need to comment out the line {{ic|Option "UseDisplayDevices" "none"}} in order to use the display connected to the NVIDIA GPU.}}

Command line usage is as follows:

 intel-virtual-output [OPTION]... [TARGET_DISPLAY]...
  -d <source display>  source display
  -f                   keep in foreground (do not detach from console and daemonize)
  -b                   start bumblebee
  -a                   connect to all local displays (e.g. :1, :2, etc)
  -S                   disable use of a singleton and launch a fresh intel-virtual-output process
  -v                   all verbose output, implies -f
  -V <category>        specific verbose output, implies -f
  -h                   this help

If no target displays are given on the command line, ''intel-virtual-output'' will attempt to connect to any local display and then start bumblebee.[http://cgit.freedesktop.org/xorg/driver/xf86-video-intel/tree/tools/]

The advantage of using ''intel-virtual-output'' in foreground mode is that once the external display is disconnected, ''intel-virtual-output'' can then be killed and Bumblebee will disable the NVIDIA chip. Games can be run on the external screen by first exporting the display with {{ic|1=export DISPLAY=:8}} and then running the game with {{ic|optirun ''game_bin''}}; however, cursor and keyboard are not fully captured. Use {{ic|1=export DISPLAY=:0}} to revert to standard operation.
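
For example (''game_bin'' being a placeholder for the actual game executable):

 $ export DISPLAY=:8
 $ optirun game_bin
 $ export DISPLAY=:0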

== Switch between discrete and integrated like Windows ==

In Windows, the way Optimus works is that NVIDIA has a whitelist of applications that require Optimus, and you can add applications to this whitelist as needed. When you launch an application, it automatically decides which card to use.

To mimic this behavior in Linux, you can use {{AUR|libgl-switcheroo-git}}{{Broken package link|{{aur-mirror|libgl-switcheroo-git}}}}. After installing, you can add the below to your .xprofile:

{{hc|~/.xprofile|2=
mkdir -p /tmp/libgl-switcheroo-$USER/fs
gtkglswitch &
libgl-switcheroo /tmp/libgl-switcheroo-$USER/fs &
}}

To enable this, you must add the below to the shell from which you intend to launch applications (or simply add it to the .xprofile file as well):

 export LD_LIBRARY_PATH=/tmp/libgl-switcheroo-$USER/fs/\$LIB${LD_LIBRARY_PATH+:}$LD_LIBRARY_PATH

Once this has all been done, every application you launch from this shell will pop up a GTK+ window asking which card you want to run it with (you can also add an application to the whitelist in the configuration). The configuration is located in {{ic|$XDG_CONFIG_HOME/libgl-switcheroo.conf}}, usually {{ic|~/.config/libgl-switcheroo.conf}}.

{{Note|This tool acts by creating a FUSE filesystem and adding it to the dynamic library search path, which may lead to slowness or even segmentation faults when launching software.}}

== CUDA without Bumblebee ==

You can use CUDA without Bumblebee. All you need to do is ensure that the NVIDIA card is on:

 # tee /proc/acpi/bbswitch <<< ON

Now when you start a CUDA application, it will automatically load all the necessary modules.

To turn off the NVIDIA card after using CUDA, do:

 # rmmod nvidia_uvm
 # rmmod nvidia
 # tee /proc/acpi/bbswitch <<< OFF
  
== Troubleshooting ==

{{Note|Please report bugs at [https://github.com/Bumblebee-Project/Bumblebee Bumblebee-Project]'s GitHub tracker as described in its [https://github.com/Bumblebee-Project/Bumblebee/wiki/Reporting-Issues wiki].}}

=== [VGL] ERROR: Could not open display :8 ===

There is a known problem with some wine applications that fork and kill the parent process without keeping track of it (for example the free-to-play online game "Runes of Magic").

This is a known problem with VirtualGL. As of bumblebee 3.1, so long as you have it installed, you can use Primus as your render bridge:

 $ optirun -b primus wine ''windows program''.exe

If this does not work, an alternative workaround for this problem is:

 $ optirun bash
 $ optirun wine ''windows program''.exe

If using NVIDIA drivers, a fix for this problem is to edit {{ic|/etc/bumblebee/xorg.conf.nvidia}} and change the option {{ic|ConnectedMonitor}} to {{ic|CRT-0}}.
  
=== Xlib: extension "GLX" missing on display ":0.0" ===

If you tried to install the NVIDIA driver from the NVIDIA website, this is not going to work.

1. Uninstall that driver in the similar way:
 # ./NVIDIA-Linux-*.run --uninstall
2. Remove the Xorg configuration file generated by NVIDIA:
 # rm /etc/X11/xorg.conf
3. (Re)install the correct NVIDIA driver: [[#Installing Bumblebee with Intel/NVIDIA]]

=== [ERROR]Cannot access secondary GPU: No devices detected ===

In some instances, running {{ic|optirun}} will return:

 [ERROR]Cannot access secondary GPU - error: [XORG] (EE) No devices detected.
 [ERROR]Aborting because fallback start is disabled.

In this case, you will need to move the file {{ic|/etc/X11/xorg.conf.d/20-intel.conf}} somewhere else, [[restart]] the bumblebeed daemon and it should work. If you do need to change some features for the Intel module, a workaround is to merge {{ic|/etc/X11/xorg.conf.d/20-intel.conf}} into {{ic|/etc/X11/xorg.conf}}.

It could also be necessary to comment out the driver line in {{ic|/etc/X11/xorg.conf.d/10-monitor.conf}}.

If you are using the {{ic|nouveau}} driver, you could try switching to the {{ic|nvidia}} driver.

You might need to define the NVIDIA card somewhere (e.g. in a file under {{ic|/etc/X11/xorg.conf.d}}), using the correct {{ic|BusID}} according to the {{ic|lspci}} output:

{{bc|
Section "Device"
    Identifier "nvidiagpu1"
    Driver "nvidia"
    BusID "PCI:0:1:0"
EndSection
}}

Observe that the {{ic|lspci}} output is in hexadecimal, while in Xorg it is in decimal. So if the output of {{ic|lspci}} is, for example, {{ic|0a:00.0}}, the {{ic|BusID}} should be {{ic|PCI:10:0:0}}.

==== NVIDIA(0): Failed to assign any connected display devices to X screen 0 ====

If the console output is:

 [ERROR]Cannot access secondary GPU - error: [XORG] (EE) NVIDIA(0): Failed to assign any connected display devices to X screen 0
 [ERROR]Aborting because fallback start is disabled.

You can change this line in {{ic|/etc/bumblebee/xorg.conf.nvidia}}:

 Option "ConnectedMonitor" "DFP"

to:

 Option "ConnectedMonitor" "CRT"

==== Failed to initialize the NVIDIA GPU at PCI:1:0:0 (GPU fallen off the bus / RmInitAdapter failed!) ====

Add {{ic|1=rcutree.rcu_idle_gp_delay=1}} to the [[kernel parameters]] of the [[bootloader]] configuration (see also the original [https://bbs.archlinux.org/viewtopic.php?id=169742 BBS post] for a configuration example).
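
A minimal sketch, assuming [[GRUB]] is the bootloader ({{ic|quiet}} stands in for whatever parameters are already present): append the parameter to {{ic|GRUB_CMDLINE_LINUX_DEFAULT}} in {{ic|/etc/default/grub}} and regenerate the configuration:

{{hc|/etc/default/grub|2=
GRUB_CMDLINE_LINUX_DEFAULT="quiet rcutree.rcu_idle_gp_delay=1"
}}

 # grub-mkconfig -o /boot/grub/grub.cfg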

==== Could not load GPU driver ====

If the console output is:

 [ERROR]Cannot access secondary GPU - error: Could not load GPU driver

and if you try to load the nvidia module you get:

 # modprobe nvidia
 modprobe: ERROR: could not insert 'nvidia': Exec format error

This could be because the nvidia driver is out of sync with the Linux kernel, for example if you installed the latest nvidia driver but have not updated the kernel in a while. A full system update might resolve the issue. If the problem persists, try manually compiling the nvidia packages against your current kernel, for example with {{Pkg|nvidia-dkms}} or by compiling {{Pkg|nvidia}} from the [[ABS]].
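
For example, a full system upgrade:

 # pacman -Syu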

==== NOUVEAU(0): [drm] failed to set drm interface version ====

Consider switching to the official nvidia driver. As commented [https://github.com/Bumblebee-Project/Bumblebee/issues/438#issuecomment-22005923 here], the nouveau driver has some issues with some cards and Bumblebee.

=== /dev/dri/card0: failed to set DRM interface version 1.4: Permission denied ===

This could be worked around by appending the following lines in {{ic|/etc/bumblebee/xorg.conf.nvidia}} (see [https://github.com/Bumblebee-Project/Bumblebee/issues/580 here]):

{{bc|
Section "Screen"
    Identifier "Default Screen"
    Device "DiscreteNvidia"
EndSection
}}

=== ERROR: ld.so: object 'libdlfaker.so' from LD_PRELOAD cannot be preloaded: ignored ===

You probably want to start a 32-bit application with Bumblebee on a 64-bit system. See the "For 32-bit..." section in [[#Installation]]. If the problem persists or it is a 64-bit application, try using the [[#Primusrun|primus bridge]].
  
 
=== Fatal IO error 11 (Resource temporarily unavailable) on X server ===

=== Video tearing ===

Video tearing is a somewhat common problem on Bumblebee. To fix it, you need to enable vsync. It should be enabled by default on the Intel card, but verify that from the Xorg logs. To check whether or not it is enabled for NVIDIA, run:

 $ optirun nvidia-settings -c :8

{{ic|1=X Server XVideo Settings -> Sync to VBlank}} and {{ic|1=OpenGL Settings -> Sync to VBlank}} should both be enabled. The Intel card has in general less tearing, so use it for video playback. Especially use VA-API for video decoding (e.g. {{ic|mplayer-vaapi}} with the {{ic|-vsync}} parameter).

Refer to the [[Intel#Video_tearing|Intel]]{{Broken section link}} article on how to fix tearing on the Intel card.

If it is still not fixed, try to disable compositing from your desktop environment. Also try disabling triple buffering.

=== Bumblebee cannot connect to socket ===

You might get something like:

 $ optirun glxspheres64

or (for 32 bit):

{{hc|$ optirun glxspheres32|
[ 1648.179533] [ERROR]You've no permission to communicate with the Bumblebee daemon. Try adding yourself to the 'bumblebee' group
[ 1648.179628] [ERROR]Could not connect to bumblebee daemon - is it running?
}}

If you are already in the {{ic|bumblebee}} group ({{ic|<nowiki>$ groups | grep bumblebee</nowiki>}}), you may try [https://bbs.archlinux.org/viewtopic.php?pid=1178729#p1178729 removing the socket] {{ic|/var/run/bumblebeed.socket}}.
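
For example, removing the socket and then restarting the daemon:

 # rm /var/run/bumblebeed.socket
 # systemctl restart bumblebeed.service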

Another reason for this error could be that you have not actually turned on both GPUs in your BIOS, and as a result the Bumblebee daemon is in fact not running. Check the BIOS settings carefully, and be sure that the Intel graphics (integrated graphics, possibly abbreviated in the BIOS as something like igfx) has been enabled or set to auto, and that it is the primary GPU. Your display should be connected to the onboard integrated graphics, not the discrete graphics card.

If you mistakenly had the display connected to the discrete graphics card and the Intel graphics was disabled, you probably installed Bumblebee after first trying to run NVIDIA alone. In this case, be sure to remove the {{ic|/etc/X11/xorg.conf}} or {{ic|.../20-nvidia...}} configuration files. If Xorg is instructed to use NVIDIA in a conf file, X will fail.

=== Running X.org from console after login (rootless X.org) ===

See [[Xorg#Rootless Xorg (v1.16)]].

=== Primusrun mouse delay/disable VSYNC ===

For {{ic|primusrun}}, {{ic|VSYNC}} is enabled by default. As a result, it can introduce mouse input delay lag or even slightly decrease performance. Test {{ic|primusrun}} with {{ic|VSYNC}} disabled:

 $ vblank_mode=0 primusrun glxgears

{{Style|Useless package, equivalent to {{ic|1=alias optiprime="vblank_mode=0 primusrun"}}. You can even use {{ic|1=alias primusrun="vblank_mode=0 primusrun"}}.}}

If you want to keep using it, install the {{AUR|optiprime}} package, which provides a script for the above command. Usage:

 $ optiprime glxgears

Comparison:

{| class="wikitable"
! Command !! FPS !! Score !! Min FPS !! Max FPS
|-
| optiprime unigine-heaven || 31.5 || 793 || 22.3 || 54.8
|-
| primusrun unigine-heaven || 31.4 || 792 || 18.7 || 54.2
|}
''Tested with an [[ASUS N550JV]] laptop and the benchmark app {{AUR|unigine-heaven}}.''

{{Note|To disable vertical synchronization system-wide, see [[Intel graphics#Disable Vertical Synchronization (VSYNC)]].}}

=== Primus issues under compositing window managers ===

Since compositing hurts performance, invoking primus when a compositing WM is active is not recommended.[https://github.com/amonakov/primus#issues-under-compositing-wms] If you need to use primus with compositing and see flickering or bad performance, synchronizing primus' display thread with the application's rendering thread may help:

 $ PRIMUS_SYNC=1 primusrun ...

This makes primus display the previously rendered frame.

=== Problems with bumblebee after resuming from standby ===

On some systems, it can happen that the nvidia module is loaded after resuming from standby. The solution for this is to install the {{Pkg|acpi_call}} and {{Pkg|acpi}} packages.
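
For example:

 # pacman -S acpi_call acpi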

=== Optirun doesn't work, no debug output ===

Users are reporting that in some cases, even though Bumblebee was installed correctly, running

 $ optirun glxgears -info

gives no output at all, and the glxgears window does not appear. Any program that needs 3D acceleration crashes:

 $ optirun bash
 $ glxgears
 Segmentation fault (core dumped)

Apparently this is a bug in some versions of virtualgl. A workaround is to [[install]] {{Pkg|primus}} and {{Pkg|lib32-primus}} and use them instead:

 $ primusrun glxspheres64
 $ optirun -b primus glxspheres64

By default primus locks the framerate to the refresh rate of your monitor (usually 60 fps); if needed, it can be unlocked by passing the {{ic|1=vblank_mode=0}} environment variable:

 $ vblank_mode=0 primusrun glxspheres64

Usually there is no need to display more frames than your monitor can handle, but you might want to for benchmarking, or to have faster reactions in games (e.g., if a game needs 3 frames to react to a mouse movement, with {{ic|1=vblank_mode=0}} the reaction will be as quick as your system can handle; without it, it will always need 1/20 of a second).

You might want to edit {{ic|/etc/bumblebee/bumblebee.conf}} to use the primus bridge by default. If after an update you want to check whether the bug has been fixed, just use {{ic|optirun -b virtualgl}}.

See [https://bbs.archlinux.org/viewtopic.php?pid=1643609 this forum post] for more information.

=== Broken power management with kernel 4.8 ===

If you have a newer laptop (BIOS date 2015 or newer), then Linux 4.8 might break bbswitch ([https://github.com/Bumblebee-Project/bbswitch/issues/140 bbswitch issue 140]) since bbswitch does not support the newer, recommended power management method. As a result, the dGPU may fail to power on, fail to power off, or worse.

As a workaround, add {{ic|1=pcie_port_pm=off}} to your [[kernel parameters]].

Alternatively, if you are only interested in power saving (and perhaps the use of external monitors), remove bbswitch and rely on [[Nouveau]] runtime power management (which supports the new method).

=== Lockup issue (lspci hangs) ===

See [[NVIDIA Optimus#Lockup issue (lspci hangs)]] for an issue that affects new laptops with a GTX 965M (or similar).

== See also ==

* [http://www.bumblebee-project.org Bumblebee project repository]
* [http://wiki.bumblebee-project.org/ Bumblebee project wiki]
* [https://github.com/Bumblebee-Project/bbswitch Bumblebee project bbswitch repository]

Join us at #bumblebee at freenode.net.

Latest revision as of 23:50, 12 November 2016

From Bumblebee's FAQ:

"Bumblebee is an effort to make NVIDIA Optimus enabled laptops work in GNU/Linux systems. Such feature involves two graphics cards with two different power consumption profiles plugged in a layered way sharing a single framebuffer."

Contents

Bumblebee: Optimus for Linux

Optimus Technology is a hybrid graphics implementation without a hardware multiplexer. The integrated GPU manages the display while the dedicated GPU manages the most demanding rendering and ships the work to the integrated GPU to be displayed. When the laptop is running on battery supply, the dedicated GPU is turned off to save power and prolong the battery life. It has also been tested successfully with desktop machines with Intel integrated graphics and an nVidia dedicated graphics card.

Bumblebee is a software implementation comprising of two parts:

  • Render programs off-screen on the dedicated video card and display it on the screen using the integrated video card. This bridge is provided by VirtualGL or primus (read further) and connects to a X server started for the discrete video card.
  • Disable the dedicated video card when it is not in use (see the #Power management section)

It tries to mimic the Optimus technology behavior; using the dedicated GPU for rendering when needed and power it down when not in use. The present releases only support rendering on-demand, automatically starting a program with the discrete video card based on workload is not implemented.

Installation

Before installing Bumblebee, check your BIOS and activate Optimus (older laptops call it "switchable graphics") if possible (BIOS doesn't have to provide this option). If neither "Optimus" or "switchable" is in the bios, still make sure both gpu's will be enabled and that the integrated graphics (igfx) is initial display (primary display). The display should be connected to the onboard integrated graphics, not the discrete graphics card. If integrated graphics had previously been disabled and discrete graphics drivers installed, be sure to remove /etc/X11/xorg.conf or the conf file in /etc/X11/xorg.conf.d related to the discrete graphics card.

Installing Bumblebee with Intel/NVIDIA

Install:

  • bumblebee - The main package providing the daemon and client programs.
Note: bumblebee depends on mesa-libgl and provides all nvidia-libgl, nvidia-340xx-libgl and nvidia-304xx-libgl to avoid dependency conflict between the respective libgl versions.

For 32-bit (Multilib must be enabled) applications support on 64-bit machines, install:

In order to use Bumblebee, it is necessary to add your regular user to the bumblebee group:

# gpasswd -a user bumblebee

Also enable bumblebeed.service. Reboot your system and follow #Usage.

Installing Bumblebee with Intel/Nouveau

Warning: This method is deprecated and will not work anymore. Use the nvidia module instead. If you want nouveau, use PRIME.

Install:

  • xf86-video-nouveau - experimental 3D acceleration driver.
  • mesa - Mesa classic DRI with Gallium3D drivers and 3D graphics libraries.
Note: If, when using primusrun on a system with the nouveau driver, you are getting:
primus: fatal: failed to load any of the libraries: /usr/$LIB/nvidia/libGL.so.1 
/usr/$LIB/nvidia/libGL.so.1: Cannot open shared object file: No such file or directory

You should add the following in /usr/bin/primus after PRIMUS_libGL:

export PRIMUS_libGLa='/usr/$LIB/libGL.so.1'
If you want, create a new script (for example primusnouveau).

Usage

Test

Install mesa-demos and use glxgears to test if if Bumblebee works with your Optimus system:

$ optirun glxgears -info

If it fails, try the following commands:

  • 64 bit system:
$ optirun glxspheres64
  • 32 bit system:
$ optirun glxspheres32

If the window with animation shows up, Optimus with Bumblebee is working.

Note: If glxgears failed, but glxspheresXX worked, always replace "glxgears" with "glxspheresXX" in all cases.

General usage

$ optirun [options] application [application-parameters]

For example, start Windows applications with Optimus:

$ optirun wine application.exe

For another example, open NVIDIA Settings panel with Optimus:

$ optirun -b none nvidia-settings -c :8
Note: A patched version of nvdock is available in the package nvdock-bumblebeeAUR

For a list of the options for optirun, view its manual page:

$ man optirun

Configuration

You can configure the behaviour of Bumblebee to fit your needs. Fine tuning like speed optimization, power management and other stuff can be configured in /etc/bumblebee/bumblebee.conf

Optimizing speed

Using VirtualGL as bridge

Bumblebee renders frames for your Optimus NVIDIA card in an invisible X Server with VirtualGL and transports them back to your visible X Server. Frames will be compressed before they are transported - this saves bandwidth and can be used for speed-up optimization of bumblebee:

To use another compression method for a single application:

$ optirun -c compress-method application

The method of compress will affect performance in the GPU/CPU usage. Compressed methods will mostly load the CPU. However, uncompressed methods will mostly load the GPU.

Compressed methods

  • jpeg
  • rgb
  • yuv

Uncompressed methods

  • proxy
  • xv

Here is a performance table tested with ASUS N550JV laptop and benchmark app unigine-heavenAUR:

Command FPS Score Min FPS Max FPS
optirun unigine-heaven 25.0 630 16.4 36.1
optirun -c jpeg unigine-heaven 24.2 610 9.5 36.8
optirun -c rgb unigine-heaven 25.1 632 16.6 35.5
optirun -c yuv unigine-heaven 24.9 626 16.5 35.8
optirun -c proxy unigine-heaven 25.0 629 16.0 36.1
optirun -c xv unigine-heaven 22.9 577 15.4 32.2
Note: Lag spikes occurred when jpeg compression method was used.

To use a standard compression for all applications, set the VGLTransport to compress-method in /etc/bumblebee/bumblebee.conf:

/etc/bumblebee/bumblebee.conf
[...]
[optirun]
VGLTransport=proxy
[...]

You can also play with the way VirtualGL reads back the pixels from your graphic card. Setting VGL_READBACK environment variable to pbo should increase the performance. Compare these two:

# PBO should be faster.
VGL_READBACK=pbo optirun glxgears
# The default value is sync.
VGL_READBACK=sync optirun glxgears
Note: CPU frequency scaling will affect directly on render performance

Primusrun

Note: Since compositing hurts performance, invoking primus when a compositing WM is active is not recommended. See #Primus issues under compositing window managers.

primusrun (from package primus) is becoming the default choice, because it consumes less power and sometimes provides better performance than optirun/virtualgl. It may be run separately, but it does not accept options as optirun does. Setting primus as the bridge for optirun provides more flexibility.

For 32-bit applications support on 64-bit machines, install lib32-primus (multilib must be enabled).

Usage (run separately):

$ primusrun glxgears

Usage (as a bridge for optirun):

The default configuration sets virtualgl as the bridge. Override that on the command line:

$ optirun -b primus glxgears

Or, set Bridge=primus in /etc/bumblebee/bumblebee.conf and you won't have to specify it on the command line.

Tip: Refer to #Primusrun mouse delay/disable VSYNC if you want to disable VSYNC. It can also remove mouse input delay lag and slightly increase the performance.

Power management

The goal of the power management feature is to turn off the NVIDIA card when it is not used by Bumblebee any more. If bbswitch (or bbswitch-dkms) is installed, it will be detected automatically when the Bumblebee daemon starts. No additional configuration is necessary. However, bbswitch is for Optimus laptops only and will not work on desktop computers. So, Bumblebee power management is not available for desktop computers, and there is no reason to install bbswitch on a desktop. (Nevertheless, the other features of Bumblebee do work on some desktop computers.)

Default power state of NVIDIA card using bbswitch

The default behavior of bbswitch is to leave the card power state unchanged. bumblebeed does disable the card when started, so the following is only necessary if you use bbswitch without bumblebeed.

Set load_state and unload_state module options according to your needs (see bbswitch documentation).

/etc/modprobe.d/bbswitch.conf
options bbswitch load_state=0 unload_state=1

Enable NVIDIA card during shutdown

On some laptops, the NVIDIA card may not correctly initialize during boot if the card was powered off when the system was last shutdown. Therefore the Bumblebee daemon will power on the GPU when stopping the daemon (e.g. on shutdown) due to the (default) setting TurnCardOffAtExit=false in /etc/bumblebee/bumblebee.conf. Note that this setting does not influence power state while the daemon is running, so if all optirun or primusrun programs have exited, the GPU will still be powered off.

When you stop the daemon manually, you might want to keep the card powered off while still powering it on on shutdown. To achieve the latter, add the following systemd service (if using bbswitch):

/etc/systemd/system/nvidia-enable.service
[Unit]
Description=Enable NVIDIA card
DefaultDependencies=no

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo ON > /proc/acpi/bbswitch'

[Install]
WantedBy=shutdown.target

Then enable the service by running systemctl enable nvidia-enable.service at the root prompt.

Enable NVIDIA card after waking from suspend

The bumblebee daemon may fail to activate the graphics card after suspending. A possible fix involves setting bbswitch as the default method for power management in /etc/bumblebee/bumblebee.conf:

/etc/bumblebee/bumblebee.conf
[driver-nvidia]
PMmethod=bbswitch

# ...

[driver-nouveau]
PMmethod=bbswitch
Note: This fix seems to work only after rebooting the system. Restarting the bumblebee service is not enough.

Multiple monitors

Outputs wired to the Intel chip

If the port (DisplayPort/HDMI/VGA) is wired to the Intel chip, you can set up multiple monitors with xorg.conf. Set them to use the Intel card, but Bumblebee can still use the NVIDIA card. One example configuration is below for two identical screens with 1080p resolution and using the HDMI out.

/etc/X11/xorg.conf
Section "Screen"
    Identifier     "Screen0"
    Device         "intelgpu0"
    Monitor        "Monitor0"
    DefaultDepth    24
    Option         "TwinView" "0"
    SubSection "Display"
        Depth          24
        Modes          "1980x1080_60.00"
    EndSubSection
EndSection

Section "Screen"
    Identifier     "Screen1"
    Device         "intelgpu1"
    Monitor        "Monitor1"
    DefaultDepth   24
    Option         "TwinView" "0"
    SubSection "Display"
        Depth          24
        Modes          "1980x1080_60.00"
    EndSubSection
EndSection

Section "Monitor"
    Identifier     "Monitor0"
    Option         "Enable" "true"
EndSection

Section "Monitor"
    Identifier     "Monitor1"
    Option         "Enable" "true"
EndSection

Section "Device"
    Identifier     "intelgpu0"
    Driver         "intel"
    Option         "XvMC" "true"
    Option         "UseEvents" "true"
    Option         "AccelMethod" "UXA"
    BusID          "PCI:0:2:0"
EndSection

Section "Device"
    Identifier     "intelgpu1"
    Driver         "intel"
    Option         "XvMC" "true"
    Option         "UseEvents" "true"
    Option         "AccelMethod" "UXA"
    BusID          "PCI:0:2:0"
EndSection

Section "Device"
    Identifier "nvidiagpu1"
    Driver "nvidia"
    BusID "PCI:0:1:0"
EndSection

You need to probably change the BusID for both the Intel and the NVIDIA card.

$ lspci | grep VGA
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)

The BusID is 0:2:0

Output wired to the NVIDIA chip

On some notebooks, the digital Video Output (HDMI or DisplayPort) is hardwired to the NVIDIA chip. If you want to use all the displays on such a system simultaneously, you have to run 2 X Servers. The first will be using the Intel driver for the notebooks panel and a display connected on VGA. The second will be started through optirun on the NVIDIA card, and drives the digital display.

intel-virtual-output is a tool provided in the xf86-video-intel driver set, as of v2.99. When run in a terminal, it will daemonize itself unless the -f switch is used. Once the tool is running, it activates Bumblebee (Bumblebee can be left as default install), and any displays attached will be automatically detected, and manageable via any desktop display manager such as xrandr or KDE Display. See the Bumblebee wiki page for more information.

Note: In /etc/bumblebee/xorg.conf.nvidia, set the options UseEDID and AutoAddDevices to "true" if you are having trouble with display resolution detection. You will also need to comment out the line Option "UseDisplayDevices" "none" in order to use a display connected to the NVIDIA GPU.
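With those changes applied, the relevant option lines in /etc/bumblebee/xorg.conf.nvidia would look roughly like this (a sketch; the surrounding sections of your file may differ):

Option "UseEDID" "true"
Option "AutoAddDevices" "true"
# Option "UseDisplayDevices" "none"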

Command-line usage is as follows:

intel-virtual-output [OPTION]... [TARGET_DISPLAY]...
 -d <source display>  source display
 -f                   keep in foreground (do not detach from console and daemonize)
 -b                   start bumblebee
 -a                   connect to all local displays (e.g. :1, :2, etc)
 -S                   disable use of a singleton and launch a fresh intel-virtual-output process
 -v                   all verbose output, implies -f
 -V <category>        specific verbose output, implies -f
 -h                   this help

If no target displays are passed on the command line, intel-virtual-output will attempt to connect to any local display and then start bumblebee.[1]

The advantage of using intel-virtual-output in foreground mode is that once the external display is disconnected, intel-virtual-output can be killed and bumblebee will disable the NVIDIA chip. Games can be run on the external screen by first exporting the display with export DISPLAY=:8 and then running the game with optirun game_bin; note, however, that the cursor and keyboard are not fully captured. Use export DISPLAY=:0 to revert to standard operation.
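For example, assuming Bumblebee's X server runs on the default display :8 and game_bin is a placeholder for the game's executable:

$ export DISPLAY=:8
$ optirun game_bin
$ export DISPLAY=:0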

Switch between discrete and integrated like Windows

In Windows, Optimus works through a whitelist, maintained by NVIDIA, of applications that require the dedicated GPU; you can add applications to this whitelist as needed. When you launch an application, the driver automatically decides which card to use.

To mimic this behavior in Linux, you can use libgl-switcheroo-gitAUR[broken link: archived in aur-mirror]. After installing, add the snippet below to your ~/.xprofile.

~/.xprofile
mkdir -p /tmp/libgl-switcheroo-$USER/fs
gtkglswitch &
libgl-switcheroo /tmp/libgl-switcheroo-$USER/fs &

To enable this, you must add the line below to the shell from which you intend to launch applications (adding it to the ~/.xprofile file also works):

export LD_LIBRARY_PATH=/tmp/libgl-switcheroo-$USER/fs/\$LIB${LD_LIBRARY_PATH+:}$LD_LIBRARY_PATH

Once this has all been done, every application you launch from this shell will pop up a GTK+ window asking which card you want to run it with (you can also add an application to the whitelist in the configuration). The configuration is located in $XDG_CONFIG_HOME/libgl-switcheroo.conf, usually ~/.config/libgl-switcheroo.conf.

Note: This tool works by creating a FUSE filesystem and adding it to the dynamic library search path, which may lead to slowness or even segmentation faults when launching software.

CUDA without Bumblebee

You can use CUDA without Bumblebee. All you need to do is ensure that the NVIDIA card is powered on:

 # tee /proc/acpi/bbswitch <<< ON

Now when you start a CUDA application, it will automatically load all the necessary modules.
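You can verify the card's power state at any time by reading the same file; the PCI bus address in the output (here assumed to be 0000:01:00.0) varies per machine:

 $ cat /proc/acpi/bbswitch
 0000:01:00.0 ON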

To turn off the NVIDIA card after using CUDA, do:

 # rmmod nvidia_uvm
 # rmmod nvidia
 # tee /proc/acpi/bbswitch <<< OFF

Troubleshooting

Note: Please report bugs at Bumblebee-Project's GitHub tracker as described in its wiki.

[VGL] ERROR: Could not open display :8

There is a known problem with some wine applications that fork and kill the parent process without keeping track of it (for example, the free-to-play online game "Runes of Magic").

This is a known problem with VirtualGL. As of bumblebee 3.1, you can use primus as your render bridge instead, provided it is installed:

$ optirun -b primus wine windows_program.exe

If this does not work, an alternative workaround for this problem is:

$ optirun bash
$ optirun wine windows_program.exe

If using the NVIDIA driver, a fix for this problem is to edit /etc/bumblebee/xorg.conf.nvidia and change the value of Option "ConnectedMonitor" to CRT-0.

Xlib: extension "GLX" missing on display ":0.0"

If you tried to install the NVIDIA driver from the NVIDIA website, this will not work.

1. Uninstall that driver:

# ./NVIDIA-Linux-*.run --uninstall

2. Remove the Xorg configuration file generated by NVIDIA:

# rm /etc/X11/xorg.conf

3. (Re)install the correct NVIDIA driver: #Installing Bumblebee with Intel/NVIDIA

[ERROR]Cannot access secondary GPU: No devices detected

In some instances, running optirun will return:

[ERROR]Cannot access secondary GPU - error: [XORG] (EE) No devices detected.
[ERROR]Aborting because fallback start is disabled.

In this case, you will need to move the file /etc/X11/xorg.conf.d/20-intel.conf elsewhere and restart the bumblebeed daemon, after which it should work. If you do need to change some features for the Intel module, a workaround is to merge /etc/X11/xorg.conf.d/20-intel.conf into /etc/X11/xorg.conf.
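For example (a sketch; renaming the file is enough, since Xorg only reads files ending in .conf):

 # mv /etc/X11/xorg.conf.d/20-intel.conf /etc/X11/xorg.conf.d/20-intel.conf.bak
 # systemctl restart bumblebeed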

It may also be necessary to comment out the driver line in /etc/X11/xorg.conf.d/10-monitor.conf.

If you are using the nouveau driver, you could try switching to the nvidia driver.

You might need to define the NVIDIA card somewhere (e.g. in a file in /etc/X11/xorg.conf.d/), using the correct BusID according to the lspci output:

Section "Device"
    Identifier "nvidiagpu1"
    Driver "nvidia"
    BusID "PCI:0:1:0"
EndSection

Observe that lspci displays the BusID in hexadecimal, while Xorg expects decimal values. So if lspci shows, for example, 0a:00.0, the BusID should be PCI:10:0:0.


NVIDIA(0): Failed to assign any connected display devices to X screen 0

If the console output is:

[ERROR]Cannot access secondary GPU - error: [XORG] (EE) NVIDIA(0): Failed to assign any connected display devices to X screen 0
[ERROR]Aborting because fallback start is disabled.

You can change this line in /etc/bumblebee/xorg.conf.nvidia:

Option "ConnectedMonitor" "DFP"

to:

Option "ConnectedMonitor" "CRT"

Failed to initialize the NVIDIA GPU at PCI:1:0:0 (GPU fallen off the bus / RmInitAdapter failed!)

Add rcutree.rcu_idle_gp_delay=1 to the kernel parameters of the bootloader configuration (see also the original BBS post for a configuration example).
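For example, assuming GRUB is the boot loader, append the parameter to the existing GRUB_CMDLINE_LINUX_DEFAULT line (the "..." stands for your existing parameters) and regenerate the configuration:

/etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="... rcutree.rcu_idle_gp_delay=1"

 # grub-mkconfig -o /boot/grub/grub.cfg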

Could not load GPU driver

If the console output is:

[ERROR]Cannot access secondary GPU - error: Could not load GPU driver

and if you try to load the nvidia module you get:

# modprobe nvidia
modprobe: ERROR: could not insert 'nvidia': Exec format error

This could be because the nvidia driver is out of sync with the Linux kernel, for example if you installed the latest nvidia driver but have not updated the kernel in a while. A full system update might resolve the issue. If the problem persists, try manually compiling the nvidia packages against your current kernel, for example with nvidia-dkms or by compiling nvidia from the ABS.
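For example, a full update followed by switching to the DKMS package (a sketch; the headers package must match your installed kernel):

 # pacman -Syu
 # pacman -S nvidia-dkms linux-headers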

NOUVEAU(0): [drm] failed to set drm interface version

Consider switching to the official nvidia driver. As commented here, the nouveau driver has issues with some cards and Bumblebee.

/dev/dri/card0: failed to set DRM interface version 1.4: Permission denied

This can be worked around by appending the following lines to /etc/bumblebee/xorg.conf.nvidia (see here):

Section "Screen"
    Identifier "Default Screen"
    Device "DiscreteNvidia"
EndSection

ERROR: ld.so: object 'libdlfaker.so' from LD_PRELOAD cannot be preloaded: ignored

You probably want to start a 32-bit application with bumblebee on a 64-bit system. See the "For 32-bit..." section in #Installation. If the problem persists or if it is a 64-bit application, try using the primus bridge.

Fatal IO error 11 (Resource temporarily unavailable) on X server

Change KeepUnusedXServer in /etc/bumblebee/bumblebee.conf from false to true. This error occurs because your program forks into the background and Bumblebee does not know anything about it.
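The relevant setting lives in the [bumblebeed] section:

/etc/bumblebee/bumblebee.conf
[bumblebeed]
KeepUnusedXServer=true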

Video tearing

Video tearing is a somewhat common problem on Bumblebee. To fix it, you need to enable vsync. It should be enabled by default on the Intel card, but verify that from the Xorg logs. To check whether or not it is enabled for NVIDIA, run:

$ optirun nvidia-settings -c :8

X Server XVideo Settings -> Sync to VBlank and OpenGL Settings -> Sync to VBlank should both be enabled. The Intel card generally has less tearing, so use it for video playback. In particular, use VA-API for video decoding (e.g. mplayer-vaapi with the -vsync parameter).

Refer to the Intel[broken link: invalid section] article on how to fix tearing on the Intel card.

If it is still not fixed, try disabling compositing in your desktop environment. Also try disabling triple buffering.

Bumblebee cannot connect to socket

You might get something like:

$ optirun glxspheres64

or (for 32 bit):

$ optirun glxspheres32
[ 1648.179533] [ERROR]You've no permission to communicate with the Bumblebee daemon. Try adding yourself to the 'bumblebee' group
[ 1648.179628] [ERROR]Could not connect to bumblebee daemon - is it running?

If you are already in the bumblebee group ($ groups | grep bumblebee), you may try removing the socket /var/run/bumblebeed.socket.
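For example (the daemon recreates the socket on restart):

 # rm /var/run/bumblebeed.socket
 # systemctl restart bumblebeed

If you are not in the group yet, add yourself (replace user with your username) and log in again:

 # gpasswd -a user bumblebee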

Another reason for this error could be that you have not actually turned on both GPUs in your BIOS, and as a result the Bumblebee daemon is not running. Check the BIOS settings carefully: be sure that integrated graphics (may be abbreviated in the BIOS as something like igfx) has been enabled or set to auto, and that it is the primary GPU. Your display should be connected to the integrated graphics output, not to the discrete graphics card.

If you mistakenly had the display connected to the discrete graphics card while integrated graphics was disabled, you probably installed Bumblebee after first trying to run NVIDIA alone. In this case, be sure to remove the /etc/X11/xorg.conf or .../20-nvidia... configuration files. If Xorg is instructed to use NVIDIA in a configuration file, X will fail.

Running X.org from console after login (rootless X.org)

See Xorg#Rootless Xorg (v1.16).

Primusrun mouse delay/disable VSYNC

For primusrun, VSYNC is enabled by default; as a result, it can introduce mouse input lag and may slightly decrease performance. Test primusrun with VSYNC disabled:

$ vblank_mode=0 primusrun glxgears

If you prefer a dedicated command, install the optiprimeAUR package, which provides a script for the above command; this is equivalent to defining a shell alias such as alias optiprime="vblank_mode=0 primusrun". Usage:

$ optiprime glxgears

Comparison:

Command                   FPS   Score  Min FPS  Max FPS
optiprime unigine-heaven  31.5  793    22.3     54.8
primusrun unigine-heaven  31.4  792    18.7     54.2

Tested on an ASUS N550JV laptop with the benchmark application unigine-heavenAUR.

Note: To disable vertical synchronization system-wide, see Intel graphics#Disable Vertical Synchronization (VSYNC).

Primus issues under compositing window managers

Since compositing hurts performance, invoking primus when a compositing WM is active is not recommended.[2] If you need to use primus with compositing and see flickering or bad performance, synchronizing primus' display thread with the application's rendering thread may help:

$ PRIMUS_SYNC=1 primusrun ...

This makes primus display the previously rendered frame.

Problems with bumblebee after resuming from standby

On some systems, it can happen that the nvidia module is still loaded after resuming from standby. A possible solution is to install the acpi_call and acpi packages.

Optirun doesn't work, no debug output

Users are reporting that in some cases, even though Bumblebee was installed correctly, running

$ optirun glxgears -info

gives no output at all, and the glxgears window does not appear. Any program that needs 3D acceleration crashes:

$ optirun bash
$ glxgears
Segmentation fault (core dumped)

Apparently this is a bug in some versions of virtualgl. As a workaround, install primus and lib32-primus and use them instead:

$ primusrun glxspheres64
$ optirun -b primus glxspheres64

By default, primus locks the framerate to the refresh rate of your monitor (usually 60 fps); if needed, it can be unlocked by passing the vblank_mode=0 environment variable.

$ vblank_mode=0 primusrun glxspheres64

Usually there is no need to display more frames than your monitor can handle, but you might want to for benchmarking, or to get faster reactions in games (e.g., if a game needs 3 frames to react to a mouse movement, then with vblank_mode=0 the reaction is as quick as your system can render those frames, while with VSYNC enabled at 60 fps it always takes 3/60 = 1/20 of a second).

You might want to edit /etc/bumblebee/bumblebee.conf to use primus as the default bridge, as shown below. If after an update you want to check whether the bug has been fixed, just use optirun -b virtualgl.
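A minimal sketch of the relevant setting, which lives in the [optirun] section:

/etc/bumblebee/bumblebee.conf
[optirun]
Bridge=primus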

See this forum post for more information.

Broken power management with kernel 4.8

If you have a newer laptop (BIOS date 2015 or newer), then Linux 4.8 might break bbswitch (bbswitch issue 140) since bbswitch does not support the newer, recommended power management method. As a result, the dGPU may fail to power on, fail to power off or worse.

As a workaround, add pcie_port_pm=off to your Kernel parameters.

Alternatively, if you are only interested in power saving (and perhaps the use of external monitors), remove bbswitch and rely on Nouveau runtime power management (which supports the new method).

Lockup issue (lspci hangs)

See NVIDIA Optimus#Lockup issue (lspci hangs) for an issue that affects new laptops with a GTX 965M (or similar).

See also

Join us at #bumblebee at freenode.net.