Bumblebee

From ArchWiki

Revision as of 03:20, 14 July 2011
Warning: Bumblebee is a work in progress and may not work properly on your machine
Note: Please report bugs at Martin Juhl's GitHub tracker as described in its README.

Bumblebee is a solution for Nvidia Optimus hybrid-graphics technology that allows the dedicated graphics card to be used for rendering.

About Bumblebee

Optimus Technology is a hybrid graphics implementation without a hardware multiplexer. The integrated GPU manages the display, while the dedicated GPU handles the most demanding rendering and ships the finished work to the integrated GPU to be displayed. When the laptop runs on battery power, the dedicated GPU is turned off to save power and extend battery life.

Bumblebee is a software implementation, based on VirtualGL and the Nvidia proprietary kernel module, that makes it possible to use the dedicated GPU even though it is not physically connected to the screen.

This article will describe how to configure Bumblebee.

Installation

Note: Before installing, make sure you uninstall any previous version of Bumblebee. This is not strictly necessary, but it is safer that way.

Install the bumblebee package from the AUR. This package depends on other packages from the AUR:

  • dkms-nvidia: builds the Nvidia kernel module dynamically through DKMS.
  • nvidia-utils-bumblebee: the Nvidia graphics libraries, installed in a different directory so they can be used simultaneously with libgl.
  • virtualgl: sends rendering commands to a server-side 3D graphics card.

Setup

In order to make Bumblebee functional you will need to configure a second X server, load the nvidia kernel module, and run the Bumblebee daemon.

This section describes the use of the proprietary Nvidia module. The Nouveau module can be used instead, but it is untested, as its 3D acceleration is still not supported. You can test it by changing the driver in the X server configuration and loading the appropriate libGL in the optirun script.

Load Kernel Module

Bumblebee needs the proprietary Nvidia kernel module to run properly. You need to unload the Nouveau kernel module first. To do so run in a terminal:

# rmmod nouveau

To disable the Nouveau module at boot, add the following line to a file under /etc/modprobe.d/ (for example /etc/modprobe.d/modprobe.conf):

blacklist nouveau

Now load the Nvidia module running this:

# modprobe nvidia

To check for success of loading the kernel module, check the output of this command:

$ lspci -k | grep nvidia

It should be something like this:

Kernel driver in use: nvidia
Note: If you experience trouble loading the nvidia kernel module, try adding an options line for it to your modprobe configuration under /etc/modprobe.d/.

Setup X Server

After installation a /etc/X11/xorg.conf.nvidia file is created with the minimal device configuration. Check for this file with

$ ls /etc/X11

If you instead see a file called xorg.conf.nvidia.pacnew, copy it to /etc/X11/xorg.conf.nvidia with this command

# cp /etc/X11/xorg.conf.nvidia.pacnew /etc/X11/xorg.conf.nvidia

In this file you must specify the PCI bus address of the Nvidia card. To get it run in a terminal:

$ lspci | grep VGA

This will give you something like this:

00:02.0 VGA compatible controller: Intel Corporation Core Processor Integrated Graphics Controller (rev 18)
01:00.0 VGA compatible controller: nVidia Corporation GT218 [GeForce 310M] (rev a2)

Take note of the PCI address of the nVidia card (01:00.0 in this case) and set it in the "BusID" option under the "Device" section of /etc/X11/xorg.conf.nvidia.
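As an illustration, the relevant part of /etc/X11/xorg.conf.nvidia might look like the sketch below. The Identifier name is made up; only the BusID value comes from the lspci output above, rewritten as the surrounding notes describe.

```
Section "Device"
    Identifier "DiscreteNvidia"
    Driver     "nvidia"
    BusID      "PCI:01:00:0"
EndSection
```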

Note: You must replace any dot (.) with a colon (:) for the X server to understand the BusID
Note: If the Bumblebee X server does not recognize the screen, try setting the option ConnectedMonitor to "CRT-0" or "DFP-0"
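The dot-to-colon rewrite from the first note can be sketched as a small shell helper; the function name to_busid is hypothetical and only illustrates the transformation.

```shell
# Convert an lspci slot such as "01:00.0" into the "PCI:01:00:0" form
# expected by the BusID option (dots become colons, per the note above).
to_busid() {
  printf 'PCI:%s\n' "$(printf '%s' "$1" | tr '.' ':')"
}

to_busid "01:00.0"   # prints PCI:01:00:0
```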

Start Bumblebee Daemon

Bumblebee provides a daemon to start the second X server. To start it, simply run:

# rc.d start bumblebee

Start VirtualGL Client

Note: This step may be unnecessary; try skipping it the first time you test Bumblebee

Run this command in a terminal:

$ vglclient -gl

This command should be run as a normal user after the X server starts. In some cases, properly setting the DISPLAY environment variable is enough to run the vglrun command.

Test Bumblebee

You can test Bumblebee by comparing the output of these two commands:

$ glxgears

And:

$ optirun glxgears
Note: You will need the 'mesa-demos' package to run glxgears

Usage

To launch an application using the dedicated graphics card:

$ optirun <application> [application-parameters]

If you want to run a 32-bit application on a 64-bit system you may use instead:

$ optirun32 <application> [application-parameters]

Configuration

You may configure VGL variables in the configuration file shipped with Bumblebee, which sets the defaults.

You can try the different compression methods by adding the '-c <compress-method>' option to the vglrun command and testing which suits you best:

$ vglrun -ld /usr/lib/nvidia-current -c jpeg glxgears
$ vglrun -ld /usr/lib/nvidia-current -c proxy glxgears
$ vglrun -ld /usr/lib/nvidia-current -c rgb glxgears
$ vglrun -ld /usr/lib/nvidia-current -c yuv glxgears
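The four invocations above follow a single pattern. As a convenience, a loop like the following only prints the commands (so it is safe to run anywhere); paste the one you want to try. Paths and method names are taken from the examples above.

```shell
# Print one vglrun invocation per compression method listed above.
for method in jpeg proxy rgb yuv; do
  echo "vglrun -ld /usr/lib/nvidia-current -c $method glxgears"
done
```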

You can then set the method you prefer in the VGL_COMPRESS environment variable in that configuration file.

Note: The uncompressed methods proxy, xv and rgb show lower fps in glxgears, but they perform better in some applications

Autostart Bumblebee

If you want Bumblebee to start at boot, do the following. Add the nvidia module to the "MODULES" array in your /etc/rc.conf:

MODULES=(... nvidia ...)

Add "bumblebee" to the "DAEMONS" array in your /etc/rc.conf:

DAEMONS=( ... dbus bumblebee ...)
Note: It is safe to background bumblebee, but it is recommended to run it after dbus


Nvidia Card ON/OFF Scripts

Warning: This feature is highly experimental and can lock the system on suspend/hibernation/shutdown if the discrete card is not turned back ON beforehand
Note: These scripts need the acpi_call kernel module to work.

First, unload the Nvidia kernel module. This is necessary to avoid a system freeze after turning off the discrete card. If you receive an error that the Nvidia module is in use, stop the Bumblebee daemon before unloading the module:

# rc.d stop bumblebee
# modprobe -r nvidia

Now open a terminal and run this command to check your battery rate (default update interval is 2 seconds):

$ watch grep rate /proc/acpi/battery/BAT0/state

You can turn the dedicated card on and off manually by calling methods of the acpi_call module. First, test which method call turns off the discrete card:

$ /usr/share/acpi_call/test_off.sh 

This should output something like:

Trying \_SB.PCI0.P0P1.VGA._OFF: failed
Trying \_SB.PCI0.P0P2.VGA._OFF: failed
Trying \_SB_.PCI0.OVGA.ATPX: failed
Trying \_SB_.PCI0.OVGA.XTPX: failed
Trying \_SB.PCI0.P0P3.PEGP._OFF: failed
Trying \_SB.PCI0.P0P2.PEGP._OFF: failed
Trying \_SB.PCI0.P0P1.PEGP._OFF: works!

The method that reports "works!" is the one you need; use it to turn off your discrete graphics card with this command:

# echo "\_SB.PCI0.P0P1.PEGP._OFF" > /proc/acpi/call
Warning: This may cause unexpected issues, and in some cases power consumption increases because the fans run at full speed. A reboot fixes this problem most of the time

You should notice a decrease in power usage of about 200 to 300 mA. To turn the discrete card back on, call the same method with OFF replaced by ON:

# echo "\_SB.PCI0.P0P1.PEGP._ON" > /proc/acpi/call
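The two echo commands differ only in their ON/OFF suffix, so the call string can be built by a tiny helper. The function name gpu_call_string is hypothetical, and the ACPI path below is this page's example; substitute the path that test_off.sh reported as working on your machine.

```shell
# Build the ACPI call string for turning the discrete card ON or OFF.
# The method path is the example from this page; use your own "works!" path.
gpu_call_string() {
  printf '\\_SB.PCI0.P0P1.PEGP._%s\n' "$1"
}

# As root, write the string to /proc/acpi/call, e.g.:
#   echo "$(gpu_call_string OFF)" > /proc/acpi/call
gpu_call_string ON   # prints \_SB.PCI0.P0P1.PEGP._ON
```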

Then you can reload the Nvidia kernel module:

# modprobe nvidia
Warning: In some cases turning the discrete card back on IS NOT trivial (more complicated methods must be called). If modprobe nvidia throws an error, your case is non-trivial. Normally the card will be back on after a system reboot

If you completed these steps, you can use power management with Bumblebee.

Note: Scripts for some laptops known to work are available under the /usr/share/bumblebee/examples directory; for the moment, the implementation is up to you. A script meant for a particular laptop model may not work for you, even on the same model

See also