Bumblebee (Italiano)


Note: This article is being translated. For now, follow the instructions in the English version.
Warning: Bumblebee is a work in progress and may not work properly on your machine.
Note: Please report bugs at the Bumblebee-Project GitHub tracker as described in its Wiki.

Bumblebee is a solution for Nvidia Optimus hybrid graphics that allows the dedicated graphics card to be used for rendering. It was started by Martin Juhl.

About Bumblebee

Optimus Technology is a hybrid graphics implementation without a hardware multiplexer. The integrated GPU manages the display, while the dedicated GPU handles the most demanding rendering and ships the result back to the integrated GPU for display. When the laptop runs on battery, the dedicated GPU is turned off to save power and extend battery life.

Bumblebee is a software implementation based on VirtualGL and a kernel driver, which together make it possible to use the dedicated GPU even though it is not physically connected to the screen.

How it works

Bumblebee mimics the behaviour of the Optimus technology: it uses the dedicated GPU for rendering when needed and powers it down when not in use. Current releases only support on-demand rendering; power management is a work in progress.

The dedicated Nvidia card is driven by a separate X server connected to a "fake" screen (the screen is configured but never used). VirtualGL then talks to this second server as if it were a remote server. Consequently, setting Bumblebee up involves configuring the kernel driver, the second X server and a daemon.
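To make the mechanism concrete, this is roughly what 'optirun' does behind the scenes. The command below is a simplified sketch, not Bumblebee's exact implementation: the display number :8 is an assumption for illustration, and vglrun is VirtualGL's wrapper that redirects an application's 3D rendering to another X display:

$ vglrun -d :8 glxgears

Here glxgears is rendered on display :8 (the Nvidia card) while its frames are read back and shown in a window on the main display.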

Using Nvidia driver


Versions >= 2.3 use a daemon to start/stop the X server, but do not control the card's power (see #Power Management below).

Installation

AUR package: bumblebee

Note: If you installed Bumblebee from the GitHub repository using the installer, run the uninstaller before installing the AUR package.

To render 32-bit applications with VirtualGL you will need the virtualgl32 and lib32-nvidia-utils-bumblebee packages.

Setup

In order to make Bumblebee functional you will need to configure a second X server, load the nvidia kernel module and run the Bumblebee daemon. Most of these steps are performed automatically at installation time.

Load Kernel Module

In order to run Bumblebee with the proprietary Nvidia kernel module, you first need to unload the Nouveau kernel module. To do so, run in a terminal:

# rmmod nouveau

To keep the Nouveau module from loading at boot, add the following line to a modprobe configuration file such as /etc/modprobe.d/modprobe.conf:

blacklist nouveau

Now load the Nvidia module by running:

# modprobe nvidia

To verify that the kernel module loaded successfully, check the output of this command:

$ lsmod | grep nvidia
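If the module is loaded, the command prints a line similar to the following (the module size and use count are illustrative and will vary):

nvidia    10898616  0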

If you want the nvidia module to be loaded at boot, add it to the MODULES array in /etc/rc.conf:

MODULES=(... nvidia ...)

Setup X Server

The installation should take care of recognizing your graphics card and its PCI BusID. If you notice a warning about this during installation, follow these steps.

After installation, a configuration file for the second X server is created with a minimal device configuration. In this file you must specify the PCI bus address of the Nvidia card. To get it, run in a terminal:

$ lspci -d10de: -nn | grep '030[02]'

This will print something like the following:

01:00.0 VGA compatible controller [0300]: nVidia Corporation GT218 [GeForce 310M] [10de:0a75] (rev a2)

Take note of the PCI address of the nVidia card (01:00.0 in this case) and set the "BusID" option under the "Device" section of the configuration file (see the sketch below).

Note: You must replace the dot (.) with a colon (:) for the X server to understand the BusID, so 01:00.0 becomes PCI:01:00:0
Note: If the Bumblebee X server does not recognize the screen, try setting the ConnectedMonitor option to "CRT-0" or "DFP-0"
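As an illustration only, the resulting "Device" section might look like the following. The identifier name is an assumption, the BusID must come from your own lspci output, and the ConnectedMonitor line is only needed if you hit the screen-recognition problem described in the note above:

Section "Device"
    Identifier     "Device1"
    Driver         "nvidia"
    BusID          "PCI:01:00:0"
    Option         "ConnectedMonitor" "DFP-0"
EndSection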

Then look for the "Files" section and check that the path to the nvidia X module is correct (see the sketch below).
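As a sketch only, assuming the proprietary X module is installed under /usr/lib/nvidia (the actual path depends on how the package lays out its files), the section would resemble:

Section "Files"
    ModulePath     "/usr/lib/nvidia/xorg,/usr/lib/xorg/modules"
EndSection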

Giving permission to use Bumblebee

Permission to use 'optirun' is granted to all members of the 'bumblebee' group, so you must add yourself (and any other users wishing to use Bumblebee) to that group:

# usermod -a -G bumblebee <user>

where <user> is the login name of the user to be added. Then log off and on again to apply the group changes.
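After logging back in, you can confirm that the change took effect; the output should include 'bumblebee':

$ groups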

Start Bumblebee Daemon

Bumblebee provides a daemon that starts the second X server. To start it, simply run:

# rc.d start bumblebee

To start the daemon at boot, add it to the DAEMONS array in /etc/rc.conf:

DAEMONS=(... @bumblebee)

Test Bumblebee

You can test Bumblebee by comparing the output of these two commands:

$ glxgears

And:

$ optirun glxgears
Note: You will need the 'mesa-demos' package to run glxgears. This is not a benchmark; it only indicates that the dedicated GPU is rendering.
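A more direct check is to compare the OpenGL renderer string with and without 'optirun' (glxinfo also ships with 'mesa-demos'). The second command should report the Nvidia card:

$ glxinfo | grep "OpenGL renderer"
$ optirun glxinfo | grep "OpenGL renderer"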

Configuration

You may configure some variables in the Bumblebee configuration file. The relevant variables (VGL_COMPRESS, STOP_SERVICE_ON_EXIT, X_SERVER_TIMEOUT and FALLBACK_START) are described in the sections below.

Compression and VGL Transport

Compression and transport determine how frames are compressed on the server side (the Bumblebee X server), transported to the client side (the main X server) and uncompressed for display in the application window. The choice mostly affects CPU/GPU usage, since the local transport itself is not bandwidth-limited. Compressed methods (such as jpeg) load the CPU the most but keep GPU load to the minimum necessary; uncompressed methods load the GPU the most while keeping CPU load as low as possible.

You can try the different compression methods by adding '-c <compression-method>' to the 'optirun' command and test which suits you best:

$ optirun -c jpeg glxgears
$ optirun -c proxy glxgears
$ optirun -c rgb glxgears
$ optirun -c yuv glxgears

You can then set the method you prefer in the 'VGL_COMPRESS' variable of the Bumblebee configuration file to use it as the default.

Note: The uncompressed methods proxy and xv show fewer FPS in glxgears but perform better in some applications

Server Behavior Configuration

There are three variables that control how the server behaves when 'optirun' is called:

STOP_SERVICE_ON_EXIT
X_SERVER_TIMEOUT
FALLBACK_START

The X server always takes a moment to start when first called; after that it should be up within a second or so. If subsequent calls still take too long, you can set STOP_SERVICE_ON_EXIT to 'N' so that the Bumblebee X server is not stopped when the last 'optirun' client disconnects.

X_SERVER_TIMEOUT controls how long the daemon waits for the X server to become ready. If your X server takes a while to start you may want to increase this value; otherwise starting the server might fail.

If you want applications to fall back to the integrated GPU when the second X server is not available, set FALLBACK_START to 'Y'. The usual warning message is still printed when the server is unavailable, but the program runs anyway.
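Putting these together, a hand-tuned configuration might contain lines like the following (the values are illustrative assumptions, not recommended defaults):

STOP_SERVICE_ON_EXIT=N
X_SERVER_TIMEOUT=120
FALLBACK_START=Y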

Usage

To launch an application using the dedicated graphics card:

$ optirun [options] <application> [application-parameters]

For a list of 'optirun' options, run in a terminal:

$ optirun --help

If you want to run a 32-bit application on a 64-bit system, you will need the appropriate 'lib32' packages (see #Installation above).

Power Management

Note: This feature has been dropped until a safe and complete solution is found. However, a framework enabling the prior ACPI methods will be included in future versions

The goal of power management is to turn the discrete card off when no application is using it, and to turn it back on when it is needed. Currently the card can only be used on demand; no automatic switching is supported.

A little note on why

Power management has temporarily been removed because Bumblebee made the wrong ACPI calls to turn the card off and on. This had several side effects:

  • "FATAL: Error inserting nvidia (.../nvidia.ko): No such device" errors on loading the nvidia module
  • Hangs/freezes during booting, shutdown or suspend
  • BIOS settings which appears to be modified
  • Other operating systems not recognizing the graphics card anymore

In a future release, power management might be added back after further research.

As one of the developers said about the present state of power management in Bumblebee:

Lekensteyn

Okay, let's assume a building with dirty windows. You'd like to see more sun and therefore ask a worker to hire someone to get the job done. The worker has never learnt how to clean a dirty window properly, but guesses that a rock might be the right way to do it. He asks a kid to try cleaning the window with a rock. Now, different things may happen:

  1. the window gets scratched and becomes even darker
  2. the window breaks and the sun can shine

In the first case, things have gotten worse. That's the "Module not found" horror and the suspend/lock-up issues. In the second case, you won't suspect that something is wrong because you achieved your goal: the sunshine is better. Anyway, the right solution is obviously cleaning the window with a cloth, but not before the window is repaired, which is our current task.

The ACPI call methods are the rocks and "you" is you. The kid is just a messenger, the acpi_call module. The worker is the Bumblebee developer team.

Troubleshooting

Please report bugs at the Bumblebee-Project GitHub tracker, as noted at the top of this page.

VirtualGL can't open display

If you receive a message like this:

[VGL] ERROR: Could not open display :XX

it means the second X server is not running or failed to start. To troubleshoot this, check /var/log/Xorg.XX.log, where 'XX' is the number of the display used by Bumblebee, and also look for messages in the Bumblebee daemon's log.

Here are some things you can try and check:

  • Check which kernel module is bound to the nvidia card with "lspci -k" (see the example after this list)
  • Check the Bumblebee X configuration file and make sure the "BusID" option points to the correct PCI address.
  • Change the "ConnectedMonitor" option to "DFP-0" or "CRT-0" (or "DFP,CRT"). This must be a valid output on your laptop other than "LVDS"
  • If the X server starts but takes a long time to become available, try setting X_SERVER_TIMEOUT to a higher value.
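For the first check, output like the following shows which driver is actually in use (illustrative only; your device, revision and driver lines will differ):

$ lspci -k -d 10de:
01:00.0 VGA compatible controller: nVidia Corporation GT218 [GeForce 310M] (rev a2)
        Kernel driver in use: nvidia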

Using Nouveau driver


Work in progress; Nouveau support will ship by default in future releases.

See also