NVIDIA


Summary
Information on installing, configuring and troubleshooting the proprietary NVIDIA Drivers.
Related
ATI Catalyst
ATI
Intel
Xorg

This article covers installing and configuring NVIDIA's proprietary graphics card driver. For information about the open-source drivers, see Nouveau.

Installing

These instructions are for those using the stock kernel26 package. For custom kernel setup, skip to the next subsection.

Tip: It is usually beneficial to install the NVIDIA driver through pacman rather than through the package provided by the NVIDIA site; this allows the driver to be updated when upgrading the system.

1. Visit NVIDIA's driver download site to find out the appropriate driver for a given card.

2. Install the driver for newer cards:

# pacman -S nvidia nvidia-utils
 
Users with older cards should instead install:
# pacman -S nvidia-96xx nvidia-96xx-utils
or:
# pacman -S nvidia-173xx nvidia-173xx-utils
Note: Currently the nvidia-173xx and nvidia-96xx drivers do not support Xorg release 1.8. Please see the news post here: [1].
Note: For the latest card models, it may be required to install the beta driver packages from the AUR, since the stable drivers may not support the newly introduced features.
Note: On 64-bit systems, for 32-bit programs to take advantage of nvidia-utils, you must also install the equivalent lib32 package (for example, lib32-nvidia-utils).
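For instance, on a 64-bit system running the stock nvidia driver, and assuming the multilib repository is enabled in pacman.conf, the 32-bit companion libraries can be installed with:

# pacman -S lib32-nvidia-utils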

Once the driver has been installed, continue to: #Configuring.

Alternate install: custom kernel

First of all, it is advantageous to know how the ABS system works by reading the Arch Build System article.

The following is a short tutorial for making a custom nvidia driver package using ABS:

Install ABS and generate the tree:

# pacman -S abs
# abs

As a standard user, make a temporary directory for creating the new package:

$ mkdir -p ~/devel/abs

Make a copy of the nvidia package directory:

$ cp -r /var/abs/extra/nvidia/ ~/devel/abs/

Go into the temporary nvidia build directory:

$ cd ~/devel/abs/nvidia

It is required to edit the nvidia.install and PKGBUILD files so that they contain the right kernel version variables.

While running the custom kernel, get the appropriate kernel and local version names:

$ uname -r
  1. In nvidia.install, replace the KERNEL_VERSION="2.6.xx-ARCH" variable with the custom kernel version, such as KERNEL_VERSION="2.6.22.6" or KERNEL_VERSION="2.6.22.6-custom", depending on the kernel's version and the local version's text/numbers. Do this for all instances of the version number within this file.
  2. In PKGBUILD, change the _kernver='2.6.xx-ARCH' variable to match the appropriate version, as above.
  3. If more than one kernel is installed in parallel (such as a custom kernel alongside the default -ARCH kernel), change the pkgname=nvidia variable in the PKGBUILD to a unique identifier, such as nvidia-2622 or nvidia-custom. This will allow both kernels to use the nvidia module, since the custom nvidia module has a different package name and will not overwrite the original. See the example after this list.
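As a rough sketch, assuming uname -r reports 2.6.22.6-custom (the version string is only an example), the edited variables would end up looking like this:

In nvidia.install:
KERNEL_VERSION="2.6.22.6-custom"

In PKGBUILD:
pkgname=nvidia-custom
_kernver='2.6.22.6-custom'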

Then do:

$ makepkg -ci

The -c operand tells makepkg to clean leftover files after building the nvidia driver, whereas -i specifies that makepkg should automatically run pacman to install the resulting package.

Configuring

After installing the driver, it may not be necessary to create an Xorg server configuration file; you can run a test to see whether the Xorg server functions correctly without one. However, it may be required to create an /etc/X11/xorg.conf configuration file in order to adjust various settings. This configuration can be generated by the NVIDIA Xorg configuration tool, or it can be created manually. If created manually, it can be a minimal configuration (in the sense that it will only pass the basic options to the Xorg server), or it can include a number of settings that bypass Xorg's auto-discovered or pre-configured options.
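As a minimal sketch of such a test, assuming no display manager is running, move any existing configuration file out of the way and start X manually:

# mv /etc/X11/xorg.conf /etc/X11/xorg.conf.bak
$ startx

The mv step is only needed if the file already exists; if X comes up correctly, no configuration file is required.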

Automatic configuration

The NVIDIA package includes an automatic configuration tool to create an Xorg server configuration file (/etc/X11/xorg.conf), which can be run with:

nvidia-xconfig

This command will auto-detect and create (or edit, if already present) the /etc/X11/xorg.conf configuration according to the present hardware.

Warning: This may still not work properly with Xorg server 1.8.

Automatic configuration with multiple monitors

The NVIDIA package provides Twinview. This tool will automatically configure all of the monitors connected to your video card. Note that this only works for multiple monitors on a single card. To configure the Xorg server with Twinview, run:

nvidia-xconfig --twinview

Minimal configuration

To create a basic /etc/X11/xorg.conf, as root:

touch /etc/X11/xorg.conf

And add the driver:

Section "Device"
   Identifier     "Device0"
   Driver         "nvidia"
   VendorName     "NVIDIA Corporation"
EndSection
Tip: In order to have full multimedia functionality, make sure xorg-input-drivers is installed.

Tweaking

GUI: nvidia-settings

The NVIDIA package includes the nvidia-settings program, which allows adjustment of several additional settings.

For the settings to be loaded on login, run this command from the terminal:

$ nvidia-settings --load-config-only

Or add it to the desktop environment's auto-startup method.
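For example, when starting X manually with startx, a line like the following could be placed in ~/.xinitrc before the session command (the session shown is only a placeholder; use your own):

nvidia-settings --load-config-only &
exec openbox-session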

Tip: On rare occasions the ~/.nvidia-settings-rc file may become corrupt. If this happens, the Xorg server may crash and the file will have to be deleted to fix the issue.

Advanced: xorg.conf

Edit /etc/X11/xorg.conf and add the relevant option to the correct section. NVIDIA tests and ships the drivers with the recommended settings, so note that some edits may cause instability, tearing, or other problems. Since not all options may work for a given system, consider backing up /etc/X11/xorg.conf before making any edits. The Xorg server will need to be restarted before any changes are applied.
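For instance, a backup copy could be kept alongside the original before experimenting (the backup file name is only a suggestion):

# cp /etc/X11/xorg.conf /etc/X11/xorg.conf.backup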

Enabling desktop composition

As of NVIDIA driver version 180.44, support for GLX with the Damage and Composite X extensions is enabled by default. Refer to Composite for detailed instructions.

Disabling the logo on startup

Add the "NoLogo" option under section "Device":

Option "NoLogo" "True"

Enabling hardware acceleration

Add the "RenderAccel" option under section "Device":

Option "RenderAccel" "True"
Note: RenderAccel is enabled by default since driver version 97.46.xx.

Overriding monitor detection

The "ConnectedMonitor" option under section "Device" allows overriding monitor detection when the X server starts, which may save a significant amount of time at start up. The available options are: "CRT" for analog connections, "DFP" for digital monitors, and "TV" for televisions.

The following statement forces the NVIDIA driver to bypass startup checks and recognize the monitor as DFP:

Option "ConnectedMonitor" "DFP"
Note: Use "CRT" for all analog 15-pin VGA connections, even if the display is a flat panel. "DFP" is intended for DVI digital connections only.

Enabling triple buffering

Enable the use of triple buffering by adding the "TripleBuffer" option under section "Device":

Option "TripleBuffer" "True"

Use this option if the graphics card has plenty of RAM (128 MB or more). The setting only takes effect when syncing to vblank is enabled, one of the options featured in nvidia-settings.

Note: This option may introduce full-screen tearing

Enabling backing store

This option is used to enable the server's support for backing store, a mechanism by which pixel data for occluded window regions is remembered by the server, thereby alleviating the need to send expose events to X clients when the data needs to be redisplayed. BackingStore is not bound to the NVIDIA drivers but to the X server itself; ATI users would benefit from this option as well.

Add under section "Device":

Option "BackingStore" "True"

Using OS-level events

Taken from the NVIDIA driver's README file: "[...] Use OS-level events to efficiently notify X when a client has performed direct rendering to a window that needs to be composited." It may help improve performance, but it is currently incompatible with SLI and Multi-GPU modes.

Add under section "Device":

Option "DamageEvents" "True"
Note: This option is enabled by default in newer driver versions.

Enabling power saving

Add under section Monitor:

Option "DPMS" "True"

Forcing Powermizer performance level (for laptops)

Add under section "Device":

# Force Powermizer to a certain level at all times
# level 0x1=highest
# level 0x2=med
# level 0x3=lowest
# AC settings:
Option "RegistryDwords" "PowerMizerLevelAC=0x3"
# Battery settings:
Option "RegistryDwords" "PowerMizerLevel=0x3"

Settings are better explained in NVIDIA Driver for X.org:Performance and Power Saving Hints.

Letting the GPU set its own performance level based on temperature

Add under section "Device":

Option "RegistryDwords" "PerfLevelSrc=0x3333"

Disable vblank interrupts (for laptops)

When running the interrupt detection utility powertop, it can be observed that the NVIDIA driver generates an interrupt for every vblank. To disable this, place the following in the "Device" section:

Option "OnDemandVBlankInterrupts" "True"

This will reduce interrupts to about one or two per second.

Enabling overclocking

To enable overclocking, place the following line in the "Device" section:

Option "Coolbits" "1"

This will enable on-the-fly overclocking by running nvidia-settings inside X.

Warning: Overclocking may damage hardware. The authors of this page take no responsibility for any damage to equipment operated outside of the specifications set by the manufacturer.

Enable screen rotation through XRandR

Place the following line in the "Device" section:

Option "RandRRotation" "True"

After restarting Xorg, type:

xrandr -o left

The screen should now be rotated. To restore, type:

xrandr -o normal
Note: Editing xorg.conf may be unnecessary, since screen rotation should be enabled by default; ideally, use the respective DE tools, such as System Settings in KDE.

Tips and tricks

Using TV-out

A good article on the subject can be found here

X with a TV (DFP) as the only display

The X server falls back to CRT-0 if no monitor is automatically detected. This can be a problem when using a DVI connected TV as the main display, and X is started while the TV is turned off or otherwise disconnected.

To force nvidia to use DFP, store a copy of the EDID somewhere in the filesystem so that X can parse the file instead of reading EDID from the TV/DFP.

To acquire the EDID, start nvidia-settings. It will show some information in tree format; ignore the rest of the settings for now and select the GPU (the corresponding entry should be titled "GPU-0" or similar), click the "DFP" section (again, "DFP-0" or similar), click the "Acquire EDID" button and store the file somewhere, for example /etc/X11/dfp0.edid.

Edit xorg.conf by adding to the "Device" section:

Option "ConnectedMonitor" "DFP"
Option "CustomEDID" "DFP-0:/etc/X11/dfp0.edid"

The "ConnectedMonitor" option forces the driver to recognize the DFP as if it were connected. The "CustomEDID" provides EDID data for the device, meaning that it will start up just as if the TV/DFP was connected during X the process.

This way, one can automatically start a display manager at boot time and still have a working and properly configured X screen by the time the TV gets powered on.

Displaying GPU temperature in the shell

Method 1 - nvidia-settings

Note: This method requires that you are using X. Use Method 2 or Method 3 if you are not. Also note that Method 3 currently does not work with newer NVIDIA cards such as the G210/220, as well as embedded GPUs such as the Zotac IONITX's 8800GS.

To display the GPU temp in the shell, use nvidia-settings as follows:

$ nvidia-settings -q gpucoretemp

This will output something similar to the following:

Attribute 'GPUCoreTemp' (hostname:0.0): 41.
'GPUCoreTemp' is an integer attribute.
'GPUCoreTemp' is a read-only attribute.
'GPUCoreTemp' can use the following target types: X Screen, GPU.

The GPU temperature of this board is 41 C.

In order to get just the temperature for use in utils such as rrdtool or conky, among others:

$ nvidia-settings -q gpucoretemp -t
41

Method 2 - nvidia-smi

Note: The application used in this method is included with the standard NVIDIA driver package, but as of 06-Jun-2010 it is not included in the Arch packages. A [feature request] has been made to include it in the nvidia-utils package.

Use nvidia-smi, which can read temperatures directly from the GPU without the need to use X at all. This is important for the small group of users who do not have X running on their boxes, perhaps because the box is headless and running server applications. To display the GPU temperature in the shell, use nvidia-smi as follows:

$ nvidia-smi -a

This should output something similar to the following:

==============NVSMI LOG==============


Timestamp			: Sun Jun  6 06:17:53 2010

GPU 0:
	Product Name		: ION
	PCI ID			: 87d10de
	Temperature		: 35 C

In order to get just the temperature for use in utils such as rrdtool or conky, among others:

$ nvidia-smi -a | grep Temp | cut -c17-18
35

Reference: http://www.question-defense.com/2010/03/22/gpu-linux-shell-temp-get-nvidia-gpu-temperatures-via-linux-cli

Method 3 - nvclock

Use nvclock which is available from the [extra] repo. Note that nvclock cannot access thermal sensors on newer nvidia cards such as the G210/220.
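If it is not already present, it can be installed in the usual way:

# pacman -S nvclock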

Troubleshooting

Gaming using Twinview

In case you want to play fullscreen games when using Twinview, you will notice that games recognize the two screens as being one big screen. While this is technically correct (the virtual X screen really is the size of your screens combined), you probably don't want to play on both screens at the same time.

To correct this behavior for SDL, try:

export SDL_VIDEO_FULLSCREEN_HEAD=1

For OpenGL, add the appropriate MetaModes to your xorg.conf in section "Screen" and restart X:

Option "Metamodes" "1680x1050,1680x1050; 1280x1024,1280x1024; 1680x1050,NULL;  1280x1024,NULL;"

Another method that may either work alone or in conjunction with those mentioned above is starting games in a separate X server, as sketched below.
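As a rough sketch of that approach, assuming a hypothetical game binary at /usr/bin/somegame (substitute the real path), the game can be launched as the sole client of a second X server on display :1:

$ xinit /usr/bin/somegame -- :1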

Old Xorg Settings

If upgrading from an old installation, please remove old /usr/X11R6/ paths, as they can cause trouble during installation.

Corrupted screen: "Six screens" issue

For some users with GeForce GT 100M cards, the screen appears corrupted after X starts: it is divided into 6 sections with a resolution limited to 640x480.

To solve this problem, enable the "NoTotalSizeCheck" mode validation option in section "Device":

Section "Device"
 ...
 Option "ModeValidation" "NoTotalSizeCheck"
 ...
EndSection

'/dev/nvidiactl' errors

Trying to start an OpenGL application might result in errors such as:

Error: Could not open /dev/nvidiactl because the permissions are too
restrictive. Please see the FREQUENTLY ASKED QUESTIONS 
section of /usr/share/doc/NVIDIA_GLX-1.0/README 
for steps to correct.

Solve this by adding the appropriate user to the "video" group and logging in again:

# gpasswd -a username video

32 bit applications do not start

On 64-bit systems, installing the equivalent lib32 package (for example, lib32-nvidia-utils) in the same version as the 64-bit driver fixes the issue.

Errors after updating the kernel

Every time the kernel26 package is updated, the nvidia package must be reinstalled and the system rebooted afterwards.
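For example, after a kernel26 upgrade on a system using the stock driver, the module can be restored before rebooting with:

# pacman -S nvidia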

Crashing in general

If the driver fails to build or crashes because the currently installed compiler version differs from the one used to compile the kernel, the compiler version check can be bypassed with:

export IGNORE_CC_MISMATCH=1

More information about trouble-shooting the driver can be found in the NVIDIA forums.

Bad performance after installing a new driver version

If FPS have dropped in comparison with older drivers, first check if direct rendering is turned on:

glxinfo | grep direct

If the command prints:

direct rendering: No 

then that could be the cause of the sudden FPS drop.

A possible solution is to regress to the previously installed driver version and reboot afterwards.

Laptops: X hangs on login/out, worked around with Ctrl+Alt+Backspace

If, while using the legacy NVIDIA drivers, Xorg hangs on login and logout (particularly with an odd screen split into two black and white/gray pieces), but logging in is still possible via Ctrl+Alt+Backspace (or whatever the new "kill X" keybind is), try adding this to the module options configuration (e.g. a file in /etc/modprobe.d/):

options nvidia NVreg_Mobile=1

One user had luck with this instead, but it makes performance drop significantly for others:

options nvidia NVreg_DeviceFileUID=0 NVreg_DeviceFileGID=33 NVreg_DeviceFileMode=0660 NVreg_SoftEDIDs=0 NVreg_Mobile=1

Note that NVreg_Mobile needs to be changed according to the laptop:

  • 1 for Dell laptops.
  • 2 for non-Compal Toshiba laptops.
  • 3 for other laptops.
  • 4 for Compal Toshiba laptops.
  • 5 for Gateway laptops.

See NVIDIA Driver's Readme:Appendix K for more information.

Refresh rate not detected properly by XRandR-dependent utilities

The XRandR X extension is not presently aware of multiple display devices on a single X screen; it only sees the MetaMode bounding box, which may contain one or more actual modes. This means that if multiple MetaModes have the same bounding box, XRandR will not be able to distinguish between them.

In order to distinguish between MetaModes, the NVIDIA driver must make each one appear unique to XRandR. Presently, the NVIDIA driver accomplishes this by using the refresh rate as a unique identifier.

Use nvidia-settings to query the actual refresh rate on each display device.
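A hedged example of such a query (assuming the RefreshRate attribute is available in the installed driver version):

$ nvidia-settings -q RefreshRate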

The XRandR extension is currently being redesigned by the X.Org community, so the refresh rate workaround may be removed at some point in the future.

This workaround can also be disabled by setting the "DynamicTwinView" X configuration option to "false", which will disable NV-CONTROL support for manipulating MetaModes, but will cause the XRandR and XF86VidMode visible refresh rate to be accurate.

External links